
Blog: William McKnight

William McKnight

Hello and welcome to my blog!

I will periodically be sharing my thoughts and observations on information management here in the blog. I am passionate about the effective creation, management and distribution of information for the benefit of company goals, and I'm thrilled to be a part of my clients' growth plans and to connect what the industry provides to those goals. I have played many roles, but the perspective I come from is benefit to the end client. I hope these entries can be of some modest benefit to that goal. Please share your thoughts and input on the topics.

About the author

William is the president of McKnight Consulting Group, a firm focused on delivering business value and solving business challenges utilizing proven, streamlined approaches in data warehousing, master data management and business intelligence, all with a focus on data quality and scalable architectures. William functions as strategist, information architect and program manager for complex, high-volume, full life-cycle implementations worldwide. William is a Southwest Entrepreneur of the Year finalist, a frequent best-practices judge, has authored hundreds of articles and white papers, and given hundreds of international keynotes and public seminars. His team's implementations from both IT and consultant positions have won Best Practices awards. He is a former IT Vice President of a Fortune company, a former software engineer, and holds an MBA. William is author of the book 90 Days to Success in Consulting. Contact William at wmcknight@mcknightcg.com.

Editor's Note: More articles and resources are available in William's BeyeNETWORK Expert Channel. Be sure to visit today!

Recently in Microsoft Category

This week, at the PASS Summit, Microsoft unveiled its inevitable "big data" strategy.  The world of big data is the new uncharted land in information management, and the big vendors are jumping on board.  "New economy" giants like eBay, Twitter, Facebook and Google are the early adopters - and many even built the big data tools that everything else is based on.


It would be easy to dismiss big data as a Valley-only phenomenon, but you shouldn't.  Microsoft's information management tools serve perhaps the widest-ranging set of clients anywhere.  They've either made their move to "keep up with the Joneses" (Oracle had some big data announcements last week) or there must be some Global 2000 budgets in it.  The industry will not thrive without some of the latter, and that's what I'm betting on.


There's vast utility in unstructured and machine-generated data (somehow tweets count in this category), and there are many reasons, starting with monetary, why, once a company finds some use for it, it will choose a big data tool like Hadoop rather than a relational database management system to store the data.  Yes, and even live with the tradeoffs: lack of ACID compliance, lack of transactions, lack of SQL (although this is eroding by the day), lack of schema sharing, the need to user-assemble (although this is also eroding) and node failures being a way of life.  Indeed, the "secret sauce" of Hadoop is the distribution of data and recovery from node failure - RAID-like, but less costly.
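
The replication idea behind that "secret sauce" can be sketched in a few lines of Python.  This is a toy illustration, not Hadoop's actual block placement policy; the block size, replication factor and node names are all assumptions for the example:

```python
# Toy sketch of HDFS-style block replication: each block of a file is
# copied to several distinct nodes, so losing one node loses no data --
# RAID-like redundancy, done in software across cheap commodity nodes.
from itertools import cycle

def place_blocks(file_size, block_size=64, replication=3,
                 nodes=("n1", "n2", "n3", "n4")):
    """Assign each block of the file to `replication` nodes, round-robin."""
    num_blocks = -(-file_size // block_size)  # ceiling division
    ring = cycle(nodes)
    return {b: [next(ring) for _ in range(replication)]
            for b in range(num_blocks)}

def survives_node_loss(placement, lost_node):
    """True if every block still has at least one replica elsewhere."""
    return all(any(n != lost_node for n in replicas)
               for replicas in placement.values())

layout = place_blocks(200)            # a 200 "MB" file -> 4 blocks of 64
print(survives_node_loss(layout, "n1"))   # True: no block lived only on n1
```

The point of the sketch is the economics noted above: redundancy comes from copies spread across ordinary nodes rather than specialized RAID hardware.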


It's better to play with this "hippy-developed" Hadoop (as one skeptic called it) than to ignore it at this point, and that's what Microsoft has done.  Microsoft is working to deploy Hadoop on Windows and on cloud-based Azure.  This could really work in Microsoft's big data land grab.  It's a hedge against going too hard-core into the open-source world: comfortable Windows combined with Hadoop.  For the many, many fence-sitters out there, this is good timing.  Many want to trace movements of physical objects, web clicks and other Web 2.0 activity.  They want to do this without sacrificing the enterprise standards they are used to with products like Windows and its management toolset.


Development will occur with the Yahoo-legacy Hortonworks, and the code will go into Apache.  This announcement follows the release of the Sqoop-compatible Microsoft SQL Server Connector for Apache Hadoop.


A simultaneous Microsoft big data announcement was an ODBC driver for Hive.  Hive was developed by Facebook to make data access to Hadoop easier than MapReduce.  Every day, Facebook runs 150,000 jobs; only 500 are MapReduce, the rest are HiveQL.  HiveQL is SQL-like and, in some ways, actually exceeds SQL capabilities with complex types like associative arrays, lists and structs.  And soon, it will have an ODBC driver from Microsoft.
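
To see why those complex types matter, here's a rough Python analogy.  The table, column names and values are made up for illustration: a single Hive row can hold a map, an array and a struct, so a query that digs into a map key or a struct field has a direct counterpart:

```python
# Rough Python analogy for Hive's complex column types:
# MAP -> dict, ARRAY -> list, STRUCT -> namedtuple.
from collections import namedtuple

Geo = namedtuple("Geo", ["city", "country"])   # plays the role of a STRUCT

# One "row" per visitor; columns: id, clicks (MAP), pages (ARRAY), geo (STRUCT)
visits = [
    {"id": 1, "clicks": {"home": 5, "cart": 2}, "pages": ["/", "/cart"],
     "geo": Geo("Dallas", "US")},
    {"id": 2, "clicks": {"home": 1}, "pages": ["/"],
     "geo": Geo("London", "UK")},
]

# HiveQL-style intent: SELECT id, clicks['home'] FROM visits
#                      WHERE geo.country = 'US'
result = [(row["id"], row["clicks"]["home"])
          for row in visits if row["geo"].country == "US"]
print(result)   # [(1, 5)]
```

In standard SQL of the era, the clicks map would have to be normalized out into its own table and joined back; a complex type keeps it in the row.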


The announcements didn't coincide with any showable development, so apparently there's still some work involved before we have substantially more information, but this is definitely worth watching as a milestone in the big data journey.

Posted October 15, 2011 2:12 PM

There are several new features coming in SQL Server 2008 R2, which is due to be generally available in May.  As someone who implements on multiple platforms, I'm constantly comparing platform capabilities and consequently have a hard time getting too excited about releases, but R2 is giving me some reason to be excited.

Like many of SQL Server's R2 releases, it builds on its corresponding R1.  SQL Server 2008 has been commercially available since mid-2008.  From a data warehousing perspective, SQL Server has long been a choice for data marts, regardless of the data warehouse platform.  It has also been a data warehouse platform for the midmarket and occasionally for a Fortune 100 company.  Some of the scalability concerns that have limited SQL Server's reach may be answered in R2 with added support for 256 processors, quite a move up from 64.  Also improving scalability is the acquired DATAllegro technology, rebranded Parallel Data Warehouse, Microsoft's data warehouse appliance.

What I am most interested in and excited about is Master Data Services, Microsoft's entry into the crowded master data management market.  Microsoft is the first to make MDM part of the DBMS package, clearly targeting the Microsoft shops that are having master data issues.  I've had the CTP for some time (this is a reworked form of the product from Stratature, which Microsoft acquired in 2007) and have been able to exercise it and even see it implemented at a client.  While its capabilities are limited compared to its more mature competition, it has a lot of potential, and Microsoft will be putting strong development effort behind Master Data Services.  Even today, it can certainly, with some effort, play the technical role in an MDM program.

And then there's Gemini, or PowerPivot for Excel.  Gemini is the new generation of Analysis Services.  Those chagrined by the notion that Microsoft Excel is the #1 business intelligence tool (it is) will have a lot more to be concerned about now.  Bye-bye, ProClarity interface; we must embrace Excel.  I'm increasingly crafting procedures for IT's role with Excel, and this need will only increase as Gemini creates an even more fluid spreadsheet environment in shops.  Data security strategies are imperative.

Gemini will also be a collaborative environment: everyone in a workgroup can cooperate in managing Excel.  Excel is certainly already mission-critical, and R2 will create even more possibilities to depend on it.  The Gemini server is SharePoint, another element gaining traction in the SQL Server family.

I am also completely impressed with the addition of the columnar, in-memory storage option for this downloaded data, called VertiPaq.  Some data just belongs in columnar storage, though most shops would not want to put all their data in this structure.  It's great for data that will have many column-oriented functions applied to it, as well as for queries that don't return many columns.  It's also great for compressing data.  I have a seminar on columnar storage and expect to be helping more clients effectively tier their data to this format now that SQL Server will offer the option.
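
The compression point is easy to demonstrate: a column store keeps each column's values together, so a low-cardinality column collapses dramatically under run-length encoding.  A minimal Python sketch, with toy data and a deliberately simple encoder (not VertiPaq's actual algorithm):

```python
# Why columnar layout compresses well: one column's values sit
# contiguously, so runs of repeated values collapse to (value, count)
# pairs -- a big win for low-cardinality columns like "region".

def run_length_encode(values):
    """Collapse runs of equal adjacent values into [value, count] pairs."""
    runs = []
    for v in values:
        if runs and runs[-1][0] == v:
            runs[-1][1] += 1
        else:
            runs.append([v, 1])
    return runs

# A "region" column as a column store would hold it: all values together.
region_column = ["US"] * 1000 + ["UK"] * 500 + ["DE"] * 500

encoded = run_length_encode(region_column)
print(encoded)                    # [['US', 1000], ['UK', 500], ['DE', 500]]
print(len(region_column), "values ->", len(encoded), "runs")
```

In a row store, those region values would be interleaved with every other column of each row, and no such runs would exist to compress.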

So, are you ready?  Are you on SQL Server 2008?  Are you ready to upgrade to Office 2010 to take advantage of Gemini?  SQL Server 2008 R2 will be one of the big BI stories of 2010.

Posted December 28, 2009 8:55 AM

There is a lot of business intelligence news coming out of the Microsoft camp.  First, sooner than expected, they have released the SQL Server Fast Track Data Warehouse, the development of the DATAllegro acquisition and Microsoft's entry into the appliance marketplace.

Some time ago, they also announced Kilimanjaro, the next version of SQL Server, and Gemini, the next version of SQL Server Analysis Services.  I didn't rush out and blog about them, though, because the release is not expected until 2010.  Still, it's exciting and worth thinking about now.  And who knows, maybe they will emerge sooner than expected.

I thought SQL Server 2005 was BI-focused.  Now Microsoft says Kilimanjaro will really be BI-focused, and they could be right.  With the addition of optional column-oriented storage capabilities and in-memory storage, they are addressing two very obvious ways the market has recently addressed performance.  Both options were built from scratch by Microsoft.  I look forward to seeing how the optimizer determines when to go columnar versus not.  To date, solutions have addressed it with disparate systems and obviously suboptimal performance for some of the queries within each system.
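
One plausible intuition for that columnar-versus-not decision, sketched in Python.  This is purely hypothetical, not how any real optimizer works: a row store reads whole rows, a column store reads only the referenced columns, so a crude cost model can compare the bytes each layout would scan.  The column names and widths are invented for the example:

```python
# Hypothetical cost sketch: estimate bytes scanned under each layout and
# pick the cheaper one. Real optimizers weigh far more (indexes,
# predicates, compression), but column count is the core intuition.

def scan_cost(num_rows, col_widths, referenced_cols, layout):
    if layout == "row":
        # A row store reads every column of every row it scans.
        return num_rows * sum(col_widths.values())
    # A column store reads only the columns the query references.
    return num_rows * sum(col_widths[c] for c in referenced_cols)

widths = {"id": 8, "name": 40, "region": 8, "amount": 8}  # bytes per value

# A typical analytic query touching 2 of 4 columns over a million rows:
row_cost = scan_cost(1_000_000, widths, ["region", "amount"], "row")
col_cost = scan_cost(1_000_000, widths, ["region", "amount"], "columnar")
print(col_cost < row_cost)   # columnar wins when few columns are referenced
```

Flip the query to `SELECT *` and the advantage disappears, which is exactly why the choice belongs in the optimizer rather than in a blanket storage policy.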

I'm a fan of SSAS as long as the MOLAP option is not overused, but that's another story.  Stronger connectivity is promised between Gemini and Excel, and that may really get the casual user's attention.  Gemini will be publishing data to SharePoint Server as well; SharePoint is becoming a centerpiece of Microsoft BI activity.  With the publishing options, Gemini will have the Microsoft enterprise covered.

What will be the next game changer?


Posted April 28, 2009 4:47 PM

Link to article. I guess my entry from May 24 was timelier than I thought. This really legitimizes master data management as a force. Stratature had not made great strides, but it does have a nice complement of the requisite MDM functionality that I discuss in my full-day MDM course, including the hub, publish/subscribe, and modeling facilitation. Although Stratature hasn't made short lists in my recent MDM strategies with Fortune clients, I expect its presence to increase now.

I expect more midsize companies to now get involved in MDM and, over time, for the price points for enterprise MDM software to settle.

Posted June 9, 2007 11:19 AM

