

Blog: Barry Devlin


As one of the founders of data warehousing back in the mid-1980s, a question I increasingly ask myself over 25 years later is: Are our prior architectural and design decisions still relevant in the light of today's business needs and technological advances? I'll pose this and related questions in this blog as I see industry announcements and changes in the way businesses make decisions. I'd love to hear your answers and, indeed, questions in the same vein.

About the author

Dr. Barry Devlin is among the foremost authorities in the world on business insight and data warehousing. He was responsible for the definition of IBM's data warehouse architecture in the mid '80s and authored the first paper on the topic in the IBM Systems Journal in 1988. He is a widely respected consultant and lecturer on this and related topics, and author of the comprehensive book Data Warehouse: From Architecture to Implementation.

Barry's interest today covers the wider field of a fully integrated business, spanning informational, operational and collaborative environments and, in particular, how to present the end user with a holistic experience of the business through IT. These aims, and a growing conviction that the original data warehouse architecture struggles to meet modern business needs for near real-time business intelligence (BI) and support for big data, drove Barry's latest book, Business unIntelligence: Insight and Innovation Beyond Analytics, now available in print and eBook editions.

Barry has worked in the IT industry for more than 30 years, mainly as a Distinguished Engineer for IBM in Dublin, Ireland. He is now founder and principal of 9sight Consulting, specializing in the human, organizational and IT implications and design of deep business insight solutions.

Editor's Note: Find more articles and resources in Barry's BeyeNETWORK Expert Channel and blog. Be sure to visit today!

Just putting NoSQL in the title of a post on B-eye-Network might raise a few hackles ;-) but the growing popularity of the term and vaguely related phrases like big data, Hadoop and distributed file systems brings the topic regularly to the fore these days. I'm often asked by BI practitioners: What is NoSQL, and what can we do about it?

Broadly speaking, NoSQL is a rather loose term that groups together databases (and sometimes non-databases!) that do not use the relational model as a foundation. And, like anything defined by what it is not, NoSQL ends up being on the one hand a broad church and on the other a rallying point for those who strongly oppose what it is defined against. NoSQL is thus claimed by some not to be anti-SQL at all, and said to stand for "not only SQL". But let's avoid this particular minefield and focus on the broad church of data stores that gather under the NoSQL banner.

David Bessemer, CTO of Composite Software, gives a nice list in his "Data Virtualization and NoSQL Data Stores" article: (1) Tabular/Columnar Data Stores, (2) Document Stores, (3) Graph Databases, (4) Key/Value Stores, (5) Object and Multi-value Databases and (6) Miscellaneous Sources.  He then discusses how (1) and (4), together with XML document stores--a subset of (2)--can be integrated using virtualization tools such as Composite.
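To make that taxonomy a little more concrete, here is a minimal sketch, in plain Python, of how the same customer information might be shaped in a key/value store, a document store and a graph database. The structures and names are purely illustrative; no particular product's API is implied.

```python
# Illustrative only: plain Python structures standing in for three NoSQL models.
# No specific product is implied.

# (4) Key/value store: an opaque value retrieved by a single key.
kv_store = {
    "customer:1042": '{"name": "Acme Ltd", "country": "IE"}',
}

# (2) Document store: the value is a queryable, schema-light document.
doc_store = {
    "customers": [
        {"_id": 1042, "name": "Acme Ltd", "country": "IE",
         "orders": [{"order_id": 9001, "total": 250.00}]},
    ]
}

# (3) Graph database: entities as nodes, relationships as first-class edges.
graph = {
    "nodes": {1042: {"label": "Customer", "name": "Acme Ltd"},
              9001: {"label": "Order", "total": 250.00}},
    "edges": [(1042, "PLACED", 9001)],
}

# A key/value lookup is a single get; the document and graph models support
# richer questions (order totals per customer, relationships between entities).
print(kv_store["customer:1042"])
print([o["total"] for c in doc_store["customers"] for o in c["orders"]])
print([e for e in graph["edges"] if e[0] == 1042])
```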

There is another school of thought that favors importing such data (particularly textual data) into the data warehouse environment, either by first extracting keywords from it via text analytics or by converting it to XML or other "relational-friendly" formats. In my view, there is a significant problem with this approach: in many cases the volumes of data are so large, and their rate of change so fast, that traditional ETL and data warehouse infrastructures will struggle to cope. The virtualization approach thus makes more sense as the mass access mechanism for such big data.
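For readers who find "virtualization" abstract, here is a minimal sketch of the underlying idea: the external data stays where it is and is joined to warehouse data at query time, rather than being copied in by ETL. The example uses Python's built-in sqlite3 as a stand-in warehouse and a plain dict as a stand-in NoSQL source; a real data virtualization platform would add query optimization, pushdown and security on top of the same principle.

```python
import sqlite3

# Stand-in "data warehouse": a small relational table of customers.
warehouse = sqlite3.connect(":memory:")
warehouse.execute("CREATE TABLE customer (id INTEGER PRIMARY KEY, name TEXT)")
warehouse.executemany("INSERT INTO customer VALUES (?, ?)",
                      [(1042, "Acme Ltd"), (1043, "Globex")])

# Stand-in NoSQL key/value source: clickstream counts that are never copied
# into the warehouse; they stay where they are and are read at query time.
clickstream = {1042: 31875, 1043: 904}

def virtual_join():
    """Join warehouse rows to the external source on the fly (no ETL copy)."""
    rows = warehouse.execute("SELECT id, name FROM customer")
    return [(cid, name, clickstream.get(cid, 0)) for cid, name in rows]

print(virtual_join())
# [(1042, 'Acme Ltd', 31875), (1043, 'Globex', 904)]
```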

But, it's also noticeable that Bessemer covers only 2.5 of his 6 classes in detail, saying that they are "particularly suited for the data virtualization platform".  So, what about the others?

In my May 2010 white paper, "Beyond the Data Warehouse: A Unified Information Store for Data and Content", sponsored by Attivio, I addressed this topic in some depth. BI professionals need to look at what is emerging in the world of content management: soft information (also known by the oxymoronic term "unstructured information") is increasingly being analyzed and categorized on the fly by content management tools to extract business meaning and value, without ever being brought into the data warehouse. What's needed now is for content management and BI tool vendors to build the mechanisms that join these two environments and a common set of metadata that bridges the two.

This is also a form of virtualization, but the magic resides in the joint metadata. Depending on your history and preferences, you can see this as an extension of the data warehouse to include soft information or as an expansion of content management into relational data. But, whichever view you take, the key point is the same: avoid duplicating NoSQL data stores into the data warehouse.
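As a purely hypothetical sketch of what such joint metadata might look like, consider a single business term mapped to both its warehouse location and its content-repository location, so that a tool on either side can resolve the term without the underlying data ever being duplicated. All names below are invented for illustration.

```python
# A minimal sketch of "joint metadata": one business term mapped to both the
# relational warehouse and the content store. All names are hypothetical.

joint_metadata = {
    "customer complaint": {
        "warehouse": {"table": "fact_complaint", "key": "customer_id"},
        "content":   {"repository": "email_archive", "tag": "complaint"},
    },
}

def resolve(term):
    """Return where a business term lives in each environment."""
    entry = joint_metadata[term]
    return (f"SQL: SELECT * FROM {entry['warehouse']['table']}",
            f"CMS: search repository={entry['content']['repository']} "
            f"tag={entry['content']['tag']}")

for location in resolve("customer complaint"):
    print(location)
```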

I'll be speaking at O'Reilly Media's big data-oriented Strata Conference, Making Data Work, 1-3 February in Santa Clara, California: a keynote, The Heat Death of the Data Warehouse, on Thursday, 3 February at 9:25am, and an Exec Summit session, The Data-driven Business and Other Lessons from History, on Tuesday, 1 February at 9:45am. O'Reilly Media are offering readers, followers, and friends a 25% discount on conference registration with the code str11fsd.


Posted January 21, 2011 9:03 AM
