Blog: Barry Devlin

Barry Devlin

As one of the founders of data warehousing back in the mid-1980s, a question I increasingly ask myself over 25 years later is: Are our prior architectural and design decisions still relevant in the light of today's business needs and technological advances? I'll pose this and related questions in this blog as I see industry announcements and changes in the way businesses make decisions. I'd love to hear your answers and, indeed, questions in the same vein.

About the author

Dr. Barry Devlin is among the foremost authorities in the world on business insight and data warehousing. He was responsible for the definition of IBM's data warehouse architecture in the mid '80s and authored the first paper on the topic in the IBM Systems Journal in 1988. He is a widely respected consultant and lecturer on this and related topics, and author of the comprehensive book Data Warehouse: From Architecture to Implementation.

Barry's interest today extends to the wider field of a fully integrated business, covering informational, operational and collaborative environments and, in particular, how to present the end user with an holistic experience of the business through IT. These aims, and a growing conviction that the original data warehouse architecture struggles to meet modern business needs for near real-time business intelligence (BI) and support for big data, drove Barry's latest book, Business unIntelligence: Insight and Innovation Beyond Analytics, now available in print and eBook editions.

Barry has worked in the IT industry for more than 30 years, mainly as a Distinguished Engineer for IBM in Dublin, Ireland. He is now founder and principal of 9sight Consulting, specializing in the human, organizational and IT implications and design of deep business insight solutions.

Editor's Note: Find more articles and resources in Barry's BeyeNETWORK Expert Channel and blog. Be sure to visit today!

January 2011 Archives

JackBe's CTO, John Crupi, and VP of Marketing, Chris Warner, created a definitional firestorm among BI experts at the BBBT (Boulder BI Brain Trust) on Friday.  A long-time Ajax and Enterprise Mashup Platform provider, JackBe has more recently begun to describe itself as a Real-Time Intelligence provider.  That was always going to be a phrase that generated excited discussion.

First, what is Real-Time?  In the case of JackBe, it relates more to immediate access, both in definition and use, to existing sources of data than to the more conventional BI use of the term, which focuses on how current that data is.  As a mashup, JackBe's Presto product doesn't actually care how current the data it accesses is.  The source could be an operational application, a data warehouse, a spreadsheet, a web resource or whatever--clearly a wide range of data latency (and reliability, too!).  So, the important idea that BI practitioners have to get their heads around is that Real-Time in this context is about giving business users fast and nimble access to existing data sources.

As a mashup, and coming from the Web 2.0 world, the second thing we need to recognize is that JackBe allows end users themselves to combine information in innovative ways into dashboard-like constructs.  In function, mashups are similar to more traditional portals, but they use the more flexible tooling and constructs of Web 2.0, enabling users to do more for themselves without calling on IT.  JackBe thus enables self-service BI, provided that accessible information resources already exist.  Presto provides the means to find those sources, the ability to link them together and the robust security required to ensure users can access only what they are allowed to.
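To make that concrete, here is a minimal sketch in Python of what a mashup does at its core.  This is purely illustrative and is emphatically not Presto's API; every source, table and field name below is invented.  The point is simply that two sources that already exist, each with its own latency, are joined into a dashboard-like result without any new ETL:

```python
# Hypothetical illustration only -- not JackBe Presto's API.
# A "mashup" joins two already-existing sources, regardless of how current
# each one is: here, a warehouse table and a web feed.
import sqlite3, json

def warehouse_sales(conn):
    # Existing informational source: as current as the last ETL run
    return dict(conn.execute("SELECT region, revenue FROM sales_summary"))

def web_feed_targets(raw_json):
    # Existing web resource: could be minutes or months old
    return {row["region"]: row["target"] for row in json.loads(raw_json)}

def mashup(sales, targets):
    # Dashboard-like construct assembled by the end user, not by IT
    return [{"region": r,
             "revenue": sales[r],
             "target": targets.get(r),
             "attainment": sales[r] / targets[r] if targets.get(r) else None}
            for r in sales]

# Stand-in data so the sketch runs end to end
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales_summary (region TEXT, revenue REAL)")
conn.executemany("INSERT INTO sales_summary VALUES (?, ?)",
                 [("EMEA", 120.0), ("APAC", 95.0)])
feed = '[{"region": "EMEA", "target": 100}, {"region": "APAC", "target": 110}]'
print(mashup(warehouse_sales(conn), web_feed_targets(feed)))
```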

As with all approaches to self-service business intelligence, the most challenging aspect for BI practitioners is to understand and even regulate the validity of the results produced.  Does it make logical business sense to combine sources A and B?  Does source A contain data from the same timeframe as source C?  Does profit margin in source B have the exact same definition as that in source D?  And so on.  These are the types of questions that lead to the creation of a data warehouse; resolving them leads to the typical delays in delivering data warehouses.

The bottom line is that JackBe provides a powerful tool to drive rapid innovation by end users in business intelligence.  Given the speed of change in today's business, that has to be a good thing.  But, as is the case when any powerful tool is put in the hands of a user, there is a danger of severely burnt fingers!  The BI department must therefore put processes in place to help users know if the information they want is really suitable for mashing up.  In practice, this will require either the creation of extensive metadata to describe the available information sources or the provision of a robust help desk facility to explain to users what's possible and even what went wrong.
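As a rough sketch of what such metadata might look like (all source names, fields and definitions below are hypothetical), even a simple catalogue entry per source would let a tool warn a user before an ill-advised combination of the kind listed above:

```python
# Hypothetical source-description metadata -- names and fields are illustrative.
SOURCES = {
    "A": {"as_of": "2011-01-28", "measures": {"profit_margin": "gross, pre-tax"}},
    "C": {"as_of": "2010-12-31", "measures": {"profit_margin": "net, post-tax"}},
}

def check_mashup(left, right):
    """Warn if two sources differ in timeframe or in measure definitions."""
    warnings = []
    a, b = SOURCES[left], SOURCES[right]
    if a["as_of"] != b["as_of"]:
        warnings.append(f"Timeframes differ: {a['as_of']} vs {b['as_of']}")
    for measure, definition in a["measures"].items():
        other = b["measures"].get(measure)
        if other and other != definition:
            warnings.append(f"'{measure}' defined differently: {definition} vs {other}")
    return warnings

print(check_mashup("A", "C"))
```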


Posted January 30, 2011 10:16 AM
Just putting NoSQL in the title of a post on B-eye-Network might raise a few hackles ;-) but the growing popularity of the term and vaguely related phrases like big data, Hadoop and distributed file systems brings the topic regularly to the fore these days.  I'm often asked by BI practitioners: what is NoSQL and what can we do about it?

Broadly speaking, NoSQL is a rather loose term that groups together databases (and sometimes non-databases!) that do not use the relational model as a foundation.  And, like anything that is defined by what it's not, NoSQL ends up being on one hand a broad church and on the other a focal point for those who strongly resist the opposite view.  NoSQL is thus claimed by some not to be anti-SQL, and said to stand for "not only SQL".  But, let's avoid this particular minefield and focus on the broad church of data stores that gather together under the NoSQL banner.

David Bessemer, CTO of Composite Software, gives a nice list in his "Data Virtualization and NoSQL Data Stores" article: (1) Tabular/Columnar Data Stores, (2) Document Stores, (3) Graph Databases, (4) Key/Value Stores, (5) Object and Multi-value Databases and (6) Miscellaneous Sources.  He then discusses how (1) and (4), together with XML document stores--a subset of (2)--can be integrated using virtualization tools such as Composite.
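For readers who haven't met these stores, a toy sketch may help: the same customer information looks quite different in a columnar store, a document store and a key/value store than it does as a relational row.  The snippet below is purely illustrative and maps loosely onto Bessemer's categories:

```python
# Illustrative only: one customer, shown in three non-relational models.

# (1) Tabular/columnar: values grouped by column rather than by row
columnar = {"cust_id": [101, 102], "name": ["Ada", "Bob"], "country": ["IE", "US"]}

# (2) Document store: one self-describing document per entity; schema can vary
document = {"_id": 101, "name": "Ada", "orders": [{"sku": "X1", "qty": 2}]}

# (4) Key/value store: an opaque value looked up by key; no querying inside it
key_value = {"cust:101": '{"name": "Ada", "country": "IE"}'}

# Relational row, for contrast: fixed columns, joined to other tables by keys
relational_row = (101, "Ada", "IE")
```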

There is another school of thought that favors importing such data (particularly textual data) into the data warehouse environment, either by first extracting keywords from it via text analytics or by converting it to XML or other "relational-friendly" formats.  In my view, there is a significant problem with this approach: in many cases the volumes of data are so large and their rates of change so fast that traditional ETL and data warehouse infrastructures will struggle to cope.  The virtualization approach thus makes more sense as the mass access mechanism for such big data.
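A rough sketch of the difference, with every name invented (this is not Composite's product, just the idea): in a virtualized query, the big data store is touched only at access time, and nothing is copied into the warehouse:

```python
# Hypothetical federation sketch -- illustrative names only.
import sqlite3, json

def query_virtualized(conn, kv_store, customer_ids):
    """Join warehouse rows to a key/value store at query time; nothing is copied."""
    placeholders = ",".join("?" * len(customer_ids))
    rows = conn.execute(
        f"SELECT cust_id, lifetime_value FROM customer_dim WHERE cust_id IN ({placeholders})",
        customer_ids)
    for cust_id, ltv in rows:
        clickstream = json.loads(kv_store.get(f"cust:{cust_id}", "{}"))  # fetched on demand
        yield {"cust_id": cust_id, "lifetime_value": ltv,
               "recent_pages": clickstream.get("pages", [])}

# Stand-in stores so the sketch runs
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customer_dim (cust_id INTEGER, lifetime_value REAL)")
conn.execute("INSERT INTO customer_dim VALUES (101, 4200.0)")
kv = {"cust:101": '{"pages": ["/pricing", "/contact"]}'}
print(list(query_virtualized(conn, kv, [101])))
```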

But, it's also noticeable that Bessemer covers only 2.5 of his 6 classes in detail, saying that they are "particularly suited for the data virtualization platform".  So, what about the others?

In my May 2010 white paper, "Beyond the Data Warehouse: A Unified Information Store for Data and Content", sponsored by Attivio, I addressed this topic in some depth.  BI professionals need to look at what is emerging in the world of content management, where soft information (also known by the oxymoronic term "unstructured information") is increasingly being analyzed and categorized by content management tools to extract business meaning and value on the fly, without needing to be brought into the data warehouse.  What's needed now is for content management and BI tool vendors to join these two environments through a common set of metadata that bridges the two.

This is also a form of virtualization, but the magic resides in the joint metadata.  Depending on your history and preferences, you can see this as an extension of the data warehouse to include soft information or an expansion of content management into relational data.  But, whatever you choose, the key point is to avoid duplicating NoSQL data stores into the data warehouse.
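As a sketch of what such joint metadata might contain (all names below are hypothetical), a single business term could be mapped both to warehouse structures and to content-management categories, so that either environment can resolve it:

```python
# Hypothetical "joint metadata" entry: one business term mapped both to
# warehouse structures and to content-management categories.
COMMON_METADATA = {
    "Customer Complaint": {
        "warehouse": {"table": "complaint_fact", "keys": ["cust_id", "complaint_dt"]},
        "content":   {"repository": "email_archive",
                      "categories": ["complaint", "refund request"],
                      "entities":   ["customer name", "product"]},
    }
}

def sources_for(term):
    """Resolve a business term to both hard (warehouse) and soft (content) sources."""
    entry = COMMON_METADATA[term]
    return entry["warehouse"], entry["content"]

print(sources_for("Customer Complaint"))
```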

I'll be speaking at O'Reilly Media's big data-oriented Strata Conference - Making Data Work - 1-3 February in Santa Clara, California: a keynote, The Heat Death of the Data Warehouse, on Thursday, 3 February at 9:25am, and an Exec Summit session, The Data-driven Business and Other Lessons from History, on Tuesday, 1 February at 9:45am.  O'Reilly Media are offering a 25% discount code for readers, followers, and friends on conference registration:  str11fsd.


Posted January 21, 2011 9:03 AM
David Champagne has recently written a fascinating article for TDWI entitled "The Rise of Data Science" where he reminds us of the scientific method--question, hypothesize, experiment, analyze data, draw conclusions regarding your hypothesis and communicate your results; and an important loop back to rethink the hypothesis if the results don't fully validate it.  I remember it well from my Ph.D. days way back in the late '70s (in physical chemistry, in case you ask).

Champagne goes on to observe the situation today: "...thanks largely to all of the newer tools and techniques available for handling ever-larger sets of data, we often start with the data, build models around the data, run the models, and see what happens. This is less like science and more like panning for gold."  Well said!  But, I'd go a little further.  It can sometimes be more like diving on a sunken Spanish galleon but discovering a dozen giant Moray eels rather than twelve gold doubloons!

A key point, in my view, is that science and business have rather different goals and visions.  Science, in theory at least, seeks to discover real and eternal truths.  Of course, pride and politics can intrude and cause data to be selectively gathered, suppressed or misinterpreted.  The aim in business is basically to improve the bottom line.  Nothing wrong with that, of course, but organizational and personal aims and concerns often strongly drive the perceived best path to that goal.

Another, and more important, difference is in the data.  Scientific experiments are designed to gather particular data elements of relevance to the hypothesis.  Business data, especially big data, is a mishmash of data gathered for a variety of reasons, without a common purpose or design in mind.  The result is that it is often incomplete and inconsistent, and thus open to wildly varying analyses and interpretations.  Soft sciences like psychology and sociology may face a similar set of problems, as their experimental data is usually much more intermingled and inconsistent than that from physics experiments, leading to more widely diverging interpretations.

Now, please hear me clearly: there's a lot of great and innovative analysis going on in this field--see Mike Loukides' excellent summary, "What is data science?", from six months ago for some examples.  But it is very much like diving on Spanish wrecks; given the right people with enthusiasm, relevant skills and subject matter expertise, you can find treasure.  But with the wrong people, you can suffer some terrible injuries.  The question is: how do you move from experimental science to production?  How do you safely scale from the test tube to the 1,000 litre reactor vessel?

Note that this is not a question of scaling data size, processing power or storage.  It is all about scaling the people and process aspects of innovative analysis into regular production.  This is where a data warehouse comes in.  Of course, only a small proportion of the data can (or should) go through the warehouse.  But the value of the warehouse is in the fact that the data it contains has already been reconciled and integrated to an accepted level of consistency and historical accuracy for the organization.  This requires a subtle rethinking of the role of the data warehouse: it is no longer seen as the sole source of all reporting or the single version of the truth.  Rather, it becomes the central set of core business information that ties together disparate analyses and insights from across a much larger information resource.  It can help discover gold rather than Moray eels.
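To illustrate the idea in miniature (an assumption-laden sketch, not anyone's product; all names are invented), the value of the warehouse here is simply that an exploratory result can be tied back to data that has already been reconciled, and anything that cannot be tied back is flagged rather than trusted:

```python
# Illustrative sketch: an exploratory result from a big data analysis is only
# trusted once it ties back to the warehouse's conformed customer dimension.
import sqlite3

def reconcile(conn, experimental_scores):
    """Keep scores that match a conformed customer; flag the rest for review."""
    known = {row[0] for row in conn.execute("SELECT cust_id FROM customer_dim")}
    tied = {c: s for c, s in experimental_scores.items() if c in known}
    orphans = sorted(set(experimental_scores) - known)
    return tied, orphans

# Stand-in warehouse and experimental output so the sketch runs
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customer_dim (cust_id INTEGER PRIMARY KEY)")
conn.executemany("INSERT INTO customer_dim VALUES (?)", [(101,), (102,)])
scores = {101: 0.83, 999: 0.91}   # 999 never passed through reconciliation
print(reconcile(conn, scores))    # ({101: 0.83}, [999])
```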

This scaling and move to production remains a difficult and often expensive problem to solve.  In this, I have to disagree with Michael Driscoll, quoted in Champagne's article, who says: "Data management is, increasingly, a solved problem".  I wish it were so...  But the tools and techniques, skills and expertise that organizations have built around their data warehouses, and the investments they've made in the technology, are key to addressing the deep data management issues that remain.  It may not be as sexy as statistics has seemingly become, but, in my view, being able to solve the data management problems will be a better indicator of long-term success in this field.

I'll be covering this at O'Reilly Media's first Strata Conference, "Making Data Work", 1-3 February in Santa Clara, California: a keynote, "The Heat Death of the Data Warehouse", on Thursday, 3 February at 9:25am, and an Exec Summit session, "The Data-driven Business and Other Lessons from History", on Tuesday, 1 February at 9:45am.  O'Reilly Media are offering a 25% discount code for readers, followers, and friends on conference registration:  str11fsd.

Posted January 12, 2011 8:32 AM