
Blog: Barry Devlin

Barry Devlin

As one of the founders of data warehousing back in the mid-1980s, a question I increasingly ask myself over 25 years later is: Are our prior architectural and design decisions still relevant in the light of today's business needs and technological advances? I'll pose this and related questions in this blog as I see industry announcements and changes in way businesses make decisions. I'd love to hear your answers and, indeed, questions in the same vein.

About the author

Dr. Barry Devlin is among the foremost authorities in the world on business insight and data warehousing. He was responsible for the definition of IBM's data warehouse architecture in the mid '80s and authored the first paper on the topic in the IBM Systems Journal in 1988. He is a widely respected consultant and lecturer on this and related topics, and author of the comprehensive book Data Warehouse: From Architecture to Implementation.

Barry's interest today covers the wider field of a fully integrated business, covering informational, operational and collaborative environments and, in particular, how to present the end user with an holistic experience of the business through IT. These aims, and a growing conviction that the original data warehouse architecture struggles to meet modern business needs for near real-time business intelligence (BI) and support for big data, drove Barry’s latest book, Business unIntelligence: Insight and Innovation Beyond Analytics, now available in print and eBook editions.

Barry has worked in the IT industry for more than 30 years, mainly as a Distinguished Engineer for IBM in Dublin, Ireland. He is now founder and principal of 9sight Consulting, specializing in the human, organizational and IT implications and design of deep business insight solutions.

Editor's Note: Find more articles and resources in Barry's BeyeNETWORK Expert Channel and blog. Be sure to visit today!

David Champagne has recently written a fascinating article for TDWI entitled "The Rise of Data Science", in which he reminds us of the scientific method--question, hypothesize, experiment, analyze data, draw conclusions regarding your hypothesis, and communicate your results--with an important loop back to rethink the hypothesis if the results don't fully validate it.  I remember it well from my Ph.D. days way back in the late '70s (in physical chemistry, in case you ask).
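That loop structure can be sketched in a few lines of code.  This is purely illustrative--the function and parameter names are my own invention, not anything from Champagne's article--but it makes the "loop back" step explicit:

```python
def scientific_method(question, propose, experiment, validates, max_iterations=10):
    """Iterate hypothesize -> experiment -> analyze until the
    hypothesis is validated, looping back to rethink it otherwise."""
    hypothesis = propose(question)
    for _ in range(max_iterations):
        data = experiment(hypothesis)          # gather data designed for this hypothesis
        if validates(hypothesis, data):
            return hypothesis                  # communicate results
        # Loop back: rethink the hypothesis in light of the evidence.
        hypothesis = propose(question, prior=hypothesis, evidence=data)
    return None                                # no validated hypothesis found

# Toy usage: "find the integer that passes the experiment" (here, 7).
result = scientific_method(
    "which integer passes?",
    propose=lambda q, prior=None, evidence=None: 0 if prior is None else prior + 1,
    experiment=lambda h: h == 7,
    validates=lambda h, data: data,
    max_iterations=20,
)
# result == 7
```

The point of the sketch is the shape, not the toy content: the data is gathered *for* the hypothesis, and failure feeds back into a revised hypothesis rather than a rummage through whatever data happens to be lying around.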

Champagne goes on to observe the situation today: "...thanks largely to all of the newer tools and techniques available for handling ever-larger sets of data, we often start with the data, build models around the data, run the models, and see what happens. This is less like science and more like panning for gold."  Well said!  But, I'd go a little further.  It can sometimes be more like diving on a sunken Spanish galleon but discovering a dozen giant Moray eels rather than twelve gold doubloons!

A key point, in my view, is that science and business have rather different goals and visions.  Science, in theory at least, seeks to discover real and enduring truths.  Of course, pride and politics can intrude and cause data to be selectively gathered, suppressed or misinterpreted.  The aim in business is basically to improve the bottom line.  Nothing wrong with that, of course, but organizational and personal aims and concerns often strongly drive the perceived best path to that goal.

Another, and more important, difference is in the data.  Scientific experiments are designed to gather particular data elements of relevance to the hypothesis.  Business data, especially big data, is a mishmash of data gathered for a variety of reasons, without a common purpose or design in mind.  The result is that it is often incomplete and inconsistent, and thus open to wildly varying analyses and interpretations.  Soft sciences like psychology and sociology face a similar set of problems, as their experimental data is usually much more intermingled and inconsistent than that from physics experiments, leading to more widely diverging interpretations.

Now, please hear me clearly, there's a lot of great and innovative analysis going on in this field--see Mike Loukides' excellent summary, "What is data science?", from six months ago for some examples of this.  But, it is very much like diving on Spanish wrecks; given the right people with enthusiasm, relevant skills and subject matter expertise, you can find treasure.  But with the wrong people, you can suffer some terrible injuries.  The question is: how do you move from experimental science to production?  How do you safely scale from the test tube to the 1,000 litre reactor vessel?

Note that this is not a question of scaling data size, processing power or storage.  It is all about scaling the people and process aspects of innovative analysis into regular production.  This is where a data warehouse comes in.  Of course, only a small proportion of the data can (or should) go through the warehouse.  But the value of the warehouse is in the fact that the data it contains has already been reconciled and integrated to an accepted level of consistency and historical accuracy for the organization.  This requires a subtle rethinking of the role of the data warehouse: it is no longer seen as the sole source of all reporting or the single version of the truth.  Rather, it becomes the central set of core business information that ties together disparate analyses and insights from across a much larger information resource.  It can help discover gold rather than Moray eels.
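A tiny sketch may help picture that role.  Everything here is invented for illustration--the customer records, keys and scores are made up--but it shows the warehouse acting as the reconciled core that exploratory results are tied back to, and flagging results that cannot yet be trusted organization-wide:

```python
# Reconciled core business information, as it might sit in the warehouse:
# agreed keys, consistent attributes.
warehouse_customers = {
    "C001": {"name": "Acme Ltd",  "segment": "enterprise"},
    "C002": {"name": "Beta GmbH", "segment": "smb"},
}

# Output of an exploratory analysis over raw, unreconciled data: scores
# keyed by whatever identifiers the source happened to use.
experimental_scores = {"C001": 0.92, "C00X": 0.41}   # "C00X" has no match

def tie_back(core, scores):
    """Join exploratory results to the reconciled core; flag orphans."""
    matched = {k: {**core[k], "score": v}
               for k, v in scores.items() if k in core}
    orphans = sorted(k for k in scores if k not in core)  # need reconciliation
    return matched, orphans

matched, orphans = tie_back(warehouse_customers, experimental_scores)
# matched: analysis results grounded in consistent business context
# orphans: ["C00X"] -- gold that might yet turn out to be a Moray eel
```

The join itself is trivial; the value lies in what it joins *to*--a core that has already been reconciled to an accepted level of consistency for the organization.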

This scaling and move to production remains a difficult and often expensive problem to solve.  In this, I have to disagree with Michael Driscoll, quoted in Champagne's article, who says: "Data management is, increasingly, a solved problem".  I wish it were so...  But the tools and techniques, skills and expertise that organizations have built around their data warehouses, and the investments they've made in the technology, are key to addressing the deep data management issues that remain.  It may not be as sexy as statistics has seemingly become, but, in my view, being able to solve the data management problems will be a better indicator of long-term success in this field.

I'll be covering this at O'Reilly Media's first Strata Conference, "Making Data Work", 1-3 February in Santa Clara, California, with a keynote, "The Heat Death of the Data Warehouse", on Thursday, 3 February at 9:25am, and an Exec Summit session, "The Data-driven Business and Other Lessons from History", on Tuesday, 1 February at 9:45am.  O'Reilly Media is offering a 25% discount code for readers, followers, and friends on conference registration:  str11fsd.

Posted January 12, 2011 8:32 AM