
Blog: Barry Devlin

Barry Devlin

As one of the founders of data warehousing back in the mid-1980s, a question I increasingly ask myself over 25 years later is: Are our prior architectural and design decisions still relevant in the light of today's business needs and technological advances? I'll pose this and related questions in this blog as I see industry announcements and changes in the way businesses make decisions. I'd love to hear your answers and, indeed, questions in the same vein.

About the author

Dr. Barry Devlin is among the foremost authorities in the world on business insight and data warehousing. He was responsible for the definition of IBM's data warehouse architecture in the mid '80s and authored the first paper on the topic in the IBM Systems Journal in 1988. He is a widely respected consultant and lecturer on this and related topics, and author of the comprehensive book Data Warehouse: From Architecture to Implementation.

Barry's interest today spans the wider field of a fully integrated business, covering informational, operational and collaborative environments and, in particular, how to present the end user with a holistic experience of the business through IT. These aims, and a growing conviction that the original data warehouse architecture struggles to meet modern business needs for near real-time business intelligence (BI) and support for big data, drove Barry's latest book, Business unIntelligence: Insight and Innovation Beyond Analytics, now available in print and eBook editions.

Barry has worked in the IT industry for more than 30 years, mainly as a Distinguished Engineer for IBM in Dublin, Ireland. He is now founder and principal of 9sight Consulting, specializing in the human, organizational and IT implications and design of deep business insight solutions.

Editor's Note: Find more articles and resources in Barry's BeyeNETWORK Expert Channel and blog. Be sure to visit today!

July 2010 Archives

I was speaking to Susan Davis and Bob Zurek of Infobright the other day, and one statement that caught my attention was that they try to go to the actual data as little as possible.  An interesting objective for a product that's positioned as a "high performance database for analytic applications and data marts", don't you think?

It sounds somewhat counter-intuitive until you realize that in a world of exploding data volumes that need to be analyzed, you have only two choices if you want to maintain a reasonable response time for users: (1) throw lots of hardware at the problem--parallel processing, faster storage, and more--or (2) be a lot cleverer in what you access and when.  The first approach is pretty common and based on recent developments, quite successful.  And as we move into solid-state disks (SSD) and in-memory databases, we'll see even more gains.  But, let's play with the second option a bit.

How can we minimize access (disk I/O) to the actual data?  We can say immediately that the minimum number of times we have to touch the actual data is once!  In the case of a data warehouse or mart, that is when we load it.  In a traditional row-based RDBMS, that's also when we build any indexes we need to speed access for particular queries or further processes.  With column-based databases, we often hear that indexes are no longer needed, or are much reduced--cutting database size, load time and ongoing maintenance costs.  And it's certainly true that columnar databases improve query response time.  And yet we might ask (and this applies to row-based databases as well): is there anything else we could do on that single, mandatory pass over all the data that would help reduce later data access during analysis?

Infobright's solution is the Knowledge Grid, a set of metadata based on Rough Set theory, generated at load-time and used to limit the range of actual data a query has to retrieve in order to find the values that match the query conditions.  For each block of 64K items (a Data Pack) on disk, a set of metadata such as maximum and minimum values, sum, count, etc. is calculated for numerical items at load-time.  At query run-time, these statistics tell the database engine that some data packs are irrelevant, because no item in them can meet the query conditions.  Other data packs contain only data that meets the query conditions, and if the statistics themselves contain the result the query needs, the data here need not be accessed either.  The remaining data packs contain some data that matches the query and will have to be accessed.  Given the right statistics, the amount of disk I/O can be significantly reduced.  Infobright also creates metadata for character items at load-time and for joins at query-time.
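To make the three-way classification concrete, here is a minimal sketch of the idea, not Infobright's actual implementation: the pack size, names and query are all illustrative. The point is that min/max statistics computed on the single load-time pass let a query skip irrelevant packs entirely and answer fully-relevant packs from the statistics alone, reading rows only from the "suspect" packs.

```python
from dataclasses import dataclass

@dataclass
class DataPack:
    rows: list      # the actual data (on disk in a real system)
    min_val: int    # statistics computed once, at load time
    max_val: int

def load(rows, pack_size=4):
    """Touch the data exactly once: split into packs and record min/max."""
    packs = []
    for i in range(0, len(rows), pack_size):
        chunk = rows[i:i + pack_size]
        packs.append(DataPack(chunk, min(chunk), max(chunk)))
    return packs

def count_less_than(packs, threshold):
    """Count values < threshold, reading a pack's rows only when we must."""
    total, packs_read = 0, 0
    for p in packs:
        if p.min_val >= threshold:      # irrelevant: no row can match
            continue
        if p.max_val < threshold:       # fully relevant: stats answer it
            total += len(p.rows)
            continue
        packs_read += 1                 # suspect: must read the actual rows
        total += sum(1 for v in p.rows if v < threshold)
    return total, packs_read

packs = load([1, 2, 3, 4, 10, 11, 12, 13, 5, 6, 20, 21])
print(count_less_than(packs, 10))   # → (6, 1): only one pack's rows were read
```

Of the three packs, one is skipped outright, one is answered from its statistics, and only one is actually read from "disk".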

Generalizing from the above, we can begin to imagine other possibilities.  What if you didn't load the actual data into the database, but just left it where it was and crawled through it to create metadata of a similar nature, allowing irrelevant data for a particular query to be eliminated en masse?  That sounds a bit like the indexing approach used by search engines and extended by Attivio and others to cover relational databases as well.  Of course, the problem with indexes and similar metadata is that they tend to grow in volume too, until they reach a significant percentage of the actual data size; then we're back to square one.
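The thought experiment can be sketched in a few lines, under invented assumptions: the data stays where it is (a CSV here), and a single crawl builds block-level min/max metadata that later queries use for pruning. The file layout, block size and query are all hypothetical.

```python
import csv
import io

# Stand-in for an external file we never load into a database.
raw = io.StringIO("id,amount\n1,50\n2,900\n3,20\n4,700\n")

def crawl(fileobj, block=2):
    """One pass over the external data, recording min/max per block of rows."""
    reader = csv.DictReader(fileobj)
    meta, chunk = [], []
    for row in reader:
        chunk.append(int(row["amount"]))
        if len(chunk) == block:
            meta.append((min(chunk), max(chunk)))
            chunk = []
    if chunk:
        meta.append((min(chunk), max(chunk)))
    return meta

meta = crawl(raw)
# A query for amount > 800 can now skip any block whose max is <= 800,
# without touching the underlying file again for those blocks.
candidates = [i for i, (lo, hi) in enumerate(meta) if hi > 800]
print(meta, candidates)   # → [(50, 900), (20, 700)] [0]
```

Note the catch the paragraph above raises: `meta` here is tiny, but as data and statistics grow richer, this metadata can itself approach the size of the data it summarizes.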

My mathematical skills are far too rusty (if they were ever bright and shiny enough in the first place) to know whether Rough Set theory has anything to say about that issue, or how it could be applied beyond the way that Infobright have implemented it, but it does seem like an interesting area for exploration as data volumes continue to explode.  Any bright PhDs out there who'd like to give it a try?

Posted July 29, 2010 2:03 PM
Synchronicity is a wonderful thing! I get yet another follower notice from Twitter today, and for the first time in ages I am curious enough to check the profile. It turns out that @LaurelEarhart is marketing director for the Smart Content Conference, among other things, including Biz Dev Maven! And there, I read "Perfect storm: #Google acquired #Metaweb" announced on July 16. Having just done a webinar with Attivio yesterday on the topic "Beyond the Data Warehouse: A Unified Information Store for Data and Content" my interest was piqued. Let me tell you why.

I suspect that very few data warehouse vendors or developers have paid much attention to Metaweb or its acquisition. As far as I can tell, it hasn't turned up on the data warehouse or BI analyst blogs either. Perhaps the reason is that Metaweb's business is in providing a semantic data storage infrastructure for the web, and Freebase, an "open, shared database of the world's knowledge". For data warehouse geeks, the former is probably a bit off-message, while the latter may sound like Wikipedia, although the mention of a shared database may raise the interest level slightly.

But, if you're thinking about what lies beyond data warehousing (as I am), and wondering how on earth we're ever going to truly integrate relevant content with the data in our warehouses, what Metaweb and now Google are doing should be of some interest. Here's a quote from Jack Menzel, director of product management at Google on his blog:

"Type [barack obama birthday] in the search box and see the answer right at the top of the page. Or search for [events in San Jose] and see a list of specific events and dates. We can offer this kind of experience because we understand facts about real people and real events out in the world. But what about [colleges on the west coast with tuition under $30,000] or [actors over 40 who have won at least one oscar]? These are hard questions, and we've acquired Metaweb because we believe working together we'll be able to provide better answers."

For me, the interesting point here is that the hard questions include conditions that would make sense to even the most inexperienced BI user.  Take either of these two hard questions and you can easily imagine the SQL statements required, provided you defined and populated the right columns in your tables.  The problem is that you need to have predefined those columns and tables in advance of somebody asking the questions.
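A hedged sketch of that point: the "colleges" question really does reduce to trivial SQL, but only because the schema below predefines region and tuition columns. The table, column names and data are all hypothetical; ad hoc questions about attributes nobody modelled in advance are exactly what such a schema cannot answer.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# The "hard" work happened here, at design time, before any question was asked.
conn.execute("CREATE TABLE colleges (name TEXT, region TEXT, tuition INTEGER)")
conn.executemany("INSERT INTO colleges VALUES (?, ?, ?)", [
    ("Alpha College",   "west coast", 25000),
    ("Beta University", "east coast", 28000),
    ("Gamma State",     "west coast", 35000),
])

# "Colleges on the west coast with tuition under $30,000" -- easy, given the schema.
rows = conn.execute(
    "SELECT name FROM colleges"
    " WHERE region = 'west coast' AND tuition < 30000"
).fetchall()
print(rows)   # → [('Alpha College',)]
```

The query is one line; the prerequisite that someone anticipated the question and modelled `region` and `tuition` is the real constraint.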

What Metaweb on the Internet and Attivio on the intranet (and, of course, other vendors in both areas) are trying to do is to bridge the gap between data and content, so that users can ask mixed search and BI queries based on the implicit understanding that exists in the data/content stores of the semantics of the information. And, perhaps more importantly, to be able to do that in a fully ad hoc manner that doesn't require prior definition of a data model and its instantiation in columns and tables of a relational database. If you want to dig deeper, I invite you to take a look at my recent white paper.

In the meantime, my thanks to @LaurelEarhart and the wonder of synchronicity.

Posted July 22, 2010 3:39 AM
Any acquisition in the database market, in this case, the July 6 announcement of EMC's plan to acquire Greenplum, generates a flurry of analyst activity speculating about the financial or technical rationale for the acquisition, winners and losers among other database vendors and the effect of the move on customers' buying patterns.  Personally, I find these opinions very interesting and highly informative.  And I invite you to check out, for example, Curt Monash or Merv Adrian to explore these aspects of the acquisition.

However, I'd like to take the opportunity to focus our minds once again on a more fundamental question: how is IT going to manage data quality and reliability in a rapidly expanding data environment, both in terms of data volumes and places to store the data?  I'm currently describing a logical enterprise architecture, Business Integrated Insight (BI2), that focuses on this.

So, for me, what the acquisition emphasizes, like that of Sybase by SAP, is that specialized databases, with their sophisticated features and functions, are rapidly entering the mainstream of database usage.  Their ability to handle large data volumes with vast improvements in query performance has become increasingly valuable in a wide range of industries that want to analyze enormous quantities of very detailed data at relatively low cost.  How to do this?  Vendors of these systems typically have a simple answer: copy all the required data into our machine and away you go!

My concern is that IT ends up with yet another copy of the corporate data, and a very large copy at that, that must be kept current in meaning, structure and content on an ongoing basis.  Any slippage in maintaining one or more of these characteristics leads inevitably to data quality problems and eventually to erroneous decisions.  Such issues typically emerge unexpectedly, in time-constrained or high-risk situations and lead to expensive and highly visible firefighting actions by IT.  Unfortunately, such occurrences are common in BI environments, but typically relate to unmanaged spreadsheets or relatively small data marts.  We have just jumped the problem size up by a couple of orders of magnitude.

So, am I suggesting that you shouldn't be using these specialized databases?  Would I recommend that you stand in front of a speeding freight train?  Clearly not!

There are two ways that these problems will be addressed.  One falls upon customer IT departments, while the other comes back to the database industry and the vendors, whether acquiring or acquired.  These paths will need to be followed in parallel.

IT departments need to define and adopt stringent "data copy minimization" policies.  The purist in me would like to say "elimination" rather than "minimization".  However, that's clearly impossible.  Minimization of data copies, in the real world, requires IT to evaluate the risks of yet another copy of data, the possibility of using an existing set of data for the new requirement and, if a new copy of the data is absolutely needed, whether existing analytic solutions could be migrated to this new copy of data and the existing data copies eliminated.

Meanwhile, it is incumbent upon the database industry to take a step back and look at the broader picture of data management needs in the context of emerging technologies and the explosive growth in data volumes.  The basic question that needs to be asked is: how can the enormous power and speed of these emerging technologies be crafted into solutions that equally support divergent data use cases on a single copy of data?  And, if not on a single copy, how can multiple copies of data be kept completely consistent, invisibly, within the database technology itself?

Tough questions, perhaps, but ones that the acquirers in this industry, with their deep pockets, need to invest in.  As the database market re-converges, the vendors that solve this architectural conundrum will become the market leaders in highly consistent, pervasive and minimally duplicated data that enables IT to focus on solving real business needs rather than managing data quality.  Wouldn't that be wonderful?

Posted July 7, 2010 1:18 PM