
Blog: Barry Devlin

Barry Devlin

As one of the founders of data warehousing back in the mid-1980s, a question I increasingly ask myself over 25 years later is: Are our prior architectural and design decisions still relevant in the light of today's business needs and technological advances? I'll pose this and related questions in this blog as I see industry announcements and changes in way businesses make decisions. I'd love to hear your answers and, indeed, questions in the same vein.

About the author

Dr. Barry Devlin is among the foremost authorities in the world on business insight and data warehousing. He was responsible for the definition of IBM's data warehouse architecture in the mid '80s and authored the first paper on the topic in the IBM Systems Journal in 1988. He is a widely respected consultant and lecturer on this and related topics, and author of the comprehensive book Data Warehouse: From Architecture to Implementation.

Barry's interest today covers the wider field of a fully integrated business, covering informational, operational and collaborative environments and, in particular, how to present the end user with an holistic experience of the business through IT. These aims, and a growing conviction that the original data warehouse architecture struggles to meet modern business needs for near real-time business intelligence (BI) and support for big data, drove Barry’s latest book, Business unIntelligence: Insight and Innovation Beyond Analytics, now available in print and eBook editions.

Barry has worked in the IT industry for more than 30 years, mainly as a Distinguished Engineer for IBM in Dublin, Ireland. He is now founder and principal of 9sight Consulting, specializing in the human, organizational and IT implications and design of deep business insight solutions.

Editor's Note: Find more articles and resources in Barry's BeyeNETWORK Expert Channel and blog. Be sure to visit today!

Recently in Data warehouse Category

Having keynoted, spoken at and attended the inaugural O'Reilly Media Strata Conference in Santa Clara over the past few days, I wanted to share a few observations.

With over 1,200 attendees, the buzz was palpable.  This was one of the most energized data conferences I've attended in at least a decade.  Whether it was the tag line "Making Data Work", the fact it was an O'Reilly event or something else, it was clear that the conference captured the interest of the data community. 

The topics on the agenda were strongly oriented towards data science, "big data" and the softer (aka less structured) types of information.  This led me to expect that I'd be an almost lone voice for traditional data warehousing topics and thoughts.  I was wrong.  While there certainly were lots of experts in data analysis and Hadoop, there was no shortage of both speakers and attendees who did understand many of the principles of cleansing, consistency and control at the heart of data warehousing.

Given the agenda, I was also expecting to be somewhat of the "elder lemon" of the conference.  Unfortunately (in my personal view), in this I was correct.  It looked to me as though the median age was well south of thirty, although I've done no data analysis to validate that impression.  Another, more concerning, observation was that the gender balance of the audience was about the same as I've seen at data warehouse conferences since the mid-90s: overwhelmingly male.  It seems that data remains largely a masculine topic.

The sponsor / vendor exhibitor list was also very interesting.  Only a few of the companies that turn up at traditional data warehouse conferences were present.  Of course, the new "big data" vendors were there in force, as were a few information providers.  Of the relational database vendors, only ParAccel and AsterData were represented.  Jaspersoft and Pentaho represented the open source BI vendors, while Pervasive and Tableau rounded out the vendors I recognized from the BI space.

As a final point, I note that the next Strata Conference has already been announced: 19-21 September in New York.  Wish I could be there!

Posted February 3, 2011 7:02 PM
Just putting NoSQL in the title of a post on B-eye-Network might raise a few hackles ;-) but the growing popularity of the term and vaguely related phrases like big data, Hadoop and distributed file systems brings the topic regularly to the fore these days.  I'm often asked by BI practitioners: what is NoSQL and what can we do about it?

Broadly speaking, NoSQL is a rather loose term that groups together databases (and sometimes non-databases!) that do not use the relational model as a foundation.  And, like anything defined by what it's not, NoSQL ends up being on one hand a broad church and on the other a rallying point for those who strongly resist the relational orthodoxy.  NoSQL is thus claimed by some not to be anti-SQL at all, and said to stand for "not only SQL".  But let's avoid this particular minefield and focus on the broad church of data stores that gather under the NoSQL banner.

David Besemer, CTO of Composite Software, gives a nice list in his "Data Virtualization and NoSQL Data Stores" article: (1) Tabular/Columnar Data Stores, (2) Document Stores, (3) Graph Databases, (4) Key/Value Stores, (5) Object and Multi-value Databases and (6) Miscellaneous Sources.  He then discusses how (1) and (4), together with XML document stores--a subset of (2)--can be integrated using virtualization tools such as Composite.
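To make the distinctions between a few of these classes concrete, here is a minimal, product-neutral sketch in Python of how the same customer record might look in a key/value store, a document store and a columnar layout. The record contents and key scheme are my own invented illustration, not drawn from the article.

```python
# Illustrative sketch only: the same customer record in three store styles.

# (4) Key/value store: the value is opaque to the database -- retrieval is
# by key alone, and the store knows nothing of the value's structure.
kv_store = {"customer:42": '{"name": "Acme Ltd", "country": "IE"}'}

# (2) Document store: the structure is visible, so individual fields
# (even nested ones) can be queried and indexed.
doc_store = {
    "customer:42": {
        "name": "Acme Ltd",
        "country": "IE",
        "orders": [{"id": 1, "total": 99.0}],
    }
}

# (1) Tabular/columnar layout: values are grouped per column, which favors
# scanning a single attribute across many rows -- the typical analytic query.
columnar = {"name": ["Acme Ltd"], "country": ["IE"]}

def country_of(customer_id: int) -> str:
    """Query the document store by a field inside the value --
    something the opaque key/value layout cannot do directly."""
    return doc_store[f"customer:{customer_id}"]["country"]
```

The point of the contrast is access pattern, not syntax: the key/value store trades queryability for speed and simplicity, which is precisely why virtualization tools must treat each class differently.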

There is another school of thought that favors importing such data (particularly textual data) into the data warehouse environment, either by first extracting keywords from it via text analytics or by converting it to XML or other "relational-friendly" formats.  In my view, there is a significant problem with this approach: in many cases the volumes of data are so large, and their rate of change so fast, that traditional ETL and data warehouse infrastructures will struggle to cope.  The virtualization approach thus makes more sense as the mass access mechanism for such big data.

But it's also noticeable that Besemer covers only 2.5 of his 6 classes in detail, saying that they are "particularly suited for the data virtualization platform".  So, what about the others?

In my May 2010 white paper, "Beyond the Data Warehouse: A Unified Information Store for Data and Content", sponsored by Attivio, I addressed this topic in some depth.  BI professionals need to look to what is emerging in the world of content management to see that soft information (also known by the oxymoronic term "unstructured information") is increasingly being analyzed and categorized by content management tools to extract business meaning and value on the fly, without needing to be brought into the data warehouse.  What's needed now is for content management and BI tool vendors to create the mechanism to join these two environments and create a common set of metadata that bridges the two.

This is also a form of virtualization, but the magic resides in the joint metadata.  Depending on your history and preferences, you can see this as an extension of the data warehouse to include soft information or an expansion of content management into relational data.  But, whatever you choose, the key point is to avoid duplicating NoSQL data stores into the data warehouse.
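What might that joint metadata look like in practice? Here is a deliberately simple, hypothetical sketch: a shared registry that maps each business term to both its warehouse column and its content-store field, so a virtualization layer could resolve one logical query against both environments. All of the names (tables, indexes, fields) are invented for illustration.

```python
# Hypothetical joint metadata registry: each business term maps to its
# physical location in BOTH the relational warehouse and the content store.
JOINT_METADATA = {
    "customer_name": {
        "warehouse": {"table": "dim_customer", "column": "cust_name"},
        "content":   {"index": "crm_documents", "field": "customer"},
    },
    "complaint_text": {
        # Soft information may exist only on the content side.
        "warehouse": None,
        "content":   {"index": "crm_documents", "field": "body"},
    },
}

def resolve(term: str):
    """Return both physical locations for a business term, so a
    virtualization layer can federate access rather than copy the data."""
    entry = JOINT_METADATA[term]
    return entry["warehouse"], entry["content"]
```

The design choice worth noting is that the registry, not the data, is the integration point: nothing is duplicated into the warehouse, which is exactly the outcome argued for above.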

I'll be speaking at O'Reilly Media's big data oriented Strata Conference, "Making Data Work", 1-3 February in Santa Clara, California: a keynote, "The Heat Death of the Data Warehouse", on Thursday, 3 February at 9:25am, and an Exec Summit session, "The Data-driven Business and Other Lessons from History", on Tuesday, 1 February at 9:45am.  O'Reilly Media are offering readers, followers, and friends a 25% discount on conference registration with the code str11fsd.

Posted January 21, 2011 9:03 AM
As mentioned in my last post, ParAccel had a really interesting announcement coming out this week.  I was talking about their partnering with Fusion-io to incorporate SSD technology into their ParAccel Analytic Appliance for even faster query performance.  ParAccel are not alone in their use of SSD; Teradata's 4555 and Oracle's Exadata 2 also include the technology.  For me, though, it's not even about faster query results for users.  It's about the implications for the entire Data Warehouse architecture.

Over the past couple of years, we've seen dramatic improvements in database performance due to hardware and software advances such as in-memory databases, columnar storage, massively parallel processing, compression, and so on as described in my white paper from April 2009.  SSD, in one sense, is just another piece of accelerating technology.  However, add it to the existing list, and you begin to see the possibility of revisiting old assumptions about what is possible within a single database.  Here are a few ideas to play with:

  • Do you still need that Data Mart?  With so much faster performance, maybe the queries you now run in the Mart could run directly on the EDW.  Reducing data duplication has enormous benefits, partly in storage volumes, but principally in reducing the maintenance of ETL feeds to the Marts.
  • Where to do operational BI?  It was once considered necessary to install a separate ODS to support closer to real-time access to consolidated atomic data.  But with such a fast database, couldn't you just trickle-feed the data and do it all in the Warehouse itself?  One less copy of data and one less set of ETL can't be all bad!
  • ETL or ELT?  Extract, transform and load has been the traditional way of loading a Warehouse, with a special engine to do the transform step.  Well, with a faster and more powerful database engine, you have the option to try extract, load and transform instead, letting the Warehouse database do the transform work.
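The ELT pattern in that last bullet is easy to sketch. The following minimal example uses Python's built-in SQLite purely as a stand-in for a warehouse engine (the table names and data are invented): the raw extract is loaded untransformed into a staging table, and the transform runs as SQL inside the database.

```python
# Minimal ELT sketch; SQLite stands in for the warehouse engine.
import sqlite3

conn = sqlite3.connect(":memory:")

# Extract + Load: the raw extract lands as-is in a staging table,
# with no external transform engine in the path.
conn.execute("CREATE TABLE stg_sales (region TEXT, amount TEXT)")
conn.executemany(
    "INSERT INTO stg_sales VALUES (?, ?)",
    [("north", "100.0"), ("north", "50.5"), ("south", "20.0")],
)

# Transform: type casting and aggregation run inside the database itself,
# exploiting the engine's power rather than a separate ETL server.
conn.execute("""
    CREATE TABLE fact_sales AS
    SELECT region, SUM(CAST(amount AS REAL)) AS total
    FROM stg_sales
    GROUP BY region
""")

rows = dict(conn.execute("SELECT region, total FROM fact_sales"))
```

In a real warehouse the same shape holds; only the scale and the SQL dialect change.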
Although ParAccel, like all the smaller vendors, is focusing more on selling to the "bigger, faster, more complex analytics applications" market at present, I'm pretty sure that the work ParAccel is doing under the covers on query optimization, workload management, and loading and updating features will pave the way for a sea change in how we do data warehousing in the next few years.

Posted February 17, 2010 2:34 PM
Kim Stanick and Rick Glick of ParAccel were at the Boulder BI Brain Trust (BBBT) last Friday. They have an exciting announcement coming soon, and much of what was discussed was under NDA, so I can't give details here. But about half-way through their presentation, they threw up a slide saying simply "EDW: What's not working?"

Well, that's a negative question! And, anyway, I believe most of us have some good ideas about what's not working--from project scoping and delivery issues to problems of complexity of feeds and bottlenecks in timely data availability. So, let me re-frame the question: "Where next for EDW?"

I wrote a BI Thought Leader for ParAccel last April called "Analytic Databases in the World of the Data Warehouse" that began to address that question, and as the world of BI has evolved since, I want to revisit that question briefly. Back then, I wrote:

"Specialized analytic databases using [advanced] technologies ... now offer significantly improved performance for typical BI applications, enable previously impossible analyses and often lower cost implementation. They also have the potential to challenge the current physically layered Data Warehouse architecture. This paper ... argues that analytical databases may enable a move to a simpler non-layered architecture with significant benefits in terms of lower costs of implementation, maintenance, and use."

In brief, it's our old friend, the paradigm shift, enabled by a dramatic shift in the price-performance characteristics of data warehouses driven by a new generation of technology. The possibility I saw then was a return to a physically simpler, more singular implementation of the EDW. And indeed that may still be a first step.

My thinking has evolved further since then, and I'm really beginning to envisage a much larger problem space that we need to address--how to integrate the entire enterprise information set, operational, informational and collaborative. I call that Business Integrated Insight (BI2), described in a more recent white paper. The discussion at BBBT last Friday led by a number of physical database technology experts gave rise to some new insights into how BI2 could be physically instantiated.

Virtualization at every level of the environment--servers, applications, data and particularly databases--linked closely with advances in the technology (as opposed to the hype) of cloud computing is widely discussed today as a way to reduce IT capital and operating costs, consolidate infrastructure, simplify resource management and so on. However, database virtualization offers new possibilities in the physical implementation of an enterprise data architecture that spans all data types and processing needs. Chief among these are flexibility of implementation, adaptability, mediated access to and use of data across multiple database types, significant reductions in data duplication and the gradual construction of overarching models that describe the entire business information resource. I'm sure there's much more to be said on this topic, but I'd love to hear the views of some experts in the field.

Posted February 9, 2010 6:52 AM


