
Rick van der Lans

Welcome to my blog where I will talk about a variety of topics related to data warehousing, business intelligence, application integration, and database technology. Currently my special interests include data virtualization, NoSQL technology, and service-oriented architectures. If there are any topics you'd like me to address, send them to me at rick@r20.nl.

About the author

Rick is an independent consultant, speaker and author, specializing in data warehousing, business intelligence, database technology and data virtualization. He is managing director and founder of R20/Consultancy. An internationally acclaimed speaker who has lectured worldwide for the last 25 years, he is the chairman of the successful annual European Enterprise Data and Business Intelligence Conference, held in London. In the summer of 2012 he published his new book Data Virtualization for Business Intelligence Systems. He is also the author of one of the most successful books on SQL, the popular Introduction to SQL, which is available in English, Chinese, Dutch, Italian and German. He has written many white papers for various software vendors. Rick can be contacted by sending an email to rick@r20.nl.

Editor's Note: Rick's blog and more articles can be accessed through his BeyeNETWORK Expert Channel.

October 2012 Archives

Let's change the word big (in big data) to an acronym, so that BIG data stands for Business Intelligence Generated data. The reason for this proposal is that many are struggling with the term big data, myself included. There is a lot of confusion, because there is no generally accepted definition. We all know it's about large quantities of data, high-velocity data, and/or a wide variety of data. But still, what is a large quantity? When is velocity high and when is it low? For some, big data is highly structured sensor data (machine-generated data), for others it's unstructured textual data coming from social media, and there are those who say it's semi-structured data stored in, for example, weblogs.

The fact that the word big denotes a relative quantity doesn't help either. What is big for a midsize European company can be medium for a large US company. And is it really about the amount of data? Or is it more about what we do with it; for example, that we analyze the data (regardless of the quantity)? The V's (Volume, Velocity, Variety, Variation, Visibility, and Value--I've lost count of how many V's there are) are mentioned regularly to describe when something qualifies as big data.

Some have presented definitions, but I haven't seen an acceptable one yet. One author used the following definition: big data is data that is too much for a SQL database. This makes no sense. For example, there are plenty of multi-terabyte systems that everyone would classify as big data systems and that can be handled by SQL products more than satisfactorily.

Lastly, enough data is enough data. The quality of an analytical result doesn't always increase when the amount of data increases. Data quality is often more important than data quantity.

In conclusion: confusion rules when it comes to the concept of big data.

In this blog I look at big data systems from a different angle in the hope that this helps to clarify this muddled concept.

Undeniably, processing large quantities of data is a common characteristic of most big data systems, but there is another one: most of these systems combine characteristics of production systems and of BI systems. In a sense, each big data system is a production system, because it collects and stores new data, and it's also a BI system, because this new data is not collected to support business processes; the primary intention is to use it for some form of analytics, possibly embedded analytics (analytics embedded within production systems), operational analytics, or predictive analytics. By new data I mean data that the organization doesn't collect and store yet, and in many cases it's also a new type of data. For example, a big data system developed by a retail company may be gathering camera data for tracking customer routes through its stores. Or, a big data system of a large international electronics firm may collect unstructured social media data for sentiment analysis.

Traditionally, new data is entered with and processed by production systems, such as general ledger, cash management, and claim processing systems. These systems, however, are designed to support business processes, not analytics. In fact, when they were designed, the focus was definitely not on analytics, but on supporting data entry. This is why it's sometimes so hard, when developing BI systems, to extract the right data from those production databases for analytical and reporting purposes: staging areas have to be developed, ETL and replication processes have to be designed, and so on. This is still true today: the designers of new production systems don't think about how the organization can use the data for analytical purposes.

In other words, what makes big data systems special is that they are hybrid systems: they are both production systems and BI systems. In my opinion, this is what makes big data applications special--and, evidently, most of them collect massive amounts of data to support the required forms of analytics.

So, maybe we should redefine the term big data. Let's begin by no longer associating the word big with a relative quantity, but let's change the word big to an acronym, so that BIG data stands for Business Intelligence Generated data: data generated and stored with the primary purpose of analyzing it. Thus, a big data system is a system that generates, collects, stores, and processes data specifically to support business intelligence. Subsequently, big data is data managed by a big data system.

Hopefully, redefining the term big data makes it more obvious what is meant by this promising category of systems and gets rid of some of the confusion.


Posted October 16, 2012 1:45 PM
In a series of blogs, I am answering some of the questions a large US-based health care organization had on data virtualization. I decided to share some of them with you, because they represent issues that many organizations struggle with.

Their question: "Isn't data virtualization by definition slow, because it's an extra layer of software? And doesn't all the federation, integration, transformation, and cleansing of data that has to take place on-demand, slow down each query?"

This is a question I can't disregard in this series, because performance is an aspect that always worries people when they hear about data virtualization for the first time. In addition, I received a comment on the first blog in this series, which was related to performance.

There is much to say about the performance of data virtualization servers, but because this is a blog, I focus on the key issues.

First of all, some think that the performance of a data virtualization server is by definition poor, because it's accessing source production systems, and not a data warehouse, a data mart, or some other database that is designed and optimized for reporting. It's true that retrieving data from source systems can lead to performance problems. These systems may not have been designed or optimized to run BI queries, or the transaction workload they have to process is so intense that running queries on them as well can cause serious interference. Therefore, in most cases this is not the recommended approach. A better approach is to design a data warehouse and let the data virtualization server access that data warehouse instead of the production systems. Data virtualization does not exclude a data warehouse; also see my blog Do We Still Need A Data Warehouse?

Second, because data virtualization evolved from data federation, some think that data virtualization is only worthwhile when data from multiple data sources is retrieved and integrated--in other words, only useful when data is federated. Because data federation can be a resource-hungry operation, the reasoning goes, it must be slow. Evidently, all data virtualization servers do support data federation, and they have various techniques to optimize this federation process. But data virtualization servers are not only useful when data federation is needed. In many systems, data virtualization is used even when each query is a non-federated query. In this case, the strength of data virtualization is encapsulation (hiding all irrelevant technical details of the data stores) and abstraction (showing only relevant data, with the right structure, and on the right aggregation level).
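To make encapsulation and abstraction a bit more tangible, here is a minimal sketch in Python. It's illustrative only: real data virtualization servers define virtual tables declaratively (usually in SQL or in their design tools), and all names below (VirtualTable, customer_sales, the source columns) are hypothetical.

```python
# Illustrative sketch only: a virtual table hides where and how data is
# stored (encapsulation) and exposes only relevant columns at the right
# aggregation level (abstraction). SQLite stands in for any data store.
import sqlite3

class VirtualTable:
    """Exposes a clean, consumer-friendly view over a technical source table."""

    def __init__(self, connection, definition_sql):
        self._conn = connection          # encapsulated: consumers never see it
        self._definition = definition_sql

    def query(self, where_clause="1=1"):
        # Consumers filter the virtual table, not the source table.
        sql = f"SELECT * FROM ({self._definition}) WHERE {where_clause}"
        return self._conn.execute(sql).fetchall()

# A source table with cryptic, technical column names.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE src_ord (c_id INT, ord_amt REAL, ord_dt TEXT)")
conn.executemany("INSERT INTO src_ord VALUES (?, ?, ?)",
                 [(1, 100.0, "2012-10-01"), (1, 50.0, "2012-10-02"),
                  (2, 75.0, "2012-10-01")])

# The virtual table presents business-friendly names, pre-aggregated
# to the level a report needs.
customer_sales = VirtualTable(conn,
    "SELECT c_id AS customer_id, SUM(ord_amt) AS total_sales "
    "FROM src_ord GROUP BY c_id")
print(customer_sales.query("total_sales > 80"))   # -> [(1, 150.0)]
```

The consumer of customer_sales never sees the connection or the cryptic source columns; it only sees customer-level sales figures with meaningful names.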

Third, many optimization techniques are implemented in data virtualization servers to make access to data sources as efficient as possible. Examples are join optimization techniques, such as ship joins, query substitution, query pushdown, and query expansion. These techniques are all very mature and have proven their worth. In fact, research has been going on in this area since the days of the famous System R* (IBM) and Ingres projects, both of which started way back in the 1970s. And research continues--new techniques are still being discovered to optimize data access.
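To illustrate the idea behind one of these techniques, query pushdown, here is a small Python sketch. The contrast between the two functions is purely didactic; a real data virtualization server performs this rewriting automatically in its optimizer, and the table and function names are made up.

```python
# Didactic sketch of query pushdown: instead of pulling an entire source
# table across the network and filtering locally, the predicate is pushed
# down so the source database does the filtering itself.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (customer_id INT, amount REAL)")
conn.executemany("INSERT INTO orders VALUES (?, ?)",
                 [(i % 100, float(i)) for i in range(10_000)])

def naive(customer_id):
    # Naive federation: ship all 10,000 rows, filter in the middle layer.
    rows = conn.execute("SELECT customer_id, amount FROM orders").fetchall()
    return [r for r in rows if r[0] == customer_id]

def pushdown(customer_id):
    # Pushdown: the predicate travels to the source; only matches
    # (~100 rows here) are shipped back.
    return conn.execute(
        "SELECT customer_id, amount FROM orders WHERE customer_id = ?",
        (customer_id,)).fetchall()

assert naive(42) == pushdown(42)   # same answer, far less data movement
```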

Another example of a technique that improves performance is caching. With this technique, the contents of virtual tables (the key building blocks of data virtualization servers) are stored. This means that when a virtual table is accessed, the result is not retrieved from the underlying data sources, but from a cache. The effect is that access of the data sources is not required, data transformation doesn't have to take place, and no data has to be integrated or cleansed. No, the data is ready to go. It's like picking up a pizza from a take-away restaurant where you phoned in your order 30 minutes in advance.
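Here is a minimal Python sketch of that caching behavior. The class name, the 30-minute refresh interval, and the lambda standing in for a federated query are all hypothetical; actual data virtualization servers offer much richer refresh and invalidation policies.

```python
# Minimal sketch of virtual-table caching: the first access materializes
# the result; later accesses are served from the cache, so no source
# access, transformation, or integration takes place.
import time

class CachedVirtualTable:
    def __init__(self, fetch_from_sources, ttl_seconds=1800):
        self._fetch = fetch_from_sources   # the expensive federated query
        self._ttl = ttl_seconds            # refresh after 30 minutes
        self._cache = None
        self._loaded_at = 0.0

    def query(self):
        if self._cache is None or time.time() - self._loaded_at > self._ttl:
            self._cache = self._fetch()    # hit the sources (slow path)
            self._loaded_at = time.time()
        return self._cache                 # served from cache (fast path)

# Usage: the lambda stands in for federation, transformation, cleansing.
table = CachedVirtualTable(lambda: [("cust-1", 150.0), ("cust-2", 75.0)])
table.query()   # first call: retrieves and stores the result
table.query()   # second call: no source access at all
```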

More and more data virtualization servers offer all kinds of features to store and access caches efficiently. For example, some allow the cache to be stored in the fastest analytical database servers available. It's to be expected that in the near future, data virtualization servers will also support in-memory database servers for storing caches. Undeniably, this will speed up query processing even more, because accessing those caches will involve no disk I/O.

To summarize, some worry about the performance of data virtualization servers, but quite regularly those worries are based on wrong assumptions, such as the assumption that data virtualization excludes a data warehouse. Data virtualization servers offer enough optimization techniques to process the majority of today's queries fast. However, if you want to run queries that join historical data stored in a data warehouse with data coming from a sentiment analysis executed on textual information straight from Twitter, and join that with production data that still has to be cleansed heavily, then yes, you will have a performance challenge.

Note: For more information on data virtualization, such as query optimization and caching, I refer to my new book "Data Virtualization for Business Intelligence Systems", available from Amazon.


Posted October 15, 2012 1:24 AM