

Blog: Rick van der Lans

Rick van der Lans

Welcome to my blog where I will talk about a variety of topics related to data warehousing, business intelligence, application integration, and database technology. Currently my special interests include data virtualization, NoSQL technology, and service-oriented architectures. If there are any topics you'd like me to address, send them to me at rick@r20.nl.

About the author

Rick is an independent consultant, speaker, and author, specializing in data warehousing, business intelligence, database technology, and data virtualization. He is managing director and founder of R20/Consultancy. An internationally acclaimed speaker who has lectured worldwide for the last 25 years, he is the chairman of the successful annual European Enterprise Data and Business Intelligence Conference held in London. In the summer of 2012 he published his new book Data Virtualization for Business Intelligence Systems. He is also the author of one of the most successful books on SQL, the popular Introduction to SQL, which is available in English, Chinese, Dutch, Italian, and German. He has written many white papers for various software vendors. Rick can be contacted by sending an email to rick@r20.nl.

Editor's Note: Rick's blog and more articles can be accessed through his BeyeNETWORK Expert Channel.

In a series of blogs I am answering some of the questions a large US-based health care organization had on data virtualization. I decided to share some of these questions with you, because they represent issues that many organizations struggle with.

For those not familiar with the concept or name, a Rube Goldberg machine, contraption, invention, device, or apparatus is a deliberately over-engineered or overdone machine that performs a very simple task in a very complex fashion, usually including a chain reaction--this is the definition used by Wikipedia. Examples of very simple tasks are pouring beer into a glass, opening a door, or switching on a TV. Rube Goldberg was an American cartoonist who was best known for drawing such weird machines. Here you can find a photo showing an example of a Rube Goldberg machine. On YouTube you can find numerous films showing such machines at work. One you have to see is the one developed by a young kid called Audri.

Why a discussion on these weird and often useless machines? Quite recently, I received an email from my customer with the following remark: "I have been thinking about the complexities of physical integration of our systems [by physical integration he means using classic ETL and duplicating data in several databases]. I wish I had Audri's YouTube video when I was trying to urge my team to consider data virtualization. After seeing that little boy, it feels as if developing and testing a system based on physical integration is like trying to develop and test a Rube Goldberg machine."

He continues: "Knowing what I know now [after studying data virtualization more seriously], if data virtualization is comparable to using a remote control to turn a TV on and off, then physical integration is comparable to developing and using a Rube Goldberg machine to turn a TV on and off."

Evidently, this is an exaggeration, because there are still various situations in which you have to, or want to, deploy a form of physical integration. But there is some truth in it. Software and hardware are currently so much more powerful than they were ten years ago. In fact, there is so much more "power" available that if organizations had to design their current BI systems from scratch, they would probably come up with much simpler architectures, ones in which agility would be a fundamental design factor. Data virtualization would be one of the technologies that would clearly help to develop more agile BI systems.

So, years ago, when we designed the architectures of our BI systems, they were not considered Rube Goldberg machines. They were necessities; there was no other choice. But today there is. So, if we look at these architectures today, they do resemble Rube Goldberg machines. They are like machines in which the data values roll down a spiral, are thrown from one database to another, are changed occasionally, fall off some track sporadically, and sometimes even float a few inches, before they arrive in a report.

I have decided to use Audri's film from now on to explain what the differences are between developing BI systems with and without data virtualization.

Note: If you have questions related to data virtualization, send them in. I am more than happy to answer them.
 

Posted November 19, 2012 11:15 AM
Permalink | 1 Comment |
In this series of blogs I'm answering questions on data virtualization coming from a particular organization. Even though the question "Is data virtualization immature technology?" did not come from them, I decided to include it in this series, because it's being asked so frequently.

Probably because the term data virtualization is relatively young, some think it's also young technology and thus immature and only usable in small environments. This is a misunderstanding. Therefore, I decided to give a sense of the long and rich history of data virtualization, making use of extracts from my book "Data Virtualization for Business Intelligence Systems."

Fundamental to data virtualization are the concepts of abstraction and encapsulation. These concepts have their origin in the early 1970s. Exactly forty years ago, in 1972, David L. Parnas wrote a groundbreaking article, "On the Criteria to be Used in Decomposing Systems into Modules." In this, to me, legendary article, Parnas explains how important it is that applications are developed in such a way that they become independent of the structure of the stored data. The big advantage of this concept is that if one changes, the other may not have to change. In addition, by hiding technical details, applications become easier to maintain, or, to use more modern terms, they become more agile. Parnas called this information hiding and worded it as follows: "... the purpose of [information] hiding is to make inaccessible certain details that should not affect other parts of a system."

Information hiding eventually became the basis for popular concepts such as object orientation, component-based development, and, more recently, service-oriented architectures. All three have encapsulation and abstraction as their foundation. No one questions the value of those three concepts anymore.

But Parnas was not the only one who saw the value of encapsulation and abstraction. The most influential paper in the history of data management, "A Relational Model of Data for Large Shared Data Banks", written by E.F. Codd, founder of the relational model, starts as follows: "Future users of large data banks must be protected from having to know how the data is organized [...] application programs should remain unaffected when the internal representation of data is changed and even when some aspects of the external representation are changed." He used different terms than Parnas, but he clearly had the same vision. This illustrates one fundamental principle of computer science which is at the root of data virtualization: applications should be independent of the complexities of accessing data.

At the end of the 1970s, a concept called the three-schema approach was introduced and thoroughly researched. G.M. (Sjir) Nijssen was one of the driving forces behind most of the research in this area. Nijssen wrote numerous articles on this topic. Again, abstraction and encapsulation were the central principles.

A personal note: In 1979 I started my IT career working for Nijssen in Brussels, Belgium, when all that research was going on. I didn't realize it at that time, but obviously data virtualization has played a role in my career from day one.

Technologically, data virtualization owes a lot to distributed database technology and federation servers. Most of the initial research on data federation was done by IBM in its famous System R* project, which started way back in 1979. Another project that contributed heavily to distributed queries was the Ingres project, which eventually led to the open source SQL database server called Ingres, now distributed by Actian Corporation. System R* was a follow-up project to IBM's System R project--the birthplace of SQL. Eventually, System R led to the development of most of IBM's commercial SQL database servers, including SQL/DS and DB2.

The forerunners of data virtualization servers cannot be omitted here. The first products that deserve the label data federation server are IBM's DataJoiner and Information Builders' EDA/SQL (Enterprise Data Access). The former was introduced in the early 1990s and the latter in 1991. Neither was a database server; both were primarily products for integrating data from different data sources. Besides being able to access most SQL database servers, they were the first products to provide a SQL interface to non-SQL databases. Both products have matured and have undergone several name changes. After being part of IBM DB2 Information Integrator, DataJoiner is currently called IBM InfoSphere Federation Server, and EDA/SQL has been renamed iWay Data Hub and is part of Information Builders' Enterprise Information Integration Suite.

I could list more technologies, research projects, and products that have been fundamental to the development of data virtualization, but I will stop here. This already impressive list clearly shows the long history and the serious amount of research that has gone into data virtualization and its forerunners. So, maybe the term data virtualization is young, but the technology definitely isn't. Therefore, classifying data virtualization as young and immature would not be accurate.

Note: If you have questions related to data virtualization, send them in. I am more than happy to answer them.


Posted November 12, 2012 7:50 AM
Permalink | No Comments |
Let's change the word big (in big data) to an acronym, so that BIG data stands for Business Intelligence Generated data. The reason for this proposal is that many are struggling with the term big data, myself included. There is a lot of confusion, because there is no generally accepted definition. We all know it's about large quantities of data, high velocity data, and/or a wide variety of data. But still, what is a large quantity? When is velocity high and when is it low? For some, big data is highly structured sensor data (machine-generated data), for others it's textual unstructured data coming from social media, and there are those who say it's semi-structured data stored in, for example, weblogs.

The fact that the word big is a relative term doesn't help either. What is big for a midsize European company can be medium for a large US company. And is it really about the amount of data? Or is it more about what we do with it, for example, analyzing that data (regardless of the quantity)? The V's (Volume, Velocity, Variety, Variation, Visibility, and Value--I've lost count of how many V's there are) are mentioned regularly to describe when something qualifies as big data.

Some have presented definitions, but I haven't seen an acceptable one yet. One author used the following definition: big data is data that is too much for a SQL database. This makes no sense. For example, there are plenty of multi-terabyte systems that everyone would classify as big data systems and that can be handled by SQL products more than satisfactorily.

Lastly, enough data is enough data. The quality of an analytical result doesn't always increase when the amount of data increases. Data quality is often more important than data quantity.

Conclusion: confusion rules when it comes to the concept of big data.

In this blog I look at big data systems from a different angle in the hope that this helps to clarify this muddled concept.

Undeniably, processing large quantities of data is a common characteristic of most big data systems, but there is another one: most of these systems combine characteristics of production systems and of BI systems. In a sense, each big data system is a production system, because it collects and stores new data, and it's also a BI system, because this new data is not collected to support business processes; the primary intention is to use it for some form of analytics, possibly embedded analytics (analytics embedded within production systems), operational analytics, or predictive analytics. By new data I mean data that the organization does not collect and store yet, and in many cases it's also a new type of data. For example, a big data system developed by a retail company may be gathering camera data for tracking customer routes through its stores. Or, a big data system of a large international electronics firm may collect unstructured social media data for sentiment analysis.

Traditionally, new data is entered with and processed by production systems, such as general ledger, cash management, and claim processing systems. These systems are, however, not designed to support analytics, but to support business processes. In fact, when they were designed, the focus was definitely not on analytics, but on supporting data entry. This is why it's sometimes so hard when developing BI systems to extract the right data from those production databases for analytical and reporting purposes--staging areas have to be developed, ETL and replication processes have to be designed, and so on. This is still true today: the designers of new production systems don't think about how the organization can use the data for analytical purposes.

In other words, what makes big data systems special is that they are hybrid systems: they are both production systems and BI systems. In my opinion, this is what makes big data applications special--and, evidently, most of them collect massive amounts of data to support the required forms of analytics.

So, maybe we should redefine the term big data. Let's begin by not associating the word big with a relative quantity anymore, but let's change the word big to an acronym, so that BIG data stands for Business Intelligence Generated data--data generated and stored with the primary purpose of analyzing it. Thus, a big data system is a system that generates, collects, stores, and processes data specifically to support business intelligence. Subsequently, big data is data managed by a big data system.

Hopefully, redefining the term big data makes it more obvious what is meant by this promising category of systems and removes some of the confusion.


Posted October 16, 2012 1:45 PM
Permalink | 2 Comments |
In a series of blogs, I am answering some of the questions a large US-based health care organization had on data virtualization. I decided to share some of these questions with you, because they represent issues that many organizations struggle with.

Their question: "Isn't data virtualization by definition slow, because it's an extra layer of software? And doesn't all the federation, integration, transformation, and cleansing of data that has to take place on-demand, slow down each query?"

This is a question I can't disregard in this series, because performance is an aspect that always worries people when they hear about data virtualization for the first time. In addition, I received a comment on the first blog in this series, which was related to performance.

There is much to say about the performance of data virtualization servers, but because this is a blog, I focus on the key issues.

First of all, some think that the performance of a data virtualization server is by definition poor, because it's accessing source production systems, and not a data warehouse, a data mart, or some other database that is designed and optimized for reporting. It's true that retrieving data from source systems can lead to performance problems. These systems may not have been designed or optimized to run BI queries, or the transaction workload they have to process is so intense that running queries on them as well can cause serious interference. Therefore, in most cases this is not the recommended approach. A better approach is to design a data warehouse and let the data virtualization server access that data warehouse and not the production systems. Data virtualization does not exclude a data warehouse; also see my blog Do We Still Need A Data Warehouse?

Second, because data virtualization evolved from data federation, some think that data virtualization is only worthwhile when data from multiple data sources is retrieved and integrated--only useful when data is federated. Because data federation can be a resource-hungry operation, it can therefore be slow. Evidently, all data virtualization servers do support data federation, and they have various techniques to optimize this federation process. But data virtualization servers are not only useful when data federation is needed. In many systems, data virtualization is used even when each query is a non-federated query. In this case, the strength of data virtualization is encapsulation (hiding all irrelevant technical details of the data stores) and abstraction (showing only relevant data, with the right structure, and on the right aggregation level).
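
To give a concrete feeling for what such a non-federated virtual table looks like, here is a minimal sketch in plain SQL. A virtual table behaves much like a relational view; the table and column names (SRC_ORDERS, CUST_ID, and so on) are hypothetical, and real data virtualization servers define virtual tables with their own tools and dialects.

-- Hypothetical virtual table on a single data source: it renames cryptic
-- source columns, filters out irrelevant rows, and presents the data at the
-- right aggregation level, without copying any data.
CREATE VIEW V_MONTHLY_SALES AS
SELECT CUST_ID                      AS CUSTOMER_ID,
       EXTRACT(YEAR  FROM ORDER_DT) AS ORDER_YEAR,
       EXTRACT(MONTH FROM ORDER_DT) AS ORDER_MONTH,
       SUM(LINE_AMT)                AS SALES_AMOUNT
FROM   SRC_ORDERS
WHERE  ORDER_STATUS <> 'CANCELLED'
GROUP  BY CUST_ID,
          EXTRACT(YEAR  FROM ORDER_DT),
          EXTRACT(MONTH FROM ORDER_DT);

A report simply queries V_MONTHLY_SALES and never sees the technical details of SRC_ORDERS; that is the encapsulation and abstraction at work.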

Third, many optimization techniques are implemented in data virtualization servers to make access to data sources as efficient as possible. For example, join optimization techniques, such as ship joins, query substitution, query pushdown, and query expansion, are all supported. These techniques are very mature and have proven their worth. In fact, research has been going on in this area since the days of IBM's famous System R* and Ingres projects. Both projects started way back in the 1970s. And research continues--new techniques are still being discovered to optimize data access.
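
As an illustration of one of these techniques, query pushdown, consider the virtual table sketched above. The rewrite below is hypothetical and simplified; exactly what a data virtualization server pushes down to a source depends on the product and on the capabilities of that source.

-- Query received by the data virtualization server:
SELECT CUSTOMER_ID, SALES_AMOUNT
FROM   V_MONTHLY_SALES
WHERE  ORDER_YEAR = 2012;

-- With query pushdown, the server does not pull in all the rows of SRC_ORDERS
-- and filter and aggregate them itself; instead, it sends the source database
-- a query roughly like this, so the filtering and aggregation happen there:
SELECT CUST_ID, SUM(LINE_AMT)
FROM   SRC_ORDERS
WHERE  ORDER_STATUS <> 'CANCELLED'
  AND  EXTRACT(YEAR FROM ORDER_DT) = 2012
GROUP  BY CUST_ID;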

Another example of a technique that improves performance is caching. With this technique, the contents of virtual tables (the key building blocks of data virtualization servers) are stored. This means that when a virtual table is accessed, the result is not retrieved from the underlying data sources, but from a cache. The effect is that access to the data sources is not required, data transformation doesn't have to take place, and no data has to be integrated or cleansed. No, the data is ready to go. It's like picking up a pizza from a take-away restaurant where you phoned in your order 30 minutes beforehand.

More and more data virtualization servers offer all kinds of features to store and access caches efficiently. For example, some allow the cache to be stored in the fastest analytical database servers available. It's to be expected that in the near future, data virtualization servers will also support in-memory database servers for storing caches. Undeniably, this will speed up query processing even more, because accessing those caches will involve no I/O.
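
How a cache is defined and refreshed differs per data virtualization server, so the sketch below is only an approximation: it uses a standard materialized view (as found in, for example, PostgreSQL) to mimic a cached virtual table.

-- Approximation of a cached virtual table: the result of the virtual table is
-- stored once, queries read the stored result instead of the data sources,
-- and the cache is refreshed periodically rather than on every query.
CREATE MATERIALIZED VIEW V_MONTHLY_SALES_CACHE AS
SELECT * FROM V_MONTHLY_SALES;

SELECT CUSTOMER_ID, SALES_AMOUNT
FROM   V_MONTHLY_SALES_CACHE
WHERE  ORDER_YEAR = 2012;

REFRESH MATERIALIZED VIEW V_MONTHLY_SALES_CACHE;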

To summarize, some have worries about the performance of data virtualization servers, but quite regularly those worries are based on wrong assumptions, such as the assumption that data virtualization excludes a data warehouse. Data virtualization servers offer enough optimization techniques to process the majority of today's queries fast. However, if you want to run queries that join historical data stored in a data warehouse with data coming from a sentiment analysis executed on textual information straight from Twitter, and join that with production data that still has to be cleansed heavily, then yes, you will have a performance challenge.

Note: For more information on data virtualization, such as query optimization and caching, I refer to my new book "Data Virtualization for Business Intelligence Systems" available from Amazon.


Posted October 15, 2012 1:24 AM
Permalink | No Comments |
In a series of blogs I am answering questions on data virtualization coming from a particular organization. The following question did not come from them, but since I've heard it asked so many times, I decided to include it in this series.

Question: "If we adopt data virtualization, can we throw away the data warehouse, because we can access the data in the production databases straight on, right?"

Wrong! Data virtualization is not some data warehouse killer. In most projects where data virtualization is deployed, you will still need a data warehouse. In many systems, if no data warehouse is developed, it won't be possible to meet the information needs of many reports. Let me give the two key reasons:

  • Most production systems do not contain historical data. They were not designed to keep track of historical data. If a value is changed, the old value is deleted. For reports that need to do trend analysis, those deleted values may be needed. Thus, those values have to be stored somewhere. And this is where the data warehouse comes in: data warehouses are needed to store historical data (see the sketch after this list).
  • Production systems may contain inconsistent data. One system may say that a customer is based in New York, while another system indicates that he is based in Boston. Inconsistencies can't always be resolved using software; sometimes human intervention is required to indicate what the correct value is. The result of that intervention must be stored somewhere, so that it can be reused. Again, that's where a data warehouse comes in.
And there are more reasons why this additional database, the data warehouse, is needed. If the data warehouse did not exist and the data virtualization server were connected directly to the production systems, it would have no way to retrieve the historical data, because that data simply wouldn't exist anymore, and it would not know how to determine which of the inconsistent values is the right one.
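
To make the first reason concrete, here is a minimal sketch with hypothetical table names. In the production system an address change is a destructive update, while the data warehouse records the change as history, for example with valid_from/valid_to columns:

-- Production system: the old city is simply overwritten and is gone afterwards.
UPDATE CUSTOMER
SET    CITY = 'Boston'
WHERE  CUST_ID = 42;

-- Data warehouse: the current row is closed and a new row is added, so both
-- the old and the new value remain available for trend analysis.
UPDATE DWH_CUSTOMER
SET    VALID_TO = CURRENT_DATE
WHERE  CUST_ID = 42 AND VALID_TO IS NULL;

INSERT INTO DWH_CUSTOMER (CUST_ID, CITY, VALID_FROM, VALID_TO)
VALUES (42, 'Boston', CURRENT_DATE, NULL);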

Worth mentioning is that if a data warehouse system consists of a data warehouse and deploys data virtualization, then (physical) data marts may not be needed anymore when they contain data derived from the data warehouse. Such data marts can be simulated by the data virtualization server. We usually refer to them as virtual data marts.
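
As a sketch of what such a virtual data mart could look like, the view below exposes a small, mart-like subset directly on top of (hypothetical) data warehouse tables, instead of physically copying the data into a separate data mart database:

-- Hypothetical virtual data mart defined on data warehouse tables; real data
-- virtualization servers define such virtual tables with their own tools.
CREATE VIEW MART_SALES_BY_REGION AS
SELECT d.REGION,
       t.CALENDAR_YEAR,
       SUM(f.SALES_AMOUNT) AS TOTAL_SALES
FROM   DWH_SALES_FACT   f
JOIN   DWH_CUSTOMER_DIM d ON d.CUSTOMER_KEY = f.CUSTOMER_KEY
JOIN   DWH_DATE_DIM     t ON t.DATE_KEY     = f.DATE_KEY
GROUP  BY d.REGION, t.CALENDAR_YEAR;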

So, introducing data virtualization in a data warehouse system does not imply throwing away the data warehouse. The data warehouse is still needed.

Note: For more information on data virtualization, I refer to my new book "Data Virtualization for Business Intelligence Systems" available from Amazon.


Posted September 27, 2012 12:04 PM
Permalink | No Comments |