

Blog: Rick van der Lans

Rick van der Lans

Welcome to my blog where I will talk about a variety of topics related to data warehousing, business intelligence, application integration, and database technology. Currently my special interests include data virtualization, NoSQL technology, and service-oriented architectures. If there are any topics you'd like me to address, send them to me at rick@r20.nl.

About the author >

Rick is an independent consultant, speaker and author, specializing in data warehousing, business intelligence, database technology and data virtualization. He is managing director and founder of R20/Consultancy. An internationally acclaimed speaker who has lectured worldwide for the last 25 years, he is the chairman of the successful annual European Enterprise Data and Business Intelligence Conference held in London. In the summer of 2012 he published his new book Data Virtualization for Business Intelligence Systems. He is also the author of one of the most successful books on SQL, the popular Introduction to SQL, which is available in English, Chinese, Dutch, Italian and German. He has written many white papers for various software vendors. Rick can be contacted by sending an email to rick@r20.nl.

Editor's Note: Rick's blog and more articles can be accessed through his BeyeNETWORK Expert Channel.

April 2013 Archives

Data replication tools have been available since the 1990s. They have been used primarily to increase the availability and scalability of IT systems and their data. Nowadays, they are also used to replicate data to data warehouses to support operational BI. Besides being able to replicate data efficiently and non-intrusively from source to target systems, a powerful feature has always been that they can operate in heterogeneous environments, in which the sources and targets are different products. However, they have always limited themselves to SQL or SQL-like systems. An intriguing question is how difficult it will be for these data replicators to support the new generation of NoSQL systems. For example, will we be able to use them to replicate data stored in a NoSQL system to a staging area or data warehouse?

Many of the NoSQL systems have built-in data replication features: data is automatically stored multiple times. In fact, developers can set how many replicas have to be created. However, the replication features of NoSQL systems are limited to a homogeneous environment. It's not possible to use these features when, for example, data has to be replicated from a NoSQL system to a classic SQL system.
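
For example, in Apache Cassandra (a column-family store) the number of replicas is simply a property of the keyspace. The following is a minimal sketch using the Cassandra Python driver; the node address and the keyspace name are illustrative assumptions, not part of any particular replication product:

    # Minimal sketch, assuming Apache Cassandra and its Python driver
    # (cassandra-driver); the node address and keyspace name are illustrative.
    from cassandra.cluster import Cluster

    cluster = Cluster(["127.0.0.1"])   # connect to a local Cassandra node
    session = cluster.connect()

    # The developer decides how many replicas are kept: replication_factor = 3
    # means every piece of data is stored on three nodes of the cluster.
    session.execute("""
        CREATE KEYSPACE IF NOT EXISTS staging_example
        WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 3}
    """)

Note that this replication stays within the Cassandra cluster itself; it offers no route for getting the data into, say, a SQL-based staging area.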

Today, most data replication products can't replicate from or to NoSQL systems. However, if they do in the future, what will be important is that they handle the non-relational concepts of NoSQL systems efficiently. The keyword here is efficiently. Most existing data replication tools have been designed and optimized to copy data between SQL systems. In other words, they have been optimized to efficiently process relatively short records with a fixed structure. However, NoSQL records are not always short, nor do they always have a fixed structure. NoSQL systems support a wide range of concepts:

  • Many NoSQL systems, including the key-value stores, the document stores, and the column-family stores, support extremely long records. These records can be orders of magnitude longer than what is common in SQL systems, whereas current data replicators have been optimized to replicate short records.
  • Almost all NoSQL systems support tables in which each record can have a different structure. This is new for data replication products. For example, what will that do to compression algorithms that assume that all records have the same structure?
  • Document stores and column-family stores support hierarchical structures. If that type of data has to be replicated into SQL systems, it has to be flattened somehow; a minimal sketch of such a flattening step follows this list. The challenge is to do that very fast. But can it be done fast enough? Data replicators are usually not strong at transformations, because transformations slow down the replication process too much.
  • Column-family stores support what the relational world used to call repeating groups. As with hierarchical structures, the question is how data replication tools can map them to relational structures efficiently.
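
To make the flattening challenge concrete, here is a minimal sketch in Python (with invented field names) of what a replicator would have to do with a single hierarchical document coming out of a document store: split it into flat, relational-style parent and child rows. The logic itself is trivial; doing it fast enough, for millions of documents inside the replication pipeline, is the hard part.

    # Minimal sketch with invented field names: flatten one hierarchical
    # document (as a document store might hold it) into relational-style rows.
    order_document = {
        "order_id": 1017,
        "customer": {"name": "Jones", "city": "London"},
        "lines": [                      # a repeating group / nested structure
            {"product": "P12", "quantity": 2},
            {"product": "P47", "quantity": 1},
        ],
    }

    def flatten_order(doc):
        """Split one nested document into a parent row plus child rows."""
        parent_row = {
            "order_id": doc["order_id"],
            "customer_name": doc["customer"]["name"],
            "customer_city": doc["customer"]["city"],
        }
        child_rows = [
            {"order_id": doc["order_id"], "line_no": i + 1, **line}
            for i, line in enumerate(doc["lines"])
        ]
        return parent_row, child_rows

    parent, children = flatten_order(order_document)
    print(parent)     # one row for an ORDERS table
    print(children)   # two rows for an ORDER_LINES table
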
There is no question about whether we need data replication technology to replicate between NoSQL and SQL systems. The key question is whether it can do this efficiently. This requires more than adding one extra source to the list of supported products; it requires a substantial redesign of the internals of these products. This is the challenge these vendors will be confronted with in the coming years. Hopefully, they will not claim to support NoSQL while in fact only replicating data from NoSQL systems when that data happens to have a relational form.

Note: For more on this topic, see the whitepaper Empowering Operational Business Intelligence with Data Replication.


Posted April 29, 2013 1:16 AM
Are you interested in speaking at the Data Warehouse & Business Intelligence European Conference in London this coming November? If you are, please respond to the call for speakers.

Previous editions were very successful and attracted more than 200 delegates. Evaluations showed that the attendees were very pleased with the selected speakers, the topics, and the setup of the conference.

The 2013 edition covers all aspects of data warehousing and business intelligence, including trends, design guidelines, product overviews and comparisons, best practices, and new and evolving technologies. As in previous years, the conference is organized together with the highly successful European Data Management and Information Quality Conference.

With this year's call for presentations we are trying to attract proposals for sessions on both traditional and emerging aspects of data warehousing and business intelligence. Delegates have expressed a preference for case studies rather than theoretical or abstract topics. We would particularly like practitioners in the field to respond to this call for papers, and we encourage new speakers to apply. Success stories, i.e., case studies where data warehousing and business intelligence have produced real bottom-line benefits, are very much appreciated.

Example topics for proposals are:

  • Agile BI
  • Big data analytics
  • BI in the cloud
  • Data modelling for data warehouses
  • NoSQL in a data warehouse environment
  • The logical data warehouse
  • Data virtualization and data federation
  • The maturity of analytical database servers
  • Star schema, snowflake and data vault models
  • Selling business intelligence to the business
  • The relationship between master data management and data warehousing
  • Guidelines for using ETL tools
  • Operational BI and real-time analytics
  • BAM (Business Activity Monitoring) and KPI (Key Performance Indicators)
  • BI scorecards
  • Customer analytics and insight
  • Text mining and text analytics
  • Open source BI
  • Corporate Performance Management
I am looking forward to your proposal and hope to see you in London this coming November.

Rick F. van der Lans
Chairman of the Data Warehouse & Business Intelligence European Conference 2013


Posted April 15, 2013 8:38 AM

About two months ago, Pervasive Software asked me to write a whitepaper describing how well their popular PSQL database server supports Codd's twelve rules for relational databases.

For those not familiar with these rules: in 1985, E.F. (Ted) Codd, the inventor of the relational model, defined a set of twelve rules for determining how well a database product supports the relational model. These rules make it possible to answer the question of whether a particular product is a relational database server. They were urgently needed, because many vendors were labeling their products as relational when they were not. The term relational had become somewhat polluted, and Codd wanted to fix and prevent this.

The study was a real trip down memory lane. It was a pleasure to reread all those articles and books written by Codd himself, and those by Chris Date on, for example, updatable views. The work they did then was brilliant. So much of what they wrote is, after so many years, still very true.

After studying Pervasive PSQL in detail, my verdict is that it scores a 10 (on a scale of 0 to 12). Nine rules are fully supported, two partially, and one not. Counting each partially supported rule as a half gives 9 + 2 × 0.5 = 10 points out of 12, so the overall conclusion is that PSQL is roughly 83% relationally complete. This is an excellent score and puts PSQL among the most relational products.

Is it possible to be 100% relational? The answer is yes. Such products can be developed. In fact, there is one open source product that supports most of the rules: Alphora's DataPhor. However, the product is not (yet) a commercial success. In the same year when Codd introduced the twelve rules, he also wrote "No existing DBMS product that I know of can honestly be claimed to be fully relational, at this time." It looks as if this statement still holds for all the SQL products and probably for most database servers.

Note: Now that Pervasive and Actian have merged, maybe I should write a comparable paper for their Ingres and Vectorwise database servers, and see which one is the most relational product.


Posted April 15, 2013 7:40 AM