One question they had was related to their development approach: "Would it make sense to design the topmost layer of our virtual tables as closely as possible to what the reports need? Then, create tables in Microsoft Access with the same table structure as those virtual tables and let the report programmers start building reports against these Microsoft Access tables. Meanwhile, the core team designs the data virtualization model (mapping the virtual tables to the real data sources). This way, the two activities, (1) programming of reports and (2) building of the virtualization model, can be executed in parallel. In two months, when both activities are complete, the two streams meet, and we switch the reports to point to the virtual tables rather than to the Microsoft Access tables. Is this a sound approach, or are we stretching it too far?"
My direct response was: "No, you're not stretching it at all; I think you're getting it." I added that the only issue might be the minor differences that can exist between the SQL dialects of the data virtualization server and Microsoft Access.
Their response: "Good point. We think this can be overcome. We can create the Microsoft Access tables, use Access as a data source for the data virtualization server, and create virtual tables that correspond 1:1 to the tables in Access. Next, we have the programmers code reports against those virtual tables that point to the Access tables. Finally, when the reports are ready, we redirect the virtual tables to the real data sources."
This is clearly the preferred approach, because the reports always access the same virtual tables, even after the switch is made from Microsoft Access to the real data sources. The redirecting of the virtual tables is completely transparent to the reports, and they will run unchanged. In addition, redirecting involves almost no work at all.
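The redirection idea can be illustrated with a small, self-contained sketch. This is not how a real data virtualization server (or Access) works internally; it simply uses SQLite views as a stand-in for virtual tables, and two tables (`access_sales` and `prod_sales`, both hypothetical names) as stand-ins for the Access stub and the real data source. The point is that the report's query text never changes; only the view definition behind it does.

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()

# Stand-in for the Microsoft Access stub table the report developers code against.
cur.execute("CREATE TABLE access_sales (region TEXT, amount REAL)")
cur.execute("INSERT INTO access_sales VALUES ('EMEA', 100.0)")

# Stand-in for the real production data source, delivered later by the core team.
cur.execute("CREATE TABLE prod_sales (region TEXT, amount REAL)")
cur.execute("INSERT INTO prod_sales VALUES ('EMEA', 250.0)")

# The 'virtual table' the reports query; initially mapped to the Access stub.
cur.execute("CREATE VIEW v_sales AS SELECT region, amount FROM access_sales")

# The report: written once, never edited again.
report_query = "SELECT region, SUM(amount) FROM v_sales GROUP BY region"
print(cur.execute(report_query).fetchall())   # [('EMEA', 100.0)]

# The switch: remap the virtual table to the real source. The report is untouched.
cur.execute("DROP VIEW v_sales")
cur.execute("CREATE VIEW v_sales AS SELECT region, amount FROM prod_sales")
print(cur.execute(report_query).fetchall())   # [('EMEA', 250.0)]
```

In a data virtualization server, the "drop and recreate" step corresponds to editing the mapping of the virtual table, which is exactly why the reports run unchanged after the switch.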
Just to be clear, I am not recommending Microsoft Access as the preferred platform for developing virtual tables, but I like how this customer is thinking about how to use the power of data virtualization servers to come up with a very efficient and agile development approach. Because data virtualization servers decouple the real data sources from the reports, changes (even drastic ones) can be made to the data sources without having to change the reports, or vice versa. In fact, this is why it's often said that data virtualization makes business intelligence systems more agile.
Note: For more information on data virtualization, I refer to my new book "Data Virtualization for Business Intelligence Systems" available from Amazon.
Posted September 14, 2012 2:09 AM