

Blog: Dan E. Linstedt

Dan Linstedt

Bill Inmon has given me this wonderful opportunity to blog on his behalf. I like to cover everything from DW2.0 to integration to data modeling, including ETL/ELT, SOA, Master Data Management, Unstructured Data, DW and BI. Currently I am working on ways to create dynamic data warehouses, push-button architectures, and automated generation of common data models. You can find me at Denver University, where I serve on an academic advisory board for Masters students in I.T. I can't wait to hear from you in the comments of my blog entries. Thank you, and all the best; Dan Linstedt http://www.COBICC.com, danL@danLinstedt.com

About the author

Cofounder of Genesee Academy, RapidACE, and BetterDataModel.com, Daniel Linstedt is an internationally known expert in data warehousing, business intelligence, analytics, very large data warehousing (VLDW), OLTP, and performance and tuning. He has been the lead technical architect on enterprise-wide data warehouse projects and refinements for many Fortune 500 companies. Linstedt is an instructor at The Data Warehousing Institute and a featured speaker at industry events. He is a Certified DW2.0 Architect. He has worked with companies including IBM, Informatica, Ipedo, X-Aware, Netezza, Microsoft, Oracle, Silver Creek Systems, and Teradata. He is trained in SEI/CMMI Level 5, and is the inventor of the Matrix Methodology and the Data Vault data modeling architecture. He has built expert training courses, trained hundreds of industry professionals, and is the voice of Bill Inmon's blog at http://www.b-eye-network.com/blogs/linstedt/.

In BI we've seen the trend; it's been written about for over two years now. There's a war afoot across vendor land, between the software makers' best-of-breed solutions and the hardware vendors' scalable devices. Compliance and storage vendors have partnered up, as have security and storage. Lately, RDBMS vendors and storage vendors have partnered as well.

In the early days of data warehousing and BI we saw a split into best-of-breed software vendors and best-of-breed hardware devices. The market got hot - so hot it exploded, nearly died, and is being rebuilt as we speak. What will convergence look like over the next two years? What kinds of devices can we look forward to? What do these super-corporations need to learn or know to move forward?

Let's take a look at some of the vendors and what's happened over the past several years.

Take IBM: they've acquired numerous vendors, both hardware and software, to bolster a huge SOA and enterprise integration effort. The list is too large to mention in full, but here are a few of the products and companies they've bought: NUMA-Q (Sequent) for high-performance hardware, MPP distribution, parallelism, and partitioning; Informix XPS for large data sets, high-performance database engine practices, parallelism, and partitioning. They've integrated these two into IBM hardware and DB2 UDB to make it a super-powerful option for big data volumes and high-speed throughput. On another front, they've bought vendors like Ascential, AlphaBlox, and others to bolster IBM Data Integrator, WebSphere, and their SOA offerings.

IBM isn't the only vendor moving in these directions: Netezza has built an appliance, Sun and HP are entering (or are already actively competing in) this space, and Microsoft has begun similar initiatives.

Back to convergence: today's appliances do not offer "everything" they could to the enterprise. We are still left to buy bits and pieces and integrate them ourselves to make the enterprise vision work. (I still remember being told that "there will never be such a thing as an enterprise view, because it's too hard to get everyone to agree on what that means.") However, let's take a look at what makes the appliances so appealing.

You can buy an RDBMS device and not "worry" about data modeling, or about managing, maintaining, or growing the system - it's plug in, load, and go, taking snapshots of existing data sets as they stand. With security devices, it's the same story: plug and go, with built-in firewalls, data mining concepts, real-time hack alerts, web interfaces, and management reports, along with web-site service updates. With compliance devices, again, it's the same story: plug and go, get snapshots of the before-and-after data changes, and find out when and who accessed what - down to the IP packet level if desired.
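The compliance behavior described above - capturing before-and-after snapshots of data changes along with who touched them and when - can be sketched in a few lines. This is purely a conceptual illustration, not any vendor's actual implementation; every class and field name here is invented:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ChangeRecord:
    """One before/after snapshot of a row change, with who and when."""
    table: str
    key: str
    before: dict
    after: dict
    accessed_by: str
    source_ip: str
    at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

class ComplianceLog:
    """Append-only log of data changes, queryable by who/when/what."""
    def __init__(self):
        self._records = []

    def capture(self, table, key, before, after, user, ip):
        self._records.append(ChangeRecord(table, key, before, after, user, ip))

    def who_changed(self, table, key):
        """Return (user, ip) pairs that touched a given row."""
        return [(r.accessed_by, r.source_ip)
                for r in self._records if r.table == table and r.key == key]

log = ComplianceLog()
log.capture("customer", "42", {"credit": 1000}, {"credit": 5000},
            user="jdoe", ip="10.0.0.7")
print(log.who_changed("customer", "42"))  # [('jdoe', '10.0.0.7')]
```

A real compliance appliance would of course capture this at the network or storage layer rather than through an application API, but the data it retains looks much like these records.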

What I predict is the continued merging of software and hardware vendors (or, at the very least, partnerships). Software vendors offering best-of-breed products will begin to produce "firmware" plug-and-play updatable cards that fit into the backplanes of pre-engineered systems. These systems will include high-speed, tuned I/O, data placement optimization, an appliance-like look and feel, and integration between multiple software vendors' cards across the backplane.

In the future, we will receive firmware updates rather than software updates, and will probably be purchasing customized hardware cards instead of CDs, tailored to meet the needs of the integrated enterprise. If I were to guess, I would say that the following categories of firmware cards will be made:

1. Information Integration cards, handling both real-time and batch loading, backup, extract, restore of information within the system.
2. Data Mining and statistical analysis cards, handling Information Quality, Metrics measurement, data testing, validation, imputing values (profiling and cleansing), and alerting or triggering mechanisms.
3. Web Access front-end cards handling graphical interfaces, user access layers, data distribution, and additional configuration/administration features.
4. SOA / Web Services card, handling the web-services responsibilities.
5. Security/Compliance Card with extended features for replication of "Compliance" based data sets - such that all other cards run through the security layer, providing single logon, and other key features.
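The card categories above, with everything routed through the security layer for single logon, could be modeled like this. It's a hypothetical sketch of the architecture, not a real product's API - all names and interfaces are invented for illustration:

```python
class Card:
    """Base class for a firmware card plugged into the backplane."""
    name = "base"
    def handle(self, request):
        raise NotImplementedError

class SecurityCard(Card):
    """Category 5: all other cards run through this layer (single logon)."""
    name = "security"
    def __init__(self, valid_tokens):
        self.valid_tokens = valid_tokens
    def authorize(self, request):
        return request.get("token") in self.valid_tokens

class IntegrationCard(Card):
    """Category 1: real-time and batch loading of information."""
    name = "integration"
    def handle(self, request):
        return f"loaded batch {request['batch_id']}"

class Backplane:
    """Cards share the backplane; every dispatch passes security first."""
    def __init__(self, security):
        self.security = security
        self.cards = {}
    def plug_in(self, card):
        self.cards[card.name] = card
    def dispatch(self, card_name, request):
        if not self.security.authorize(request):
            return "denied"
        return self.cards[card_name].handle(request)

bp = Backplane(SecurityCard(valid_tokens={"abc123"}))
bp.plug_in(IntegrationCard())
print(bp.dispatch("integration", {"token": "abc123", "batch_id": 7}))
print(bp.dispatch("integration", {"token": "bad"}))  # denied
```

The design point is that no card is reachable except through the security card, which is exactly the "single logon across all cards" property described in category 5.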

The hardware vendor of the initial device will handle high-speed networking, fail-over, compression/decompression, and plug-and-play (supporting either grid computing environments or MPP shared-nothing environments).

This, I believe, is the future device. While software will never disappear completely (given the relative ease with which it can be created compared to hardware), the mature products should find their way onto integrated-circuit cards.

What's the value add for the software makers to bear the extra expense?
1. No "copies" of the software can be pirated; the hardware card itself would have to be duplicated with replicating hardware.
2. The cost of hardware manufacturing will continue to drop as nanotech encroaches on existing IC technology (driving nanotech/hardware engineering costs up and existing IC engineering costs down).
3. Strength in partnerships across multiple best-of-breed providers.
4. Higher performance across the board with dedicated hardware options.
5. Licensing issues across "dual-core" vs. "single-core" will disappear.
6. Happier customers with a plug-and-play environment.

What will the customer get from this?
1. Firmware updates instead of software updates
2. Appliance device bundle purchases (pre-configured, pre-tested, plug-and-play)
3. Better SMB support
4. Better cross-integration
5. Super high speed devices
6. Lower maintenance cost, lower support costs
7. Better vendor support

Do you think this is possible? If not, why not?

Cheers for now,
Dan L


Posted July 15, 2005 10:55 AM