
Blog: William McKnight

William McKnight

Hello and welcome to my blog!

I will periodically be sharing my thoughts and observations on information management here in the blog. I am passionate about the effective creation, management and distribution of information in service of company goals, and I'm thrilled to be a part of my clients' growth plans and to connect what the industry provides to those goals. I have played many roles, but the perspective I come from is benefit to the end client. I hope the entries can be of some modest benefit to that goal. Please share your thoughts and input on the topics.

About the author

William is the president of McKnight Consulting Group, a firm focused on delivering business value and solving business challenges utilizing proven, streamlined approaches in data warehousing, master data management and business intelligence, all with a focus on data quality and scalable architectures. William functions as strategist, information architect and program manager for complex, high-volume, full life-cycle implementations worldwide. William is a Southwest Entrepreneur of the Year finalist, a frequent best-practices judge, has authored hundreds of articles and white papers, and given hundreds of international keynotes and public seminars. His team's implementations from both IT and consultant positions have won Best Practices awards. He is a former IT Vice President of a Fortune company, a former software engineer, and holds an MBA. William is author of the book 90 Days to Success in Consulting. Contact William at wmcknight@mcknightcg.com.

Editor's Note: More articles and resources are available in William's BeyeNETWORK Expert Channel. Be sure to visit today!

August 2009 Archives

Netezza's big technology news this week came with an unexpected price drop for the technology.  Whereas Netezza customers to date have paid around $60,000 per terabyte of storage, Netezza's new TwinFin appliance will go for $20,000 per terabyte.  That figure assumes a 2.25x compression ratio, which Netezza says is typical and will improve, so figure the physical disk actually consumed is a little less than half of that; still, the practical impact of the new price point stands.
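The back-of-the-envelope math behind that "a little less than half" remark can be sketched as follows (figures are the ones quoted in this post; actual compression will vary by workload):

```python
# Hypothetical TwinFin pricing arithmetic using the figures above.
list_price_per_user_tb = 20_000   # dollars per terabyte of user data
compression_ratio = 2.25          # Netezza's claimed typical ratio

# Physical disk consumed per terabyte of user data:
physical_tb_per_user_tb = 1 / compression_ratio
print(f"{physical_tb_per_user_tb:.2f} physical TB per user TB")  # ~0.44

# Equivalently, the effective price per terabyte of raw disk:
price_per_physical_tb = list_price_per_user_tb * compression_ratio
print(f"${price_per_physical_tb:,.0f} per physical TB")  # $45,000
```

In other words, a terabyte of user data occupies roughly 0.44 TB of raw disk at the claimed ratio, which is the "less than half" in the paragraph above.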

In addition to the price drop, the upper limit has been expanded to, depending on whom you speak with, 700 terabytes or 1 petabyte.  Either way, it's a big leap and a huge amount of storage that's now possible with Netezza.

Making this all possible are some forklift upgrades and tweaks to the underlying technology.  First and foremost is the switch from the Hitachi drives with 2- or 4-way HP/Intel host CPUs to Intel-based IBM blade servers.  Netezza is taking advantage of the faster chips, bigger disks and better interconnects that have come to market in recent years.  It has also introduced a cache, which will improve the access performance of commonly accessed tables and sections of tables.

The field programmable gate array (FPGA) remains very important in the architecture.  However, the disk-controlling function has been moved off the FPGA to a dedicated disk controller.

I wrote a description of Netezza technology some time ago that is worth revisiting with regard to the FPGA:

"The architecture is shared-nothing, but there is a major twist.  The I/O module is placed adjacent to the CPU. The disk is directly attached to the SPU processing module.  More importantly, logic is added to the CPU with a Field Programmable Gate Array (FPGA) that performs record selection and projection, processes usually reserved for much later in a query cycle in other systems.  The FPGA and CPU are physically connected to the disk drive.  This is the real key to Netezza's query performance success - filtering at the disk level.  This logic, combined with the physical proximity, creates an environment that moves data the least distance to satisfy a query.  The SMP host performs final aggregation and any merge sort required.  Enough logic is currently in the FPGA to make a real difference in the performance of most queries."
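To make "filtering at the disk level" concrete, here is a conceptual sketch (not Netezza code) of why pushing selection (row filtering) and projection (column pruning) down to the storage layer moves less data upstream than shipping whole rows to the host:

```python
# Conceptual illustration of selection and projection pushed down to the
# storage layer, as the FPGA does.  Rows, columns and values are invented
# for the example.
rows = [
    {"id": 1, "region": "EAST", "amount": 120, "notes": "..."},
    {"id": 2, "region": "WEST", "amount": 340, "notes": "..."},
    {"id": 3, "region": "EAST", "amount": 560, "notes": "..."},
]

def scan_with_pushdown(rows, predicate, columns):
    """Filter and trim each record at the 'disk' before it travels upstream."""
    for row in rows:
        if predicate(row):                       # selection at the source
            yield {c: row[c] for c in columns}   # projection at the source

# Only matching rows, and only the needed columns, leave the storage layer;
# the host is left with final aggregation and any merge sort.
result = list(scan_with_pushdown(rows,
                                 lambda r: r["region"] == "EAST",
                                 ["id", "amount"]))
print(result)  # [{'id': 1, 'amount': 120}, {'id': 3, 'amount': 560}]
```

The design point is distance: the less data that has to cross the interconnect to the SMP host, the faster the query, which is exactly the effect the FPGA's early selection and projection is after.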

The Intel adoption, as well as going all-Linux, makes other software more compatible with Netezza.  Obviously, one of Netezza's aims is to bring over applications from other DBMSs - appliance and non-appliance alike.

The lowered price point is actually quite important in this rapidly commoditizing field.  And data size is actually a good barometer for price comparison, since once you get into the terabytes with an enterprise data warehouse, the workload tends to mix in similar ways across enterprises.  For those high-data but specific-use workloads, Netezza will have a high-capacity model available soon.  As well, Netezza intends to deliver entry-level and "memory intensive" models.  This strategy is not dissimilar to Teradata's appliance line, already available and at around Netezza's new price points.

This is a very good signal from Netezza - that it is still investing and intends to pursue price/performance for its customers.  At a time when major players like Teradata, with a longer pedigree and half the Global 2000 as customers, have entered the appliance market, and with Microsoft's looming Madison, something was necessary from Netezza.  The question is whether Netezza will be able to make up for the price drop with significantly more volume in this space it essentially pioneered.


Posted August 5, 2009 9:54 PM

