
Blog: William McKnight

William McKnight

Hello and welcome to my blog!

I will periodically be sharing my thoughts and observations on information management here in the blog. I am passionate about the effective creation, management and distribution of information for the benefit of company goals, and I'm thrilled to be a part of my clients' growth plans and to connect what the industry provides to those goals. I have played many roles, but the perspective I come from is benefit to the end client. I hope the entries can be of some modest benefit to that goal. Please share your thoughts and input on the topics.

About the author

William is the president of McKnight Consulting Group, a firm focused on delivering business value and solving business challenges utilizing proven, streamlined approaches in data warehousing, master data management and business intelligence, all with a focus on data quality and scalable architectures. William functions as strategist, information architect and program manager for complex, high-volume, full life-cycle implementations worldwide. William is a Southwest Entrepreneur of the Year finalist, a frequent best-practices judge, has authored hundreds of articles and white papers, and given hundreds of international keynotes and public seminars. His team's implementations from both IT and consultant positions have won Best Practices awards. He is a former IT Vice President of a Fortune company, a former software engineer, and holds an MBA. William is author of the book 90 Days to Success in Consulting. Contact William at wmcknight@mcknightcg.com.


June 2011 Archives

One of my favorite blog entries was the one about the relational data page.  In that entry, I talked about how much of the space allocated to a database goes to page formatting.  Some people agreed, but pointed out that much storage is also dedicated to index pages.  And they are correct - it depends on your index strategy.  If you add up all the index sizes on some tables, the total can exceed the row size itself, in which case the index pages would likewise outnumber the data pages in the database.  So what are index pages, and what do they look like?

There are two basic page formats for an index page, which, like the data page, comes in user-selectable sizes.  I'll repeat my earlier admonition that knowing what goes on at the page level will help you better understand how your decisions affect your performance.

I'll start with the most prominent format - that of the leaf page.  The leaf page contains a series of key/RID(s) combinations (called entries).  The key is the actual value of the column.  RID stands for row/record ID and consists of the data page number and the record number within the page.  The RID was explained in this post.  The RID is how the index is connected to the data pages.  The RID for the 3rd record on the 123rd data page would be "123-3".  If the record there was a customer record for Brad Smith, who lives in Texas, and there was an index on state, the key/RID combination would be "Texas-(123-3)".

Naturally, you would have multiple customers who live in Texas, so there would be multiple RIDs in the state index associated with Texas.  It might look like "Texas-(123-3),(123-4),(125-6),(125-19),(127-10), etc.".  Any index key that shows up multiple times in the table would have multiple RIDs.  A unique index would have only one RID associated with each key.
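A toy sketch of a leaf-page entry might help picture this. This is my own simplified model, not DB2's actual on-disk format: each entry maps a key value to the list of (page, record) RIDs holding that value.

```python
# Simplified model of leaf-page entries: key -> list of RIDs (page, record).
# Values are taken from the Texas example above; this is not DB2's real layout.
leaf_entries = {
    "Texas": [(123, 3), (123, 4), (125, 6), (125, 19), (127, 10)],
}

def rids_for(key):
    """Return every (page, record) location for rows matching `key`."""
    return leaf_entries.get(key, [])

print(rids_for("Texas"))  # five RIDs -> five data-page locations for Texas rows
```

A unique index would simply have a one-element RID list for each key.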

Successive entries in an index would not be in order except for the one clustered index on the table.  For example, an index on last name could have entries of:

Chamberlin-(234-2),(234-5),(336-3)
Chambers-(67-9)
This is NOT a clustered index. If it were, the RIDs would be in numerical order across entries.  Most indexes are non-clustered, and it is normal for the RIDs to jump all around the table.  If you navigate quickly to the Chambers entry, data page 67, record 9 is where you would find "the rest of the record".  This is excellent for a query like "Select * from table where lastname = 'Chambers'".

But what about that navigation?  That comes from the other index page format - creatively called the non-leaf page.  The non-leaf pages contain the key ranges of the leaf pages so that an index navigation, which always begins at the root page, can quickly reach the correct leaf page.  That is the function of the non-leaf pages.

In practice, these ranges quickly fill up a single index page (of a few "K" bytes), at which point the entries are split into two non-leaf pages and a "root" page now points to the ranges on those pages.  Eventually the root page fills up, splits again, and a new level is created.  The more levels, the more non-leaf page accesses to get to the leaf page.  All index access involves getting to the leaf page.  These non-leaf pages are mostly kept in memory.
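The root-to-leaf navigation described above can be sketched in a few lines. This is a hedged illustration with made-up page layouts (tuples standing in for pages); the key values and RIDs come from the examples in this post, and the ranges here are high keys per child rather than DB2's real non-leaf entry format.

```python
# Toy B-tree navigation: a non-leaf page holds (high_key, child_page) pairs;
# a leaf page holds (key, rid_list) entries. Not DB2's actual structures.
leaf1 = ("leaf", [("Adams", [(12, 1)]), ("Chambers", [(67, 9)])])
leaf2 = ("leaf", [("Martin", [(88, 2)]), ("Smith", [(123, 3)])])
root = ("nonleaf", [("Chambers", leaf1), ("Smith", leaf2)])

def search(page, key):
    kind, entries = page
    if kind == "nonleaf":
        # descend into the first child whose high key covers the search key
        for high_key, child in entries:
            if key <= high_key:
                return search(child, key)
        return search(entries[-1][1], key)  # beyond all ranges: last child
    for k, rids in entries:                 # leaf page: scan for the key
        if k == key:
            return rids
    return []

print(search(root, "Chambers"))  # [(67, 9)] -> data page 67, record 9
```

Each added level in the real tree means one more page access on the way down, which is why the non-leaf pages being memory-resident matters so much.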

That's the index pages and process in a tiny nutshell.  It sets us up for talking about what DB2 is doing with indexes in 9.7.  They're compressing them.

Compression in general is another way to circumvent the I/O problem.   When compression is spoken about, it is almost always table, not index, compression.  In DB2 9.7, however, there are two forms of compression happening in the index leaf pages.  One applies to the RID list.

As you may have noticed above, RIDs are in sequence within a key.  Instead of storing a full RID (which for many indexes is actually 6 bytes - 4 for the page number and 2 for the record number), DB2 9.7 can store the first RID in full and then the offsets - both page and record number - for each successive RID for the key.

For example, the Chamberlin entry would be "Chamberlin-(234-2),(3),(102,-2)".  The (3) represents adding 3 to the record number of the previous RID (while leaving the page number the same).  The (102,-2) represents adding 102 to the previous page number and subtracting 2 from its record number. This small example may not look like much savings, but consider that the 234 is really stored as 4 bytes while the (3) is stored as only 1.  And the record number, usually stored as 2 bytes, is replaced by -2, stored as one byte.  Internal bits help DB2 understand whether the byte it's looking at is a key, a RID, a compressed page number or a compressed record number.
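The delta idea can be shown end to end. This is a sketch of the concept under my own encoding (tuples of deltas, not DB2 9.7's actual bit layout): store the first RID in full, then (page delta, record delta) pairs for the rest.

```python
# RID-list delta compression sketch: keep the first (page, record) RID whole,
# store each later RID as offsets from the one before it.
def compress_rids(rids):
    first = rids[0]
    deltas = []
    prev_page, prev_rec = first
    for page, rec in rids[1:]:
        deltas.append((page - prev_page, rec - prev_rec))
        prev_page, prev_rec = page, rec
    return first, deltas

def decompress_rids(first, deltas):
    out = [first]
    page, rec = first
    for dpage, drec in deltas:
        page, rec = page + dpage, rec + drec
        out.append((page, rec))
    return out

# The Chamberlin example: (234-2), then +3 records, then +102 pages / -2 records
first, deltas = compress_rids([(234, 2), (234, 5), (336, 3)])
print(first, deltas)                   # (234, 2) [(0, 3), (102, -2)]
print(decompress_rids(first, deltas))  # round-trips to the original RID list
```

The savings come from the storage widths, which the sketch doesn't model: a small delta fits in one byte where a full page number takes four.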

This obviously works best for indexes with many repeated values and long RID lists.

The other innovation in index compression in 9.7 is called Prefix Compression, which applies to the key itself.  DB2 essentially "bit maps" all the common prefixes on a leaf page in the header section of the leaf page.  So these last name values:

Chamberlin
Chambers

with a common prefix of "Chamb" could be reduced to:

erlin
ers

within the leaf page.

For keys like these, with densely sequential values and common prefixes, this is going to pack more, potentially a lot more, index keys onto a leaf page.  Many multi-column indexes have lots of repeated/common values in the higher-level columns of the index.  These would be compressed out with Prefix Compression. The upshot is more keys retrieved per dreaded I/O.
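The prefix idea can be sketched in a couple of lines. This is my own simplified scheme, not DB2's header bitmap: factor the longest common prefix of a page's keys out once and store only the suffixes in the entries.

```python
# Prefix-compression sketch: store the shared prefix once per "page"
# and keep only the suffixes per key. Names are from the example above.
import os

def prefix_compress(keys):
    prefix = os.path.commonprefix(keys)
    return prefix, [k[len(prefix):] for k in keys]

prefix, suffixes = prefix_compress(["Chamberlin", "Chambers", "Chambliss"])
print(prefix, suffixes)  # Chamb ['erlin', 'ers', 'liss']
```

The shorter each stored key, the more entries fit on a leaf page, which is exactly where the I/O savings come from.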

Indexes will automatically be compressed on any table that is compressed.

Indexing is mightily important.  So is compression.  These two techniques are now coming together to provide more capabilities for database workload performance.

Posted June 15, 2011 7:31 AM

