

Blog: David Loshin

David Loshin

Welcome to my BeyeNETWORK Blog. This is going to be the place for us to exchange thoughts, ideas and opinions on all aspects of the information quality and data integration world. I intend this to be a forum for discussing changes in the industry, as well as how external forces influence the way we treat our information asset. The value of the blog will be greatly enhanced by your participation! I intend to introduce controversial topics here, and I fully expect that reader input will "spice it up." Here we will share ideas, vendor and client updates, problems, questions and, most importantly, your reactions. So keep coming back each week to see what is new on our Blog!

About the author

David is the President of Knowledge Integrity, Inc., a consulting and development company focusing on customized information management solutions including information quality solutions consulting, information quality training and business rules solutions. Loshin is the author of The Practitioner's Guide to Data Quality Improvement, Master Data Management, Enterprise Knowledge Management: The Data Quality Approach, and Business Intelligence: The Savvy Manager's Guide. He is a frequent speaker on maximizing the value of information. David can be reached at loshin@knowledge-integrity.com or at (301) 754-6350.

Editor's Note: More articles and resources are available in David's BeyeNETWORK Expert Channel. Be sure to visit today!

I recently came across a curious overloaded use of a database table attribute: one column, called "Verification Status Code," contained a code indicating the result of a process of verifying the connection between a customer identification number and a supplied customer name. The attribute took on values such as:

"The customer identifier and name were correctly verified as identical to our records"
"A corrected identifier was provided for the supplied customer name"
"The customer identifier and name were matched using the ALPHA process"
"The customer identifier and name were matched using the BETA process"
"The customer identifier and name could not be verified"

Apparently, the codes indicate two pieces of information. The first is whether the name and identifier were correctly verified within the system, and the second is which process was used to verify them. This suggests an embedded business rule in the application: it first checks whether the code is one that indicates verified data, and then it performs different actions based on which process was used.
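To make the overloading concrete, here is a minimal sketch of how a consuming application ends up unpacking such a code. The code values (VER_EXACT, VER_CORRECTED, VER_ALPHA, VER_BETA, VER_FAILED) and the branch actions are hypothetical stand-ins, not the actual system's encodings; the point is that a single column forces every consumer to re-derive two independent facts before it can act.

# Hypothetical decoding of the overloaded "Verification Status Code" column.
# Each code carries two independent facts: (1) whether the customer id/name
# pair was verified, and (2) which process produced the verification.

OVERLOADED_CODES = {
    "VER_EXACT":     {"verified": True,  "process": None},     # identical to our records
    "VER_CORRECTED": {"verified": True,  "process": None},     # corrected identifier supplied
    "VER_ALPHA":     {"verified": True,  "process": "ALPHA"},  # matched via ALPHA process
    "VER_BETA":      {"verified": True,  "process": "BETA"},   # matched via BETA process
    "VER_FAILED":    {"verified": False, "process": None},     # could not be verified
}

def decode(status_code):
    """Split one overloaded code into the two facts it actually represents."""
    entry = OVERLOADED_CODES[status_code]
    return entry["verified"], entry["process"]

def handle(status_code):
    """The embedded business rule: branch first on the verification result,
    then on the process that produced it."""
    verified, process = decode(status_code)
    if not verified:
        return "route to manual review"
    if process == "ALPHA":
        return "apply ALPHA post-processing"
    if process == "BETA":
        return "apply BETA post-processing"
    return "accept record as-is"

Every program that touches the table has to carry its own copy of this mapping, which is exactly why the pattern is worth cataloguing as a rule class.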

Anyone have any other experiences with this kind of overloading? Let me know - I will add this as a rule class to my business rule-based data quality techniques. Email me at loshin@knowledge-integrity.com.


Posted November 22, 2005 12:34 PM

2 Comments

I'm quite familiar with this and similar overloading techniques. I believe they are common in the programming of legacy systems and have usually been implemented to avoid redesign or because the designer believes that it will be more efficient to store fewer values associated with some sort of process or transaction audit record. The case is usually that it creates headaches for the programmer and gross inefficiencies when processing this audit data for reporting or ETL purposes.
My question is about the scope of your data quality practice: is it within your purview to track the perceived or possible impact of such data quality issues and to make design-level (re-engineering) recommendations for mitigating the risk of such circumstances?

The question is more about whether the reference data set in question is one that should be managed as an enterprise asset, in which all of the managers of the applications that use the value domain agree on the meaning associated with that value domain (and associated encodings). Since that does impact data quality, the simple answer is yes: there should be design-level recommendations when reengineering to ensure that all master reference data is subject to a data standards process.
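To illustrate what such a design-level recommendation might look like (a hypothetical sketch, not a prescribed design; all names are invented), the reengineered record could carry the two facts in separate attributes, each drawn from a value domain that the application owners have agreed to and that a data standards check can enforce:

# Sketch of the reengineered, standards-managed version of the same data.
# The two facts live in separate attributes, and each attribute draws its
# values from an enterprise-agreed value domain (names are hypothetical).

from enum import Enum

class VerificationResult(Enum):      # value domain 1: outcome of verification
    VERIFIED = "VERIFIED"
    CORRECTED = "CORRECTED"
    NOT_VERIFIED = "NOT_VERIFIED"

class VerificationProcess(Enum):     # value domain 2: process that produced it
    EXACT_MATCH = "EXACT_MATCH"
    ALPHA = "ALPHA"
    BETA = "BETA"
    NONE = "NONE"

def conforms(record):
    """Data standards check: both attributes must come from the agreed domains."""
    try:
        VerificationResult(record["verification_result"])
        VerificationProcess(record["verification_process"])
    except (KeyError, ValueError):
        return False
    return True

# Example: the old overloaded "matched using the ALPHA process" code becomes
print(conforms({"verification_result": "VERIFIED", "verification_process": "ALPHA"}))  # True

With the value domains managed centrally, an application that needs a new code has to go through the data standards process rather than overloading an existing column.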
