The Data Quality Program Life Cycle

Originally published September 18, 2006

In the past, when I was working as a software engineer, I was assigned to “bug duty”: reviewing reported software problems, tracking down the origin of each problem in the application’s source code and then suggesting a fix to the developers. The bug list was seemingly never-ending, but as we worked through it, two interesting, counterintuitive phenomena emerged. The first was that tracking down and fixing one reported bug often resulted in the correction of other problems reported to the bug list. The other was that even as existing issues were resolved, new issues began to crop up from the standard test suites.

Sitting back and thinking about this provides some insight into the process, and it ultimately suggests an interesting idea about planning any quality management program. There are good explanations for both of these results, and, as we will see, examining the life cycle of the quality management process should help in developing a winning argument for supporting these programs.

Let’s consider the first by-product, where fixing one problem resulted in other problems mysteriously disappearing. Apparently, even though more than one issue had been reported, they all shared the same root cause. Because the people reporting the issues understood only the application’s functionality (but did not have deep knowledge of how the underlying application was designed or how it worked), each issue was perceived to be different whenever its results or side effects differed. Yet when issues did share the same root cause, analyzing, isolating and eliminating the root cause of one failure also eliminated the cause of the others. The next time the tests were run, the issues that had shared that root cause no longer failed.

The second by-product is a little less intuitive. One would think that finding and fixing problems should result in fewer issues when, in fact, each fix initially resulted in a greater number of issues. What actually happened was this: fixing one reported problem enabled a test to run past the point of its original failure, allowing it to fail at some later point in the process. Of course, this (and every other newly uncovered) failure had to be reported to the bug list, which led to an even longer list of issues.

The most revealing phenomenon, though, was that eventually the rate of discovery of new issues stabilized and then decreased, while the elimination of issues continued to shorten the bug list. Because we had prioritized the issues based on their relative impact (loosely defined as “causing the biggest problems with the best customers”), as we eliminated problems, the severity of the remaining issues dropped significantly as well. At some point, the effort needed to research the remaining issues exceeded the value achieved by fixing them, and we were able to reduce the amount of time allotted to bug duty. This practical application of the Pareto principle demonstrated how recognizing the point of diminishing returns allowed for better resource planning while still reaping the greatest benefits.
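
To make that triage concrete, here is a minimal sketch (not from the original article) of impact-based prioritization with a diminishing-returns cutoff, written in Python with invented issue data: each open issue carries an estimated impact and an estimated research-and-fix effort, the list is worked in impact order, and work is deferred once the estimated effort exceeds the estimated value.

    # Hypothetical illustration of impact-based triage and a diminishing-returns cutoff.
    # The issue identifiers, impact scores and effort estimates are invented for the example.
    issues = [
        {"id": "BUG-101", "impact": 90, "effort": 10},
        {"id": "BUG-102", "impact": 60, "effort": 15},
        {"id": "BUG-103", "impact": 25, "effort": 20},
        {"id": "BUG-104", "impact": 8,  "effort": 30},
        {"id": "BUG-105", "impact": 3,  "effort": 25},
    ]

    # Work the list in order of impact, as the bug list above was prioritized.
    issues.sort(key=lambda issue: issue["impact"], reverse=True)

    # Fix an issue only while its estimated value exceeds the effort to research and repair it;
    # everything past that point is the diminishing-returns tail that can be deferred.
    fix_now = [issue for issue in issues if issue["impact"] > issue["effort"]]
    defer = [issue for issue in issues if issue["impact"] <= issue["effort"]]

    print("Fix now:", [issue["id"] for issue in fix_now])
    print("Defer:  ", [issue["id"] for issue in defer])

In this toy example the first three issues justify the effort, while the last two fall past the cutoff, which is where the time allotted to bug duty could be scaled back.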

Although this experience was based on software development, the process of analyzing data quality issues is very similar to the process of analyzing bugs, so there are some lessons to be learned for data quality management:

  1. Subjecting a process to increased scrutiny is bound to reveal significantly more flaws than originally expected;

  2. Initially, additional resources will be necessary to address the most critical issues;

  3. Eliminating the root cause of one problem will probably fix more than one problem, improving quality overall; and

  4. There is a point at which the resource requirement diminishes because the majority of the critical issues have been resolved.

These points suggest a valuable insight: there is a life cycle for a data quality management program. Initially, more individuals will need to focus a large part of their time on researching and reacting to problems, but over time fewer people will need to spend some of their time proactively preventing issues from appearing in the first place. In addition, as new data quality governance practices are pushed out across the organization, the time investment is diffused as well, further reducing the need for long-term dedicated resources.

Knowing that the resource requirements are likely to diminish over time may provide additional business justification to convince senior managers to support establishing a data quality program. Developing a program plan that incorporates this eventual reduction in staffing will depend on the ability to transform the organization from reactive to proactive with respect to data quality, a concept we’ll revisit in an upcoming article.
