


Ok, now that we've introduced the concept, let's walk through some examples of complex business processes and dirty data, and find out what we can do about starting to solve some of these problems. Furthermore, let's explore the real issue of "broken" business processes. Do you have some of these in your organization?

So profitability is tied to the complexity of business processes, compounded by dirty data and too much manual intervention. What exactly does this look like?

Here's an example:
Suppose a customer calls Sales, and says: I would like product X with the following configurations: CA, CB, CC. Sales begins tracking the customer, captures some information (hopefully not fat-fingered) about the customer and their contact point, along with the product and configuration. The customer is then assigned an account number: SLS123.

The customer wants to know approximately when this will be built and shipped, or whether there are ways for them to track the product through its build cycle. The business says: well, we can only track it once it's shipped to you, and we can't estimate its cost or its build time until we have designed the custom parts. The customer says: fair enough, when will you have a design complete? Sales says: can we get back to you in a week?

Ok - Sales has the customer contact; they qualify the lead through a number of manual intervention processes before passing it off to Finance. Finance takes SLS123 and changes the account number to FIN123. Now I ask you: is there any traceability in this simple example across Sales and Finance at a corporate level? No - not unless someone in Finance or Sales records the customer account number change (from/to).
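To make the traceability gap concrete, here is a minimal sketch in Python (the names and structure are hypothetical, not taken from any particular system) of the kind of key cross-reference record that would have to exist for the SLS123-to-FIN123 change to remain traceable:

    from dataclasses import dataclass
    from datetime import datetime

    @dataclass
    class KeyChange:
        """One recorded re-keying event in the data supply chain."""
        old_key: str      # account number before the hand-off, e.g. "SLS123"
        new_key: str      # account number after the hand-off, e.g. "FIN123"
        department: str   # business unit that issued the new key
        changed_at: datetime

    # The crosswalk log: without entries like these, Sales and Finance
    # cannot be joined back to the same customer at the corporate level.
    crosswalk = [
        KeyChange("SLS123", "FIN123", "Finance", datetime(2005, 5, 1)),
    ]

    def trace(key, log):
        """Follow a key forward through every recorded change."""
        chain = [key]
        for change in log:
            if change.old_key == chain[-1]:
                chain.append(change.new_key)
        return chain

    print(trace("SLS123", crosswalk))  # ['SLS123', 'FIN123']

If no such record is kept anywhere, the corporate-level join simply cannot be made - which is exactly the break described above.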

Finance runs it through its paces, approves financial lending, and then passes it off to Contracts, who run it through a series of complex business processes with manual intervention. By the way, Contracts changes the account number from FIN123 to CON456. The customer finally gets a call three weeks later stating that a contract is ready for them to sign. But before Contracts can give a delivery date, they need Planning to run the manufacturing phase through their systems - so off it goes.

Another two weeks pass, and Planning returns to Contracts with an estimated build plan and date. We're already five weeks from initial contact - and by the way, the customer has put the same bid in to our competitors. Three weeks ago, our competitor returned the bid and build ETA to the customer. We call the customer back, and they say: sorry, your competitor won the bid. We lose $300 million.

What happened? Our complex business process has not been optimized or streamlined. There were unnecessary hand-offs between manual interventions and separate business units on the way to winning the business. Imagine if Sales were empowered to a) check financial standing, b) run the contract up against previous builds of a similar nature (data mining with confidence levels), and c) run it by a financial analyst and a contracts approval individual - all within two days - and then return to the customer.

This would a) make for a more profitable business, b) make contracts cheaper to handle and financials cheaper to approve, c) single out contracts that are too difficult, outside our sweet spot, or specialized enough to warrant higher prices, and d) make us highly nimble and competitive.

In order to get there, we must a) reduce the number of touch points on the data, b) utilize data mining tools in an active warehouse to enable insight at the sales contact level, and c) simplify and streamline the business processes between customer contact, estimation, finance, and contracts approval - which means cycle time reduction and business process critical path analysis.

Think of the business processes - both mechanical data touch points and manual data touch points - as a graph of 2D lines (x, y coordinates). The complexity of the process going from A to B is the rise over run, the Y coordinate; the X coordinate is the process number. Graph the business processes as best you can, then analyze the graph for the critical path, attempting to eliminate touch points and reduce the complexity of the business processes (reducing the Y) so that you end up with as "straight a line as possible."
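As a rough sketch of that analysis (the touch-point names and complexity scores below are hypothetical - any consistent scoring scale works), here is one way to compute the cumulative complexity line and flag the steepest segments in Python:

    # Hypothetical complexity scores (the "rise") for each touch point,
    # in process order (one step of "run" per touch point).
    touch_points = [
        ("Sales capture",      2),
        ("Lead qualification", 5),  # manual intervention
        ("Finance re-key",     8),  # key change: SLS123 -> FIN123
        ("Contracts re-key",   8),  # key change: FIN123 -> CON456
        ("Planning estimate",  4),
    ]

    # Cumulative complexity per process number: a streamlined process
    # plots as close to a straight, shallow line as possible.
    cumulative, total = [], 0
    for name, complexity in touch_points:
        total += complexity
        cumulative.append((name, total))

    # The steepest segments are the first candidates to eliminate.
    steepest = sorted(touch_points, key=lambda tp: tp[1], reverse=True)[:2]
    print("Cumulative complexity:", cumulative)
    print("Eliminate or simplify first:", [name for name, _ in steepest])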

Keep in mind that changing keys to information doubles complexity, even if the changes are recorded. I think you'll be delightfully surprised: companies that undertake this effort can save millions of dollars a year at half the investment. Furthermore, it drives quality up, profitability up, complexity down, and overhead down, and it speeds up time to deliver. The result? More satisfied customers and a more nimble business.

Now let's take a look at the dirty data problem (which we'll explore further in Part 3). The first problem is that we need an enterprise view of this customer, even if it has to span business SECTORS, not just companies within those sectors. This is the ONLY way to roll up a single customer and pinpoint exactly where their deliveries are within the entire organization. Sometimes this is referred to as the Data Supply Chain (Jill Dyche, Baseline Consulting, TDWI 2005).

What if we kept the SAME customer account number throughout all processes? We could pinpoint exactly where in the data supply chain their application is, and we could begin tracking and monitoring (metrics, KPA/KPI) the efficiency of the business process. Ahh, you say, we have that in place! Ok - but what happens when you re-bill a customer? Do your systems change the invoice number? It's the same problem, different data.

Paradigm Rule #1:
KEYS to information within the organization must remain consistent over time.

So business keys are extremely important as a starting metric for business profitability. If you start by pinpointing the places where keys change throughout the business, you can begin identifying major breaks in the data supply chain.
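As a minimal sketch of that starting metric (the hand-off log below is a hypothetical structure), count how many times the business key changes per hand-off and rank the results; each change is a candidate break in the data supply chain:

    from collections import Counter

    # Hypothetical hand-off log: (process, key_before, key_after).
    handoffs = [
        ("Sales -> Finance",      "SLS123", "FIN123"),
        ("Finance -> Contracts",  "FIN123", "CON456"),
        ("Contracts -> Planning", "CON456", "CON456"),  # key preserved
    ]

    # Count key changes per hand-off: each one violates Paradigm Rule #1.
    breaks = Counter(
        process for process, before, after in handoffs if before != after
    )
    for process, count in breaks.most_common():
        print(f"{process}: {count} key change(s)")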

We'll dive deeper into these concepts in Part 3. Thanks - and by the way, come see the Data Vault Data Modeling in play at TDWI in Orlando this November, or read about it at www.DanLinstedt.com.


Posted May 26, 2005 6:11 AM