
Blog: Jill Dyché

Jill Dyché

There you are! What took you so long? This is my blog and it's about YOU.

Yes, you. Or at least it's about your company. Or people you work with in your company. Or people at other companies that are a lot like you. Or people at other companies that you'd rather not resemble at all. Or it's about your competitors and what they're doing, and whether you're doing it better. You get the idea. There's a swarm of swamis, shrinks, and gurus out there already, but I'm just a consultant who works with lots of clients, and the dirty little secret - shhh! - is my clients share a lot of the same challenges around data management, data governance, and data integration. Many of their stories are universal, and that's where you come in.

I'm hoping you'll pour a cup of tea (if this were another Web site, it would be a tumbler of single-malt, but never mind), open the blog, read a little bit and go, "Jeez, that sounds just like me." Or not. Either way, welcome on in. It really is all about you.

About the author >

Jill is a partner and co-founder of Baseline Consulting, a technology and management consulting firm specializing in data integration and business analytics. Jill is the author of three acclaimed business books, the latest of which is Customer Data Integration: Reaching a Single Version of the Truth, co-authored with Evan Levy. Her blog, Inside the Biz, focuses on the business value of IT.

Editor's Note: More articles and resources are available in Jill's BeyeNETWORK Expert Channel. Be sure to visit today!

August 2006 Archives

In which Jill reminisces about her favorite trip every summer, remarks on Scott Humphrey's PR finesse, and shamelessly admires her new footwear. (Hint: They're not cute, but they're functional.)

Here's how much I look forward to Scott Humphrey's Pacific Northwest BI Summit every July: I have felt-soled wading shoes.

You're probably thinking "WTF?" but then you've never been fly fishing on the Rogue River on a toasty weekday afternoon up to your waist in rushing water while your blood pressure drops ten points and the Steelhead mock you from the murky depths and friends get quiet and take the fishing seriously, but only because they want to share a good dinner with you when the sun goes down. Hemingway should have had it so good.

Meanwhile your colleagues are hundreds of miles away in offices with fake plants, cursing incoming e-mails while furtively spec'ing out Bluetooth stereo eyewear online and wondering whose birthday they've forgotten and whether the salami sandwich is really worth the trek downstairs.

Scott Humphrey is a master at bringing smart people together and combining lively BI dialog with industrial fun. As we did last year, William McKnight, Claudia Imhoff, Colin White, and I presented on all things data-enabled while a stellar cast of vendor heavyweights, media luminaries, and market analysts looked on and weighed in. We discussed data integration (Colin), operational BI (Claudia), RFID (William), and CDI (moi). We forecast trends, colored in client experiences, debated priorities, and basically had some great conversations about the industry on- and off-line. We never came up with answers to thorny questions about the future of the ODS (me: moribund), when exactly RFID would stretch system capacities (soon), or whether MDM hubs should store history (no), but that wasn't really the point anyway.

This year's Sunday evening tequila was even better than the tequila I raved about in last year's blog. Don Julio 1942 is not a "shooting" tequila but a "sipping" tequila, smooth like brandy--but that didn't stop us. Transcendent stuff.

But for me the fishing is always the highlight. As the water rushed around my ankles, then my shins, then my waterproof shorts started doing their job, I knew I was back on the Rogue, and that fellowship in BI was good, but fellowship in fishing is holy. And, thanks to those waders and the support of good friends, I didn't slip once.

Posted August 29, 2006 12:21 PM

In which Jill explains her view that the platform is a commodity, gets some dirty looks, then--more aptly than you might think--quotes Zsa Zsa Gabor's fifth husband. You'll just have to read it.

I regularly make the claim that BI is no longer about the platform, it's about the business capabilities it delivers. The BI tool vendors usually prick up their ears when they hear this, while some DBMS and hardware vendors make faces at me from behind their conference proceeding handouts.

Simply put, technological advances are blurring the lines between applications and databases, and the days of planning technology acquisitions according to some pre-fabricated IT logical architecture are long gone. Companies are different, their business requirements are unique, and platforms have become a commodity.

At Baseline, we have a concept called the BI Application Portfolio, which stresses that data (in a data warehouse, or in general) should evolve along with business capabilities, and that both should be deployed incrementally over time. We design these BI Portfolios for our clients, and no two are ever alike. This speaks to the distinct nature of companies' strategies, as well as the continuation of best-of-breed approaches to technology acquisition.

The good news about the commoditization of the platform is that IT professionals can spend less time and money sharpening specialized product skills. BI, data warehousing, and data management professionals no longer have to sink precious time into configuration, maintenance, and "feeds and speeds" issues; instead they can spend it understanding data usage requirements, calculating ROI on existing or new applications, determining acceptable levels of detail, resolving what "right time" really means to the business, evaluating emerging software solutions, enhancing their evolving BI portfolios, and doing a host of other proactive work that can actually help their companies grow.

I don't usually plug vendors in my blogs, but I tend to listen more intently when they corroborate my points. I recently sat in on a briefing by StrataVia, a software firm whose product, Data Palette, standardizes and automates core database administration functions by making a set of Standard Operating Procedures available through a centralized DBA workbench. These automated SOPs allow DBAs not only to manage heterogeneous platforms, but to automate often complex--and thus manually intensive--administration tasks. The ability to apply best-practice operations to normally cumbersome work frees DBAs to focus on more proactive, business-centric work like gathering new user requirements, doing acceptance testing, or tuning individual queries for performance. [Disclosure point: neither I nor my firm has any business or financial relationship with StrataVia.]
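The automated-SOP idea can be sketched in a few lines: codify a routine DBA task once as a check plus a remediation step, then run it across a heterogeneous fleet. This is a hypothetical illustration of the concept, assuming nothing about Data Palette's actual design; all names and the `free_space_pct` field are invented for the sketch.

```python
# A hypothetical sketch of an automated Standard Operating Procedure (SOP):
# one named check-and-remediate routine applied uniformly across databases
# on different platforms. Illustrative only; not any vendor's actual API.

from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Database:
    name: str
    platform: str          # e.g. "oracle", "sqlserver"
    free_space_pct: float  # current free space in the main tablespace

@dataclass
class SOP:
    name: str
    needs_action: Callable[[Database], bool]  # when to intervene
    action: Callable[[Database], str]         # the automated remediation

def run_sop(sop: SOP, fleet: List[Database]) -> Dict[str, str]:
    """Apply one SOP across every database, regardless of platform."""
    results = {}
    for db in fleet:
        results[db.name] = sop.action(db) if sop.needs_action(db) else "ok"
    return results

# Example SOP: extend storage when free space drops below 10%.
space_sop = SOP(
    name="extend-storage",
    needs_action=lambda db: db.free_space_pct < 10.0,
    action=lambda db: f"extended storage on {db.platform}",
)

fleet = [
    Database("orders", "oracle", 4.5),
    Database("crm", "sqlserver", 35.0),
]
print(run_sop(space_sop, fleet))
# {'orders': 'extended storage on oracle', 'crm': 'ok'}
```

The point of the pattern is that the intervention logic lives in one place, so the DBA reviews exceptions rather than babysitting each platform by hand.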

It's sort of like that old joke about Zsa Zsa Gabor's fifth husband, who once said, "I know what to do. I'm just not sure I can make it exciting." By focusing less on specific tools and generalized frameworks and instead optimizing our technology, data, and human assets, even DBA jobs can be exciting again!

Posted August 18, 2006 1:37 PM

In which Jill gets away with using the phrase "our data sucks" yet again, explains why data quality is one (of many) components in the CDI stack, and a critical one at that, and starts to lay the groundwork for a topic she'll cover in upcoming blogs: CDI is operational, not analytical.

The more things change, the more they stay the same.

This saying, coined by the French, probably applies more to life than it does to IT, Moore's law being what it is. But every so often it's true here too. In 2000, Addison-Wesley published my first book, e-Data. While certainly exploiting the "e" that prefaced almost everything in those days, e-Data was less about "electronic" data and more about "enterprise" data. In fact, the book explained how smart companies could leverage data across their various siloed systems to make better business decisions. e-Data profiled companies like Aetna, Bank of America, Hallmark, and Twentieth Century Fox, all doing great things with newly integrated "e" data.

Back then the web was still big news. Business processes and workflow automation were de rigueur. As companies tried to figure out their new e-business infrastructures and protect themselves from hackers, information took a back seat. That is, until companies couldn't do what they needed to do with their data warehouses/CRM systems/packaged applications. Then they collectively turned their heads toward data. And they didn't like what they saw.

Problems with data--data quality, to be specific--turned out to be the most underestimated barrier to the success of these and other strategic systems. As IT analyst firms spewed statistics on CRM failures and CIO surveys bemoaned the high cost of implementation efforts, bad data became the proverbial albatross of systems integration projects. "We didn't know our data was dirty!" was the surprised refrain from IT practitioners whose missed deadlines and cost overruns were attracting the attention of executives.

Enter Customer Data Integration (CDI). While there's still a lot of noise around the topic, three items about the emerging trend of CDI are definitive:

1. CDI takes the data warehouse one better in terms of enforcing a "single view" of the customer. It is the authoritative system about your company's customers, and it's operational, meaning it processes and propagates data to the systems that need it.
2. As such, CDI is broader than customer analytics. It often means recognizing customers in real time and deploying that information back to other systems and applications as it's needed.
3. Given the business problems CDI solves, data quality is "baked in" to the core functionality. This is why CDI tools are operational and processing-intensive. Data matching, reconciliation, and standardization are hard work.
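The hard work in item 3 can be made concrete with a toy example: standardize two customer records, then score how likely they are to be the same person. Real CDI hubs use far richer parsing, reference data, and survivorship rules; the abbreviation table and threshold here are assumptions for the sketch.

```python
# Toy illustration of data standardization and matching, the kind of
# processing a CDI hub does continuously. Rules here are illustrative.

import re
from difflib import SequenceMatcher

ABBREVIATIONS = {"st": "street", "rd": "road", "ave": "avenue"}

def standardize(value: str) -> str:
    """Lowercase, strip punctuation, and expand common abbreviations."""
    tokens = re.sub(r"[^\w\s]", "", value.lower()).split()
    return " ".join(ABBREVIATIONS.get(t, t) for t in tokens)

def match_score(a: str, b: str) -> float:
    """Similarity of two standardized strings, from 0.0 to 1.0."""
    return SequenceMatcher(None, standardize(a), standardize(b)).ratio()

rec1 = "J. Smith, 12 Main St."
rec2 = "John Smith, 12 Main Street"
print(round(match_score(rec1, rec2), 2))  # high score: likely one customer
```

Even this trivial version has to normalize, tokenize, and compare every candidate pair, which is why matching at enterprise scale is an operational, compute-heavy job rather than a one-time batch cleanup.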

Consider the culmination of CRM, data warehousing, real-time processing, and data cleansing rolled into a single solution that provides the so-called "golden record" for each customer. Greater than the sum of its parts, CDI must enforce data accuracy and meaning natively and continuously. CDI combines these proven capabilities into a new set of business competencies that can have a huge impact on customer loyalty, compliance levels, investment decisions--indeed, on the bottom line.

So, perhaps it's fitting to turn the French aphorism upside down: The more things stay the same, the more they change. For CDI at least, it's about time.

Posted August 9, 2006 7:31 PM

In which Jill (and her co-author, Evan, by proxy) bemoan the state of data quality and call for a rapprochement.

Chapter 4 of my new CDI book, Customer Data Integration: Reaching a Single Version of the Truth (with Evan Levy) bemoans the poor state of data, particularly customer data, and I'm calling for a rapprochement.

That is, a meeting of the minds between the "process" crowd and the "technology" crowd when it comes to data quality. Both are well-intentioned. Both understand the impact bad data can have on a company and its ability to achieve strategic objectives. But both can tend to extremes when it comes to making their respective cases.

The technology people, for instance, come down hard on the side of automation. And don't assume that I'm just referring to the tool vendors here. You'd be surprised at how many internal IT people just want to "install" data quality--as if we ever could!--by immediately acquiring a tool and turning it on. "Voila! Clean data!" As if.

In contrast, the process people want us all to endure often-arduous design walkthroughs wherein stakeholders deconstruct the diversity of data cleansing steps. I've been to several of these sessions and the word "overkill" doesn't begin to describe them. Sort of reminds me of the scene in the movie Airplane! where a polite young woman chats with her seatmate non-stop, to the point where the poor guy ends up trying to commit suicide.

I like to compare data quality to the production support processes of old. Those processes evolved over the years. They didn't start out highly rigorous and well defined; they began with a specific, core set of functions that evolved as system requirements evolved--and as stuff broke.

In my experience, a data quality effort needs both a deliberate process and automated tools to be successful. I've seen drawn-out process meetings deliver elegant workflows and root-cause analyses that get rendered so much shelfware when a manager refuses to pony up budget money for a tool. ("No one said we'd have to automate it!") Sadly, it's much easier to do nothing at all than to implement a new process that mandates do-it-yourself (aka: manual) data cleansing.

Conversely, I've seen great data quality tools get underutilized because no one is really sure what to do or where to begin. ("Do we have to use data profiling?")

At Baseline, we like data quality pilots. These pilots are short but meaningful, and don't simply involve "flipping a switch." They explore the client's expectations for data improvement and accuracy, consider the viability of data quality as a service, force ownership and accountability discussions, and, yes, define high-level processes for determining the inputs, outputs, and rules for data cleansing. By establishing this level of clarity up front, we can then automate data quality processes with an assurance of the desired outcome. No one is forced to languish in onerous process meetings. Moreover, the results are clear and demonstrable to managers who have the budget to move forward with data quality on a larger scale.
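One lightweight artifact such a pilot can produce is the rule set itself: declare the quality rules up front, run them against a sample, and report pass rates managers can act on. This is a minimal sketch of that idea; the field names, rules, and sample records are all invented for illustration.

```python
# Minimal sketch of a pilot's declared data quality rules: profile a sample
# of records and report the fraction passing each rule. Illustrative only.

import re
from typing import Callable, Dict, List

Rule = Callable[[dict], bool]

RULES: Dict[str, Rule] = {
    "email_present": lambda r: bool(r.get("email")),
    "email_wellformed": lambda r: bool(
        re.match(r"[^@\s]+@[^@\s]+\.[^@\s]+", r.get("email", ""))
    ),
    "zip_5_digits": lambda r: bool(re.fullmatch(r"\d{5}", r.get("zip", ""))),
}

def profile(records: List[dict]) -> Dict[str, float]:
    """Return the fraction of records passing each rule."""
    n = len(records)
    return {name: sum(rule(r) for r in records) / n for name, rule in RULES.items()}

sample = [
    {"email": "a@example.com", "zip": "90210"},
    {"email": "not-an-email", "zip": "9021"},
    {"email": "", "zip": "10001"},
]
print(profile(sample))
```

Pass rates like these give the ownership and accountability discussions something concrete to argue about, and they become the acceptance criteria when the cleansing is later automated at scale.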

When it comes to process versus technology, there can be a sharp delineation of approaches. But when launching a data quality effort, one without the other just won't cut it.

Posted August 3, 2006 7:24 PM