Blog: Jill Dyché

Jill Dyché

There you are! What took you so long? This is my blog and it's about YOU.

Yes, you. Or at least it's about your company. Or people you work with in your company. Or people at other companies that are a lot like you. Or people at other companies that you'd rather not resemble at all. Or it's about your competitors and what they're doing, and whether you're doing it better. You get the idea. There's a swarm of swamis, shrinks, and gurus out there already, but I'm just a consultant who works with lots of clients, and the dirty little secret - shhh! - is my clients share a lot of the same challenges around data management, data governance, and data integration. Many of their stories are universal, and that's where you come in.

I'm hoping you'll pour a cup of tea (if this were another Web site, it would be a tumbler of single-malt, but never mind), open the blog, read a little bit and go, "Jeez, that sounds just like me." Or not. Either way, welcome on in. It really is all about you.

About the author

Jill is a partner and co-founder of Baseline Consulting, a technology and management consulting firm specializing in data integration and business analytics. Jill is the author of three acclaimed business books, the latest of which is Customer Data Integration: Reaching a Single Version of the Truth, co-authored with Evan Levy. Her blog, Inside the Biz, focuses on the business value of IT.

Editor's Note: More articles and resources are available in Jill's BeyeNETWORK Expert Channel. Be sure to visit today!

By Stephen Putman, Senior Consultant


The implementation of a new business intelligence system often requires the replication of existing reports in the new environment. In the process of designing, implementing, and testing the new system, issues of data elements not matching existing output invariably come up. These discrepancies often arise from data elements extrapolated from seemingly unrelated sources, or from calculations embedded in the reports themselves that pre-date the tenure of the project team implementing the changes. How can you mitigate these issues in future implementations?

Issues of post-report data manipulation range from the simple - a lack of documentation of the existing system - to the complex and insidious - "spreadmarts" and stand-alone desktop databases that use the enterprise system as a data source, for example. It is also possible that source systems change existing data and feeds without the project team documenting or researching those changes. The result is the same: frustration for the business users and the IT group in pinning down these outliers, not to mention the risk the enterprise absorbs by using unmanaged data in reports that drive business decisions.

The actions taken to correct the simple documentation issues center on organizational discipline:

  • Establish (or follow) a documentation standard for the entire organization, and stick to it!
  • Implement gateways in the application and report development process that ensure undocumented objects are not released to production (a sketch of one such gate follows this list)
  • Perform periodic audits to ensure compliance
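
To make the second point concrete, here is a minimal sketch of what such a gate might look like as a pre-release check. It assumes Python and a hypothetical documentation catalog with made-up object names; the real check would read the catalog and release manifest from wherever your organization keeps them.

  import sys

  # Hypothetical documentation catalog; in practice this would be loaded from
  # the organization's documentation repository rather than hard-coded.
  DOC_CATALOG = {
      "rpt_monthly_revenue": {"description": "Monthly revenue by region", "owner": "Finance BI"},
      "dim_customer": {"description": "Conformed customer dimension", "owner": "Data management"},
  }

  # Objects slated for promotion in this release (normally read from the
  # deployment manifest; hard-coded here to keep the sketch self-contained).
  RELEASE_OBJECTS = ["rpt_monthly_revenue", "dim_customer", "fct_orders"]

  def undocumented(objects, catalog):
      """Return the objects with no catalog entry, description, or named owner."""
      missing = []
      for name in objects:
          entry = catalog.get(name, {})
          if not entry.get("description") or not entry.get("owner"):
              missing.append(name)
      return missing

  problems = undocumented(RELEASE_OBJECTS, DOC_CATALOG)
  if problems:
      print("Blocked: undocumented objects ->", ", ".join(problems))
      sys.exit(1)  # a non-zero exit code stops the promotion job
  print("All release objects documented; promotion may proceed.")

In this sketch, "fct_orders" has no catalog entry, so the promotion job would stop until someone documents it.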

Reining in the other sources of undocumented data is a more complicated task. The data management organization has to walk a fine line between controlling the data the organization produces and curtailing the freedom of end users to respond to changing data requirements in their everyday jobs. The key is communication: business users need to be encouraged to record their data requirements in an easy-to-use system and to understand the importance of sharing this information with the entire organization. If there is even a hint of disdain or punitive action regarding this communication, it will stop immediately, and these new derivations will remain a mystery until another system is designed.

The modern information management environment is moving more and more toward transparency and accountability, which are being demanded by both internal and external constituencies. A well-documented reporting system supports this change in attitude, reducing risk in external reporting and increasing confidence in the veracity of internal reports, allowing all involved to make better decisions and drive the profitability of the business. It is a change whose time has come.

photo by r h via Flickr (Creative Commons License)


Stephen Putman has over 20 years of experience supporting client/server and internet-based operations from small offices to major corporations. He has extensive experience in a variety of front-end development tools, as well as relational database design and administration, and is extremely effective in project management and leadership roles. He is the co-author of The Data Governance eBook, available at information-management.com.



Posted December 21, 2010 6:00 AM

By Stephen Putman, Senior Consultant

In my recent blog posting, "Metadata is Key," I talked about one way of changing the mindset of managers and implementers in support of the coming "semantic wave" of linked data management. Today, I give you another way to prepare for the coming revolution, and also become more disciplined and effective in your project management whether you're going down the semantic road or not...

  rathole (n) - [from the English idiom "down a rathole" for a waste of money or time] A technical subject that is known to be able to absorb infinite amounts of discussion time without more than an infinitesimal probability of arrival at a conclusion or consensus.

Anyone who has spent time implementing computer systems knows exactly what I'm talking about here. Meetings can sometimes devolve into lengthy discussions that have little to do with the subject at hand. Frequently, these discussions become quite emotional, which makes it difficult to refocus on the meeting's subject. The end result is frustration within the project team over "wasting time" on unrelated subjects, along with a lack of clarity and the potential for schedule overruns.

One method for mitigating this issue is the presence of a "rathole monitor" in each meeting. I was introduced to this concept at a client several years ago, and I was impressed by the focus they maintained in meetings, much to the project's benefit. A "rathole monitor" is a person who does not actively participate in the meeting, but who understands the scope and breadth of the proposed solution very well and has enough standing in the organization to be trusted. This person listens to the discussion in the meeting and interrupts when he or she perceives that the conversation is veering off in an unrelated direction. It is important for this person to record the divergence and relay it to the project management team for later discussion - the tangent is usually useful to the project, and if these new ideas are not addressed later, people will keep them to themselves, which could be detrimental to the project.
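
To make the hand-off to the project management team concrete, the captured divergences can be as lightweight as a dated parking-lot log. The sketch below (in Python) shows only one way to record them; the fields and names are assumptions, not a prescribed format.

  from dataclasses import dataclass, field, asdict
  from datetime import date
  import json

  @dataclass
  class RatholeItem:
      """One off-topic thread captured by the rathole monitor for follow-up."""
      meeting: str
      raised_by: str
      topic: str
      noted_on: date = field(default_factory=date.today)
      follow_up_owner: str = ""  # assigned later by the project management team

  # Example capture during a requirements session (names are hypothetical).
  log = [
      RatholeItem(
          meeting="Reporting requirements review",
          raised_by="J. Analyst",
          topic="Should regional hierarchies be restated for prior years?",
      )
  ]

  # Relay the log to the PM team in a portable form for later triage.
  print(json.dumps([asdict(item) for item in log], default=str, indent=2))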

This method will pay dividends in your current project management, but how does it relate to semantics and linked data? Semantic technology is all about the context and relationships of data objects - in fact, without those objects and relationships being well defined, semantic processing is impossible. Developing a mindset of scope and context is therefore essential to the successful implementation of any semantically enabled application. Training your staff to think in these terms makes your organization perform in a more efficient and focused manner, which in turn supports more effective operations and improved profitability.

photo by xJasonRogersx via Flickr (Creative Commons License)





Posted December 16, 2010 6:00 AM

By Stephen Putman, Senior Consultant

One of the most promising developments in data management over the last ten years is the rise of semantic processing, commonly referred to as the "Semantic Web." Briefly described, semantic processing creates a "web of data" complementing the "web of documents" of the World Wide Web. The benefits of such an array of linked data are many, but the main one could be the ability of machines to mine for needed data to enhance searches, recommendations, and the like - work that humans do manually today.
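
For readers who have not seen linked data up close, the "web of data" idea boils down to publishing statements about resources as machine-readable triples that software can follow without human interpretation. The Python fragment below is only an illustrative sketch: it assumes the rdflib library is available, and the URIs and property names are invented for the example, not a recommended vocabulary.

  from rdflib import Graph, Literal, Namespace
  from rdflib.namespace import RDF, RDFS

  # Hypothetical namespace for an organization's reporting assets.
  EX = Namespace("http://example.com/reporting/")

  g = Graph()
  report = EX["monthly_revenue_report"]
  measure = EX["net_revenue"]

  # A few statements a machine could traverse to answer "where does this number come from?"
  g.add((report, RDF.type, EX.Report))
  g.add((report, RDFS.label, Literal("Monthly Revenue Report")))
  g.add((report, EX.containsMeasure, measure))
  g.add((measure, RDFS.label, Literal("Net revenue, USD")))
  g.add((measure, EX.derivedFrom, EX["gl_transactions"]))

  # Serialize as Turtle so other tools (or other teams) can consume it.
  print(g.serialize(format="turtle"))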

Unfortunately, the growth of the semantic data industry has been slower than anticipated, mainly due to a "chicken and egg" problem: the systems need descriptive metadata added to existing structures to function efficiently, but major data management companies are reluctant to invest heavily in tools to do this until an appropriate return on investment is demonstrated. I feel there is an even more basic issue with the adoption of semantics that has nothing to do with tools or investment - we need the implementers and managers of data systems to change their thinking about how they do their jobs, and to make metadata production central to the systems they produce.

The interoperability and discoverability of data are becoming increasingly important requirements for organizations of all types - the financial industry, for example, is keenly aware of the demands of XBRL-enabled reporting systems. Setting external requirements aside, the same capabilities can benefit the internal reporting of the organization as well. Reporting systems go through extended periods of design and implementation, with their contents and design a seemingly well-guarded secret. Consequently, considerable effort is required for departments not originally included in the system design to discover and use the appropriate data for their operations.

The organization and publication of metadata about these reporting systems can mitigate the cost of this discovery and use by the entire organization. Here is a sample of the metadata produced by every database system, either formally or informally:

  • System-schema-table-column
  • Frequency of update
  • Input source(s)
  • Ownership-stewardship
  • Security level

The collection and publication of such metadata in standard forms will prepare your organization for the coming "semantic wave," even if you do not have a specific application that can use this data at present. This will give your organization an advantage over companies that wait until these requirements are imposed and then have to play catch-up. You will also gain the advantage of a staff that thinks in terms of metadata capture and dissemination, which will help your company become more efficient in its data management functions.
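
As a rough illustration of what "standard forms" could mean in practice, the fields listed above can be captured as one shareable record per column and published as plain JSON until more specialized tooling arrives. Everything in the sketch below (field names, example values, the output format) is illustrative, not a proposed standard.

  from dataclasses import dataclass, asdict
  from typing import List
  import json

  @dataclass
  class ColumnMetadata:
      """One published metadata record, mirroring the fields listed above."""
      system: str
      schema: str
      table: str
      column: str
      update_frequency: str    # e.g., "daily", "intraday", "monthly"
      input_sources: List[str]
      owner: str               # ownership / stewardship contact
      security_level: str      # e.g., "public", "internal", "restricted"

  record = ColumnMetadata(
      system="EDW",
      schema="finance",
      table="fct_orders",
      column="net_revenue",
      update_frequency="daily",
      input_sources=["ERP general ledger extract"],
      owner="Finance data stewardship team",
      security_level="internal",
  )

  # Publishing can start as nothing fancier than versioned JSON that any
  # team (or a future semantic tool) can read.
  print(json.dumps(asdict(record), indent=2))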

photo by ~Brenda-Starr~ via Flickr (Creative Commons License)





Posted December 14, 2010 6:00 AM

By Stephen Putman, Senior Consultant

I've spent the last eighteen months at clients that have aging technology infrastructures and are oriented toward building applications rather than buying more integrated software packages. All of these organizations face a decision similar to the famed "build vs. buy" decision made when implementing a new enterprise computer system: do we acquire new technology to fulfill requirements, or adapt our existing systems to accomplish business goals?

Obviously, there are pros and cons to each approach, and external factors such as enterprise architecture requirements and resource constraints weigh on the decision. However, there are questions independent of those constraints whose answers may guide you to a more effective choice. Those questions are the subject of this article.

Ideally, there would be no decision to make here at all - your technological investments would be well managed, up to date, and flexible enough to adapt easily to new requirements. Unfortunately, that is rarely the case. Toolsets are cobbled together from developer preferences (based on previous experience), enterprise standards, or OEM components bundled with larger packages such as ERP systems or packaged data warehouses. New business requirements often appear that do not fit neatly into this environment, which makes the decision necessary.

Acquire New

The apparent path of least resistance in addressing new business requirements is to purchase specialized packages that solve tactical issues well. This approach has the benefit of most closely fitting the requirements at hand. However, the organization runs the risk of accumulating a collection of ill-fitting software packages that may struggle to meet future requirements. The best that can be hoped for in this scenario is that the organization leans toward tools built on a standardized technology foundation such as Java. This enables future customization if necessary and ensures that resources will be available to do the future work without substantial retraining.

Modify Existing Tools

The far more common approach to this dilemma is to adapt existing software tools to the new business requirements. The advantage to this approach is that your existing staff is familiar with the toolset and can adapt it to the given application without retraining. The main challenge in this approach is that the organization must weigh the speed of adaptation against the possible inefficiency of the tools in the given scenario and the inherent instability of asking a toolset to do things that it was not designed to do.

The "modify existing" approach has become much more common in the last ten to twenty years because of budgetary constraints imposed on the departments involved. Unless you work in the commercial product development group of a technology company, your department is likely perceived as a cost center rather than a profit center, which means that money spent on your operations is treated as an expense instead of an investment. You are therefore asked to cut costs wherever possible, and technical inefficiencies are tolerated to a greater degree. This means you may not have the opportunity to acquire new technology even when it makes the most sense.

The decision to acquire new technology or extend existing technology to satisfy new business requirements is often a choice between unsatisfactory alternatives. The best way for an organization to make effective decisions given all of these constraints is to base its purchase decisions on standardized software platforms. That way, you retain maximum flexibility when the decision falls to the "modify existing" option.
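
One way to keep the comparison between unsatisfactory alternatives honest is to score both options against the same weighted criteria before deciding. The criteria, weights, and scores below are purely illustrative assumptions intended to show the mechanics, not a formula that yields the right answer for your organization.

  # Hypothetical weights reflecting what the organization says it values most.
  WEIGHTS = {
      "fit_to_requirements": 0.35,
      "time_to_deliver": 0.20,
      "staff_familiarity": 0.20,
      "long_term_flexibility": 0.25,
  }

  # Scores from 1 (poor) to 5 (strong) for each option, per criterion.
  OPTIONS = {
      "acquire_new_tool": {
          "fit_to_requirements": 5,
          "time_to_deliver": 3,
          "staff_familiarity": 2,
          "long_term_flexibility": 3,
      },
      "modify_existing_tools": {
          "fit_to_requirements": 3,
          "time_to_deliver": 4,
          "staff_familiarity": 5,
          "long_term_flexibility": 2,
      },
  }

  def weighted_score(scores):
      """Combine per-criterion scores into a single comparable number."""
      return sum(WEIGHTS[criterion] * score for criterion, score in scores.items())

  for name, scores in sorted(OPTIONS.items(), key=lambda kv: -weighted_score(kv[1])):
      print(f"{name}: {weighted_score(scores):.2f}")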

photo by orijinal via Flickr (Creative Commons License)





Posted December 10, 2010 6:00 AM

By Mary Anne Hopper, Senior Consultant

I've written quite a bit about the importance of establishing rigor around the process of project intake and prioritization. If you're sitting there wondering how to even get started, I believe it is important to understand where these different work requests come from, because unlike application development projects, BI projects tend to have touch points across the organization. I tend to break the sources into three main categories: stand-alone projects, upstream application projects, and enhancements.

Stand-alone BI projects are those that are not driven by new source system development. Project types can include new data marts, reporting applications, or even re-architecting legacy reporting environments. Application projects are driven by changes in any of the upstream source systems we utilize in the BI environment, including new application development and changes to existing applications. Always remember that the smallest change in a source system can have the largest impact on the downstream BI application environment. The enhancements category is the catch-all for low-risk development that can be accomplished in a short amount of time.

Just as important as understanding where work requests come from is prioritizing those requests. All three categories need to be considered in the same prioritization queue - a step that challenges a lot of the clients I work with. So, why is it so important to prioritize the work together? The first reason is resource availability. Resource impact points include project resources (everyone from the analysts to the developers to the testers to the business customers), environment availability and capacity (development and test), and release schedules. Most importantly, prioritizing all work together ensures the business gets its highest-value projects completed first.
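
A single prioritization queue does not need elaborate tooling to get started. The sketch below folds all three categories into one list and orders it by estimated business value per unit of effort; the categories come from this post, but the scoring fields and numbers are assumptions you would replace with your own.

  from dataclasses import dataclass

  @dataclass
  class WorkRequest:
      name: str
      category: str          # "stand-alone", "upstream application", or "enhancement"
      business_value: int    # e.g., 1 (low) to 10 (high), set with the business
      resource_weeks: float  # rough estimate of project-team effort

  # One queue for everything, regardless of where the request came from.
  queue = [
      WorkRequest("New finance data mart", "stand-alone", 9, 16.0),
      WorkRequest("CRM upgrade feed changes", "upstream application", 7, 6.0),
      WorkRequest("Add region filter to sales dashboard", "enhancement", 4, 1.0),
  ]

  # Highest value per unit of effort first; ties broken by raw business value.
  queue.sort(key=lambda r: (r.business_value / r.resource_weeks, r.business_value),
             reverse=True)

  for rank, req in enumerate(queue, start=1):
      print(f"{rank}. [{req.category}] {req.name}")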

  


Mary Anne Hopper has 15 years of experience as a data management professional in all aspects of successful delivery of data solutions to support business needs. She has worked in the capacity of both project manager and business analyst to lead business and technical project teams through data warehouse/data mart implementation, data integration, tool selection and implementation, and process automation projects.


Posted November 2, 2010 6:00 AM
