Blog: Jill Dyché
http://www.b-eye-network.com/blogs/dyche/

The Dreaded Stairs

By Stephen Putman, Senior Consultant

Recently, a friend of mine posted a link on Facebook that reinforced a philosophy I have held for a long time, one that applies to all activities in life that are not duty-bound:

The Dreaded Stairs (part of  The Fun Theory project)

I have long felt that humans do things for two reasons:

A) They're fun

B) They're lucrative

This applies to the field of data governance and quality as it does to everything else. One of the reasons data governance and quality initiatives are not more widely adopted and followed is that the work is not terribly fun - data owners must be identified, policies and processes must be adopted, and the entire program must be monitored and attended to once it is in place. It's also not seen as lucrative in a direct sense - cleansing the data in a transaction usually doesn't provide immediate financial reward, and while the implementation of governance and quality initiatives can affect the company's bottom line, the benefits are very difficult to quantify in a traditional sense.

Phil Simon has produced a terrific series for The Data Roundtable on incentive ideas for data quality programs, so I will not address those here - he says it much better than I can. I am concerned with "fun." The video above demonstrates an innovative idea for turning a mundane but healthy activity (climbing stairs) into a joyful experience. What sort of innovative programs can be created to make managing high-quality data fun?

"Fun" is a difficult concept because it means something different to everyone. One way to find out what is "fun" to your employees is by conducting surveys or workshops to ask them directly. Another possibility could be to have a "company carnival" in your parking lot, and award employees who identify quality issues with raffle tickets or a "boss' dunk tank." The White House holds a  yearly contest  with government employees for the best quality improvement or cost-savings idea (this is more of an incentive, but some people also consider contests like this fun).

These are just a few ideas off the top of my head - do you have creative people who can come up with others? If it is indeed true that fun makes unpleasant activities more palatable, this would be time well spent reinforcing data governance and quality in your organization.

photo by Robin Fensom via Flickr (Creative Commons license)


Stephen Putman has over 20 years of experience supporting client/server and internet-based operations, from small offices to major corporations. He has extensive experience in a variety of front-end development tools, as well as relational database design and administration, and is extremely effective in project management and leadership roles. He is the co-author of The Data Governance eBook, available at baseline-consulting.com/ebooks.


http://www.b-eye-network.com/blogs/dyche/archives/2011/02/the_dreaded_sta.php Tue, 22 Feb 2011 06:00:00 -0700
Three-Dimensional Chess

By Stephen Putman, Senior Consultant

I recently read Rob Gonzalez's blog post I've Got a Federated Bridge to Sell You (A Defense of the Warehouse) with great interest - a Semantic Web professional defending a technology that could be displaced by semantics! I agree with Mr. Gonzalez that semantically federated databases are not the answer in all business cases. However, traditional data warehouses and data marts are not the best answer in all cases either, and there are also cases where neither technology is the appropriate solution.

The appropriate technological solution for a given business case depends on a great many factors; balancing them is what I like to call "three-dimensional chess."

An organization needs to consider many factors in choosing the right technology to solve an analytical requirement, including:

  • Efficiency/speed of query return - Is the right data stored or accessed in an efficient manner, and can it be accessed quickly and accurately?  
  • Currency of data - How current is the data that is available?  
  • Flexibility of model - Can the system accept new data inputs of differing structures with a minimum of remodeling and recoding?  
  • Implementation cost, including maintenance - How much does it cost to implement and maintain the system?  
  • Ease of use by end users - Can the data be accessed and manipulated by end users in familiar tools without damage to the underlying data set?  
  • Relative fit to industry and organizational standards - This deals with long-term maintainability of the system, which I addressed in a recent posting –  Making It Fit.
  • Current staff skillsets/scarcity of resources to implement and maintain - Can your staff implement and maintain the system, or alternately, can you find the necessary resources in the market to do so at a reasonable cost?

Fortunately, new tools and methodologies are constantly being developed that optimize one or more of these factors, but balancing all of these sometimes mutually exclusive factors is a very difficult job. Very few system architects are well versed in many of the applicable systems, so architects tend to advocate the types of systems they are familiar with, bending requirements to fit the characteristics of the system. This produces the undesirable tendency captured in the saying, "When all you have is a hammer, everything looks like a nail."
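One lightweight way to keep all of these factors visible at once is a simple weighted scorecard. The sketch below (in Python) is only an illustration: the factor names, weights, candidate architectures, and 1-to-5 ratings are hypothetical placeholders to be replaced with your own assessments, not a formal evaluation method.

```python
# Hypothetical weighted scorecard for comparing candidate analytical architectures.
# Weights express the relative importance of each factor and sum to 1.0.
FACTORS = {
    "query_speed": 0.20,
    "data_currency": 0.15,
    "model_flexibility": 0.15,
    "implementation_cost": 0.20,   # rated so that "low cost" earns a high score
    "ease_of_use": 0.10,
    "standards_fit": 0.10,
    "staff_skills": 0.10,
}

# Each candidate is rated 1 (poor) to 5 (excellent) on every factor; the numbers are invented.
candidates = {
    "traditional warehouse": {"query_speed": 5, "data_currency": 3, "model_flexibility": 2,
                              "implementation_cost": 2, "ease_of_use": 4, "standards_fit": 5,
                              "staff_skills": 5},
    "federated semantic layer": {"query_speed": 3, "data_currency": 5, "model_flexibility": 5,
                                 "implementation_cost": 3, "ease_of_use": 3, "standards_fit": 3,
                                 "staff_skills": 2},
}

def weighted_score(ratings):
    """Collapse the per-factor ratings into a single comparable number."""
    return sum(FACTORS[f] * ratings[f] for f in FACTORS)

for name, ratings in sorted(candidates.items(), key=lambda kv: weighted_score(kv[1]), reverse=True):
    print(f"{name}: {weighted_score(ratings):.2f}")
```

Even a rough exercise like this forces the trade-offs into the open, rather than letting a favorite hammer make the decision by default.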

Make sure that your organization is taking all factors into account when deciding how to solve an analytical requirement by developing or attracting people who are skilled at playing "three-dimensional chess."

  


Stephen Putman has over 20 years of experience supporting client/server and internet-based operations, from small offices to major corporations. He has extensive experience in a variety of front-end development tools, as well as relational database design and administration, and is extremely effective in project management and leadership roles. He is the co-author of The Data Governance eBook, available at baseline-consulting.com/ebooks.


http://www.b-eye-network.com/blogs/dyche/archives/2011/02/three-dimension.php Wed, 16 Feb 2011 06:00:00 -0700
Linked Data Today!

By Stephen Putman, Senior Consultant


I begin today with an invitation to a headache...click this link:  The Linking Open Data Cloud Diagram

Ouch! That is a really complicated diagram. I believe the Semantic Web suffers from the same difficulty as many worthy technologies - it is nearly impossible to describe the concept in simple terms, using ideas familiar to the vast majority of the audience. When that happens, the technology gets buried under well-meaning but hopelessly complex diagrams like this one. The concept is very powerful if you take the time to understand it, but all the circles and lines immediately turn most people off.

Fortunately, there are simple things that you can do in your organization today that will introduce the concept of  linked data  to your staff and begin to leverage the great power that the concept holds. It will take a little bit of transition, but once the idea takes hold you can take it in several more powerful directions.

Many companies treat their applications as islands unto themselves in their basic operations, regardless of any external feeds or reporting that occurs. One result of this is that basic, seldom-changing concepts such as Country, State, and Date/Time are replicated in each system throughout the company. A basic tenet of data management states that managing data in one place is preferable to managing it in several - every time something changes, it must be maintained in however many systems use it.

One of the basic concepts of linked data is that applications will use a common repository for data like State, for example, and publish  Uniform Resource Identifiers  (URIs), or standardized location values that act much like Web-based URLs, for each value in the repository. Applications will then link to the URI for the lookup value instead of proprietary codes in use today. There are efforts to make global shared repositories for this type of data, but it is not necessary to place your trust in these data stores right away - all of this can occur within your company's firewall.
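To make this concrete, here is a minimal sketch of what an internal, behind-the-firewall version might look like. It is illustrative only: the base URI, the repository structure, and the crosswalk from a legacy CRM code are hypothetical assumptions, not a prescribed format.

```python
# Hypothetical internal URI registry for a shared State lookup.
BASE = "http://data.example.com/id/state/"   # placeholder base URI inside your firewall

# One shared repository of reference values, keyed by URI.
state_repository = {
    BASE + "US-IL": {"label": "Illinois", "iso_code": "US-IL"},
    BASE + "US-WI": {"label": "Wisconsin", "iso_code": "US-WI"},
}

# Per-application crosswalk from a legacy proprietary code to the shared URI.
crm_state_crosswalk = {"14": BASE + "US-IL", "49": BASE + "US-WI"}

def resolve(uri):
    """Look up the shared record for a URI, the way an application would dereference it."""
    return state_repository[uri]

# A newly coded application stores the URI directly in its parent table...
order = {"order_id": 1001, "ship_state": BASE + "US-IL"}
print(resolve(order["ship_state"])["label"])                           # Illinois

# ...while a legacy extract is reconciled through the crosswalk (the role a view would play).
legacy_row = {"cust_id": 7, "state_cd": "49"}
print(resolve(crm_state_crosswalk[legacy_row["state_cd"]])["label"])   # Wisconsin
```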

The transition to linked data does not need to be sudden or comprehensive, but can be accomplished in an incremental fashion to mitigate disruption to existing systems. Here are actions that you can begin right now to start the transition:

  • If you are coding an application that uses these common lookups, store the URI in the parent table instead of the proprietary code.
  • If you are using "shrink wrap" applications, construct views that reconcile the URIs and the proprietary codes, and encourage their use by end users.
  • Investigate usage of common repositories in all future development and packaged software acquisition.
  • Begin investigation of linking company-specific common data concepts, such as department, location, etc.

  Once the transition to a common data store is under way, your organization will have lower administration costs and more consistent data throughout the company. You will also be leading your company into the future of linked data processing that is coming soon.

photo by steve_lodefink via Flickr (Creative Commons License)


Stephen Putman has over 20 years of experience supporting client/server and internet-based operations, from small offices to major corporations. He has extensive experience in a variety of front-end development tools, as well as relational database design and administration, and is extremely effective in project management and leadership roles. He is the co-author of The Data Governance eBook, available at information-management.com.


http://www.b-eye-network.com/blogs/dyche/archives/2011/02/linked_data_tod.php Tue, 01 Feb 2011 06:00:00 -0700
Succeed Despite Failing

By Stephen Putman, Senior Consultant

I just finished reading a post on the Netflix blog - 5 Lessons We've Learned Using Amazon Web Services (AWS). Even though this article is specific to a high-traffic cloud-based technology platform, I think that it holds a great lesson for the optimization of any computer system, and especially a system that relies on outside sources such as a business intelligence system.  

Netflix develops its systems with the attitude that anything can fail at any point in the technology stack, and that the system should respond as gracefully as possible. This is a wonderful attitude to have for any system, and their lessons can be applied to a BI system just as easily:

1. You must unlearn what you have learned. Many people who develop and maintain BI systems come from the transactional application world and apply that experience to a BI system, which is fundamentally different in several ways - for example, a transactional system is optimized for the individual transaction, while a BI system is optimized for the retrieval and manipulation of often huge data sets. Managers and developers who do not recognize these differences are doomed to fail with their systems, while people who successfully make the transition meet organizational goals much more easily.

2. Co-tenancy is hard. The BI system must manage many different types of loads and requests on a daily basis while simultaneously appearing to be as responsive to the user as all other software used. The system administrator must balance data loads, operational reporting requests, and the construction and manipulation of analysis data sets, often at the same time. This is the same sort of paradigm shift as in lesson 1 - people who do not realize the complications of this environment are doomed to failure since the success of a BI system is directly proportional to the frequency of use, and an inefficient system quickly becomes unused.

3. The best way to avoid failure is to fail constantly. This lesson seems counter-intuitive, but I've seen so many failed systems that assumed things would work perfectly - source feeds would always have valid data, in the same place, at the same time, always - that this philosophy gains more credence with me daily. Systems should be tested for outages at every step of the process, and coded so that the response is graceful and as invisible to end users as possible. If you don't rehearse this in development, you will fail in production - take that to the bank.
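As one concrete way to rehearse failure, the sketch below wraps a source-feed load in retries and a graceful fallback. The function names (load_feed, use_cached_copy) and the retry settings are hypothetical placeholders for whatever your load framework actually provides.

```python
import logging
import time

log = logging.getLogger("bi_load")

def load_with_fallback(load_feed, use_cached_copy, retries=3, delay_seconds=30):
    """Try the live feed a few times; fall back to the last good copy instead of crashing."""
    for attempt in range(1, retries + 1):
        try:
            return load_feed()
        except Exception as exc:          # any outage: missing file, timeout, schema drift
            log.warning("feed load failed (attempt %d/%d): %s", attempt, retries, exc)
            if attempt < retries:
                time.sleep(delay_seconds)
    log.error("feed unavailable after %d attempts; using last good copy", retries)
    return use_cached_copy()              # degraded, but largely invisible to report users
```

The specifics will differ by platform; the point is that the failure path is written, tested, and rehearsed alongside the happy path.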

4. Learn with real scale, not toy models. It would seem self-evident that performance testing should be done on systems equivalent to production hardware and networking, with full data sets, but many development shops see this as an unnecessary expense that adds little to the finished product. Yet, as in lesson 3 above, if you do not rehearse the operation of your system at the same scale as your production environment, you have no way of knowing how it will respond in real-world situations, and you are effectively gambling with your career. The smart manager avoids this sort of gamble.

5. Commit yourself. This message surfaces in many different discussions, but it should be re-emphasized frequently - a system as important as your enterprise business intelligence system should have strong and unwavering commitment from all levels of your organization to survive the inevitable struggles that occur in the implementation of such a large computer system.

It is sometimes surprising to realize that even though technology continues to become more complex and distributed, the same simple lessons can be learned from every system and applied to new systems. These lessons should be reviewed frequently in your quest to implement successful data processing systems.

photo by PseudoGil via Flickr (Creative Commons License)


Stephen Putman has over 20 years of experience supporting client/server and internet-based operations, from small offices to major corporations. He has extensive experience in a variety of front-end development tools, as well as relational database design and administration, and is extremely effective in project management and leadership roles. He is the co-author of The Data Governance eBook, available at information-management.com.


http://www.b-eye-network.com/blogs/dyche/archives/2011/01/succeed_despite.php Tue, 18 Jan 2011 06:00:00 -0700
New Year's Resolutions: Assess and Revise Your BI Strategy

By Dick Voorhees, Senior Consultant

The New Year is upon us. And for many, the coming of the New Year involves making new resolutions, or reaffirming old ones. This resolution-making process includes corporations and organizations, not just individuals. In terms of personal resolutions, some undertake this process in earnest, but many seem to deal with resolutions superficially, or at least not very effectively. The same is frequently true for organizations as well.

So how then should an organization go about deciding which "resolutions" to pursue in the New Year, which goals and objectives are both worthy and achievable? Often there are no "good" or "bad" opportunities, a priori, but some are more likely to result in a successful outcome and/or have more significant payoff than others.

  1. Take stock of the opportunities, and develop a list of key potential initiatives (or review the existing list, if one exists). Consider recent or imminent changes in the marketplace, competitors’ actions, and governmental regulations. Which of these initiatives offers the possibility of consolidating/increasing market share, improving customer service, or represents necessary future investment (in the case of regulations)? And which best supports the existing goals and objectives of the organization?
  2. Assess the capabilities and readiness of the organization to act on these initiatives. An opportunity might be a significant one, but if the organization can’t respond effectively and in a timely manner, then the opportunity will be lost, and the organization might better focus its attention and resources on another opportunity with lesser potential payback, but that has a much greater chance of success.
  3. Develop a roadmap, a tactical plan, for addressing the opportunity. Determine which resources are required – hardware, software, capital, and most importantly people – what policies and procedures must be defined or changed, etc...

Then be prepared to act! Sometimes the best intentions for the New Year fail not for lack of thought or foresight, but for lack of effective follow through. Develop the proper oversight/governance mechanisms, put the plan into action, and then make sure to monitor progress on a regular basis.

These are not difficult steps to follow, but organizations sometimes need help doing so. We’ve found that clients who call us have learned the hard way – either directly or through stories they’ve heard in their industries – that some careful planning, deliberate program design, and – if necessary – some skill assessment and training can take them a long way in their resolutions for success in 2011. Good luck!

photo by L.C.Nøttaasen via Flickr (Creative Commons)

  


Dick Voorhees is a seasoned technology professional with more than 25 years of experience in information technology, data integration, and business analytic systems. He is highly skilled at working with and leading mixed teams of business stakeholders and technologists on data enabling projects.

http://www.b-eye-network.com/blogs/dyche/archives/2011/01/new_years_resol.php Tue, 11 Jan 2011 06:00:00 -0700
Do You Know What Your Reports Are Doing?

By Stephen Putman, Senior Consultant


The implementation of a new business intelligence system often requires the replication of existing reports in the new environment. In the process of designing, implementing, and testing the new system, issues of data elements not matching existing output invariably come up. Many times, these discrepancies arise from data elements extrapolated from seemingly unrelated sources, or from calculations embedded in the reports themselves that often pre-date the tenure of the project team implementing the changes. How can you mitigate these issues in future implementations?

Issues of post-report data manipulation can range from simple - lack of documentation of the existing system - to complex and insidious - "spreadmarts" and stand-alone desktop databases that use the enterprise system for a data source, for example. It is also possible that source systems make changes to existing data and feeds that are not documented or researched by the project team. The result is the same - frustration from the business users and IT group in defining these outliers, not to mention the risk absorbed by the enterprise in using unmanaged data in reports that drive business decisions.

The actions taken to correct the simple documentation issues center on organizational discipline:

  • Establish (or follow) a documentation standard for the entire organization, and stick to it!
  • Implement gateways in development of applications and reports that ensure that undocumented objects are not released to production
  • Perform periodic audits to ensure compliance (one simple automated check is sketched below)
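A periodic audit does not have to be elaborate. The sketch below illustrates one possible automated check that compares production report columns against a documentation catalog; the catalog structures and names are hypothetical stand-ins for whatever metadata repository your organization actually uses.

```python
# Hypothetical documentation catalog: (report, column) pairs with an approved definition.
documented_columns = {
    ("sales_summary", "net_revenue"),
    ("sales_summary", "order_count"),
}

# Hypothetical inventory of what is actually deployed in production.
production_reports = {
    "sales_summary": ["net_revenue", "order_count", "churn_score"],  # churn_score is undocumented
}

def audit_documentation(reports, documented):
    """Return (report, column) pairs that reached production without documentation."""
    return [(report, column)
            for report, columns in reports.items()
            for column in columns
            if (report, column) not in documented]

for report, column in audit_documentation(production_reports, documented_columns):
    print(f"UNDOCUMENTED: {report}.{column}")
```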

Reining in the other sources of undocumented data is a more complicated task. The data management organization has to walk a fine line between controlling the data produced by the organization and curtailing the freedom of end users to respond to changing data requirements in their everyday jobs. The key is communication - business users need to be encouraged to communicate data requirements through an easy-to-use system and to understand the importance of sharing this information with the entire organization. If there is even a hint of disdain or punitive action regarding this communication, it will stop immediately, and these new derivations will remain a mystery until another system is designed.

The modern information management environment is heading more and more towards transparency and accountability, which is being demanded by both internal and external constituencies. The well-documented reporting system supports this change in attitude to reduce risk in external reporting and increase confidence in the veracity of internal reports, allowing all involved to make better decisions and drive profitability of the business. It is a change whose time has come.

photo by r h via Flickr (Creative Commons License)


Stephen Putman has over 20 years of experience supporting client/server and internet-based operations, from small offices to major corporations. He has extensive experience in a variety of front-end development tools, as well as relational database design and administration, and is extremely effective in project management and leadership roles. He is the co-author of The Data Governance eBook, available at information-management.com.


http://www.b-eye-network.com/blogs/dyche/archives/2010/12/do_you_know_wha.php Tue, 21 Dec 2010 06:00:00 -0700
Keep It On Track

By Stephen Putman, Senior Consultant

In my recent blog posting, "Metadata is Key," I talked about one way of changing the mindset of managers and implementers in support of the coming "semantic wave" of linked data management. Today, I give you another way to prepare for the coming revolution, and also become more disciplined and effective in your project management whether you're going down the semantic road or not...

  rathole (n) - [from the English idiom "down a rathole" for a waste of money or time] A technical subject that is known to be able to absorb infinite amounts of discussion time without more than an infinitesimal probability of arrival at a conclusion or consensus.

  Anyone who has spent time implementing computer systems knows exactly what I'm talking about here. Meetings can sometimes devolve into lengthy discussions that have little to do with the subject at hand. Frequently, these meetings become quite emotional, which makes it difficult to refocus the discussion on the meeting's subject. The end result is frustration felt by the project team on "wasting time" on unrelated subjects, with the resulting lack of clarity and potential for schedule overruns.

One method for mitigating this issue is the presence of a "rathole monitor" in each meeting. I was introduced to this concept at a client several years ago, and I was impressed by the focus they had in meetings, much to the project's benefit. A "rathole monitor" is a person who does not actively participate in the meeting, but who understands the scope and breadth of the proposed solution very well and has enough standing in the organization to be trusted. This person listens to the discussion in the meeting and interrupts when he perceives that the conversation is veering off in an unrelated direction. It is important for this person to record the divergence and relay it to the project management team for later discussion - the discussion is usually useful to the project, and if these new ideas are not addressed later, people will keep their ideas to themselves, which could be detrimental to the project.

  This method will pay dividends in current project management, but how does it relate to semantics and linked data? Semantic technology is all about context and relationships of data objects - in fact, without these objects and relationships being well defined, semantic processing  is impossible.  Therefore, developing a mindset of scope and context is essential to the successful implementation of any semantically enabled application. Training your staff to think in these terms makes your organization perform in a more efficient and focused manner, which will surely lead to increased profitability and more effective operations.

photo by xJasonRogersx via Flickr (Creative Commons License)


Stephen Putman has over 20 years of experience supporting client/server and internet-based operations, from small offices to major corporations. He has extensive experience in a variety of front-end development tools, as well as relational database design and administration, and is extremely effective in project management and leadership roles. He is the co-author of The Data Governance eBook, available at information-management.com.


http://www.b-eye-network.com/blogs/dyche/archives/2010/12/keep_it_on_trac.php Thu, 16 Dec 2010 06:00:00 -0700
Metadata is Key

By Stephen Putman, Senior Consultant

One of the most promising developments in data management over the last ten years is the rise of semantic processing, commonly referred to as the "Semantic Web." Briefly described, semantic processing creates a "web of data" complementing the "web of documents" of the World Wide Web. The benefits of such an array of linked data are many, but the main one could be the ability of machines to mine for needed data to enhance searches, recommendations, and the like - work that humans do by hand today.

Unfortunately, the growth of the semantic data industry has been slower than anticipated, mainly due to a "chicken and egg" problem - systems need descriptive metadata added to existing structures in order to function efficiently, but major data management companies are reluctant to invest heavily in tools for doing this until an appropriate return on investment is demonstrated. I feel there is an even more basic issue with the adoption of semantics that has nothing to do with tools or investment - we need the implementers and managers of data systems to change how they think about their jobs, and to make metadata production central to the systems they produce.

The interoperability and discoverability of data are becoming increasingly important requirements for organizations of all types - the financial industry, for example, is keenly aware of the demands of XBRL-enabled reporting systems. Setting external requirements aside, the same capabilities can benefit the internal reporting of the organization as well. Reporting systems go through extended periods of design and implementation, with their contents and design a seemingly well-guarded secret. Consequently, departments not originally included in the system design must expend effort to discover and use the appropriate data for their operations.

The organization and publication of metadata about these reporting systems can mitigate the cost of this discovery and use by the entire organization. Here is a sample of the metadata produced by every database system, either formally or informally:

  • System-schema-table-column
  • Frequency of update
  • Input source(s)
  • Ownership-stewardship
  • Security level

The collection and publication of such metadata in standard forms will prepare your organization for the coming "semantic wave," even if you do not have a specific application that can use this data at the present time. This will give your organization an advantage over companies that wait for these requirements to materialize and then have to play catch-up. Your staff will also begin thinking in terms of metadata capture and dissemination, which will help your company become more efficient in its data management functions.
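To make "collection and publication" concrete, here is a minimal sketch of capturing the attributes listed above as a record and publishing it in a machine-readable form. The field names and example values are illustrative assumptions, not a formal metadata standard.

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class ColumnMetadata:
    """One catalog entry covering the attributes listed above (names are illustrative)."""
    system: str
    schema: str
    table: str
    column: str
    update_frequency: str      # e.g. "daily", "hourly"
    input_sources: list        # upstream feeds or systems
    steward: str               # ownership / stewardship contact
    security_level: str        # e.g. "public", "internal", "restricted"

entry = ColumnMetadata(
    system="finance_dw", schema="rpt", table="gl_balance", column="amount_usd",
    update_frequency="daily", input_sources=["gl_extract"],
    steward="finance.data@example.com", security_level="internal",
)

print(json.dumps(asdict(entry), indent=2))   # publish the entry in a machine-readable form
```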

photo by ~Brenda-Starr~ via Flickr (Creative Commons License)


Stephen Putman has over 20 years of experience supporting client/server and internet-based operations, from small offices to major corporations. He has extensive experience in a variety of front-end development tools, as well as relational database design and administration, and is extremely effective in project management and leadership roles. He is the co-author of The Data Governance eBook, available at information-management.com.


http://www.b-eye-network.com/blogs/dyche/archives/2010/12/metadata_is_key.php Tue, 14 Dec 2010 06:00:00 -0700
Making It Fit

By Stephen Putman, Senior Consultant

I've spent the last eighteen months at clients that have aging technology infrastructures and are oriented toward building applications rather than buying more integrated software packages. All of these organizations face a decision similar to the famed "build vs. buy" decision made when implementing a new enterprise computer system - do we acquire new technology to fulfill requirements, or adapt our existing systems to accomplish business goals?

Obviously, there are pros and cons to each approach, and external factors such as enterprise architecture requirements and resource constraints factor into the decision. However, there are considerations independent of those constraints whose answers may guide you to a more effective decision. These considerations are the subject of this article.

Ideally, there would not be a decision to make here at all - your technological investments are well managed, up-to-date, and flexible enough to adapt easily to new requirements. Unfortunately, this is rarely the case in most organizations. Toolsets are cobbled together from developer biases (from previous experience), enterprise standards, or inclusion of OEM packages with larger software packages such as ERP systems or packaged data warehouses. New business requirements often appear that do not fit neatly into this environment, which makes this decision necessary.

Acquire New

The apparent path of least resistance in addressing new business requirements is to purchase specialized packages that solve tactical issues well. This approach has the benefit of being the solution that would most closely fit the requirements at hand. However, the organization runs the risk of gathering a collection of ill-fitting software packages that could have difficulty solving future requirements. The best that can be hoped for in this scenario is that the organization leans toward obtaining tools that are based on a standardized foundation of technology such as Java. This enables future customization if necessary and ensures that there will be resources available to do the future work without substantial retraining.

Modify Existing Tools

The far more common approach to this dilemma is to adapt existing software tools to the new business requirements. The advantage to this approach is that your existing staff is familiar with the toolset and can adapt it to the given application without retraining. The main challenge in this approach is that the organization must weigh the speed of adaptation against the possible inefficiency of the tools in the given scenario and the inherent instability of asking a toolset to do things that it was not designed to do.

The "modify existing" approach has become much more common in the last ten to twenty years because of budgetary constraints imposed upon the departments involved. Unless you work in a technology company in the commercial product development group, your department is likely perceived as a cost center to the overall organization, not a profit center, which means that money spent on your operations is an expense instead of an investment. Therefore, you are asked to cut costs wherever possible, and technical inefficiencies are tolerated to a greater degree. This means that you may not have the opportunity to acquire new technology even if it makes the most sense.

The decision to acquire new technology or extend existing technology to satisfy new business requirements is often a decision between unsatisfactory alternatives. The best way for an organization to make effective decisions given all of the constraints is to base its purchase decisions on standardized software platforms. This way, you have the maximum flexibility when the decision falls to the "modify existing" option.

photo by orijinal via Flickr (Creative Commons License)


Stephen Putman has over 20 years of experience supporting client/server and internet-based operations, from small offices to major corporations. He has extensive experience in a variety of front-end development tools, as well as relational database design and administration, and is extremely effective in project management and leadership roles. He is the co-author of The Data Governance eBook, available at information-management.com.


http://www.b-eye-network.com/blogs/dyche/archives/2010/12/making_it_fit.php Fri, 10 Dec 2010 06:00:00 -0700
Understanding Where Our Work Comes From

By Mary Anne Hopper, Senior Consultant

I've written quite a bit about the importance of establishing rigor around the process of project intake and prioritization. If you're sitting there wondering how to even get started, I believe it is important to understand where these different work requests come from because, unlike application development projects, BI projects tend to have touch points across the organization. I tend to break the sources into three main categories—stand-alone projects, upstream applications, and enhancements.

Stand-alone BI projects are those that are not driven by new source system development. Project types can include new data marts, reporting applications, or even re-architecting legacy reporting environments. Application projects are driven by changes in any of the upstream source systems we use in the BI environment, including new application development and changes to existing applications. Always remember that the smallest of changes in a source system can have the largest of impacts on the downstream BI application environment. The enhancements category is the catch-all for low-risk development that can be accomplished in a short amount of time.

Just as important as understanding where work requests come from is prioritizing those requests. All three categories need to be considered in the same prioritization queue—this is a step that challenges a lot of the clients I work with. So why is it so important to prioritize work together? The first reason is resource availability. Resource impact points include project resources (everyone from the analysts to the developers to the testers to the business customers), environment availability and capacity (development and test), and release schedules. And, most importantly, prioritizing all work together ensures the business is getting its highest-value projects completed first.
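As a simple illustration of a single queue, the sketch below scores all three request types together. The scoring formula and the example requests are hypothetical; the point is only that stand-alone, upstream, and enhancement work compete in one prioritized list.

```python
# Hypothetical intake queue mixing the three request categories described above.
requests = [
    {"name": "Customer churn data mart",   "type": "stand-alone", "business_value": 9, "effort": 8},
    {"name": "New billing system feed",    "type": "upstream",    "business_value": 7, "effort": 5},
    {"name": "Add region to sales report", "type": "enhancement", "business_value": 4, "effort": 1},
]

def priority(request):
    """Higher business value and lower effort float a request to the top of the single queue."""
    return request["business_value"] / request["effort"]

for request in sorted(requests, key=priority, reverse=True):
    print(f'{request["type"]:<12} {request["name"]:<30} priority={priority(request):.1f}')
```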

  


Mary Anne has 15 years of experience as a data management professional in all aspects of successful delivery of data solutions to support business needs. She has worked in the capacity of both project manager and business analyst to lead business and technical project teams through data warehouse/data mart implementation, data integration, tool selection and implementation, and process automation projects.

http://www.b-eye-network.com/blogs/dyche/archives/2010/11/understanding_w.php Tue, 02 Nov 2010 06:00:00 -0700
The Price of a Tube of Toothpaste

By Mary Anne Hopper, Senior Consultant

As you can imagine, I travel quite a bit as a consultant for Baseline.   Over my tenure, I have developed a standard routine for getting through the airport.   More often than not, things have gone pretty smoothly for me.   Until this week – my bag was pushed into the extra screening area where it turned out there was an over-sized tube of toothpaste that had to be thrown away.     How did this happen when week in and week out, I use the same bag for my stuff and always get through without a hitch?   Well, I deviated from my process.

You see, the prior week I actually checked a bag and was able to throw a full tube of toothpaste in the ditty bag, and I never re-checked it when I was packing for this week's trip. I deviated from my standard process. If you've ever implemented a "small" or "low impact" change that has blown up an ETL job, changed the meaning of a field, or caused a report to return improper results, you know where I'm going with this.

Process is important. Discipline in following that process is even more important. Am I proposing that every small change go through an entire full-blown project lifecycle? Absolutely not. But there should be a reasonable life cycle for everything that goes into a production-quality environment. Taking consistent steps in delivery helps ensure that even the smallest of changes do not result in high-impact outages. This can be achieved by taking the time to analyze, develop, and then test changes prior to implementation. What the right level of rigor is depends on the impact of the environment being unavailable or incorrect.

So, what did I learn from my experience with the toothpaste? My deviation only cost me about $3.50, some embarrassment in the TSA line, and an unplanned trip to CVS. I learned I will no longer change my travel packing plans (whether or not I check luggage). What can you learn? There is a cost in time and/or dollars if you don't follow a set process. The best starting place is to work with your business and/or IT partners to reach consensus on the right level of rigor - and stick with it.

Photo provided by CogDogBlog via Flickr (Creative Commons License).


Mary Anne has 15 years of experience as a data management professional in all aspects of successful delivery of data solutions to support business needs. She has worked in the capacity of both project manager and business analyst to lead business and technical project teams through data warehouse/data mart implementation, data integration, tool selection and implementation, and process automation projects.

http://www.b-eye-network.com/blogs/dyche/archives/2010/10/the_price_of_a.php Thu, 28 Oct 2010 06:00:00 -0700
Governance at the Turkey Fry

By Mary Anne Hopper, Senior Consultant

There is a long-standing tradition at our area's annual turkey fry: the competition for 'Best Dish.' The first year, we made awesome grilled artichokes and lost to boxed lemon squares (no, I'm not bitter). The second year, we deep-fried artichokes, wrapped them in bacon, put them on a stick, and lost to some chocolate cake monstrosity. The third year, I walked in the door and was appointed to the judging team. How exciting—I would finally figure out how to win the contest. Well, guess what? The host said there were no rules, to just pick something, and to keep in mind that desserts always win.

Sound familiar? It probably does because this is how a lot of you determine what BI projects you’re going to work on. Here is a sampling of techniques I’ve heard from some of you:

  • We pick what looks most interesting.
  • The queue is prioritized based on the level of the requestor.
  • [Insert name here] from the PMO decides.
  • Prioritization?   Everything is the most important so we work on it all.
  • If it’s not breaking, we don’t touch it.
  • I don’t know how stuff gets done but my request always seems to be at the bottom.

I'm going to suggest that this isn't the most effective or collaborative way to manage your portfolio of projects, not to mention that the business isn't getting as much value out of you as it could. So how do you fix it? A good place to start is by developing a standardized and transparent project intake process for requests and prioritization. Build the process with your key business stakeholders and then stick to it—the first exception is the one that begins to derail your process.

As for the 'Best Dish' award – I'm not a dessert person, so the potato casserole won.

Photo provided by Pedula Man via Flickr (Creative Commons License).


Mary Anne has 15 years of experience as a data management professional in all aspects of successful delivery of data solutions to support business needs. She has worked in the capacity of both project manager and business analyst to lead business and technical project teams through data warehouse/data mart implementation, data integration, tool selection and implementation, and process automation projects.

http://www.b-eye-network.com/blogs/dyche/archives/2010/10/governance_at_t.php Thu, 21 Oct 2010 06:00:00 -0700
Repeating the Basics

By Mary Anne Hopper, Senior Consultant

It's not like I spend my weekends and down-time thinking about work, but some of the lessons I learn over the weekend apply to Monday through Friday. Case in point: my kids recently attended a racing seminar with an Olympic sailing hopeful. He worked with them for three solid days, focusing on boat-handling drills. Some of the kids wanted to know why. His response - because you have to be able to repeat the basics the same way every time so you can deal with all the things that will be different every time (like wind velocity, wind shifts, waves, and other boats).

How does that relate to BI projects? In order to deliver value to our business partners in the timeframe they expect, we have to be able to execute our projects by repeating the basics every time, so we can deal with the things that will be different every time. I hope that sounds familiar.

Two great starting points are the project intake and requirements processes. I use the phrase 'starting point' for a reason. The intake process defines what work your BI team will take on and in what order. After those requests become defined projects, the requirements process (business, data, functional/application) then defines what is going to be delivered to the business. No matter how well defined the design and delivery processes are, the beginning of the cycle is critical to success. The breakdown below shows some examples of which process components need to be consistent and repeatable and what types of things are likely to change on you.

Project Intake

  What's the same:
  • Request cataloging
  • Prioritization

  What's different:
  • Business users
  • Business priorities
  • Resource availability

Requirements

  What's the same:
  • Business requirements supported by data and functional/application requirements
  • Artifacts
  • Acceptance criteria
  • Sign-off

  What's different:
  • Data availability
  • Data quality
  • Business users

This is a short example list and by no means all-inclusive - there are numerous other examples. The take-away is that ongoing delivery success depends on the basics. Nailing the basics with consistency will make it much easier to handle all the things that continue to change.

Photo provided by John Hopper.


Mary Anne has 15 years of experience as a data management professional in all aspects of successful delivery of data solutions to support business needs. She has worked in the capacity of both project manager and business analyst to lead business and technical project teams through data warehouse/data mart implementation, data integration, tool selection and implementation, and process automation projects.

http://www.b-eye-network.com/blogs/dyche/archives/2010/10/repeating_the_b.php Thu, 14 Oct 2010 06:00:00 -0700
Don Quixote and Change Management

By Carol Newcomb, Senior Consultant

What does "change management" mean to you? Is it the same thing that other people think of? Do you even know? Do you have a definition? I'll bet you have more than one!

Some things I’ve heard in my recent travels include:  

  • Project plan for implementing changes to data management processes
  • Communications plan for alterations in the SDLC
  • Cultural change required to digest or accept more formalized processes and standards
  • High-level leadership directives enticing more cooperation among the ranks

If you get the feeling that "change management" is a mirage, one of those windmills in the distance that keeps fading in and out of focus, maybe you'd better get off that donkey and start asking some questions, like "Where did this idea come from?" and "Whose idea is it?" Let's set some ground rules for change management.

I.   What’s the end goal?  

This is the vision, the purpose.   Is it enlightenment, or something more tangible, like faster time to delivery, more efficient team coordination, or a call for cultural change?   Change management needs a name and a specific goal.   Otherwise, my windmill looks different than your windmill.

II.   How will you measure it?

If you can’t measure it, it doesn’t exist.   Think about it—anything tangible has a weight, a height, a mass, or a velocity per unit.   Even black holes can be measured.   Change is a progression from one state to another, and not to put too tight a noose around it, but if you don’t know where you started, you’ll never know where you are in your journey.

III.   How will you know when you have achieved it?

Back to the vision, or the goal. Is there an objective to your journey, or are you just wandering and taking it all in? How far away are you from reaching your destination? Do you have a sense of how many more resources you need, what kind of resources, and what roadmap will guide them to the destination? When can you say to the team, "Enough already"?

IV.   How will you share it or communicate it?

Can other people learn from your experience in your journey, or will it seem like another distant mirage to them as they embark on their own travels?   What kind of institutional learning can you build to help other areas or individuals achieve similar goals?   What kinds of differences are relevant and notable?

Change management will be a guide and a journal for others in the future. Don Quixote never conquered his windmills, but NASA put a man on the moon. So take a practical approach. After all, once you've seen all those windmills and never reached them, how much more time and effort are you going to expend before ditching the whole journey?

photo by spotter_nl via Flickr (Creative Commons License)


Carol Newcomb is a Senior Consultant with Baseline Consulting. She specializes in developing BI and data governance programs to drive competitive advantage and fact-based decision making. Carol has consulted for a variety of health care organizations, including Rush Health Associates, Kaiser Permanente, OSF Healthcare, the Blue Cross Blue Shield Association and more. While working at the Joint Commission and Northwestern Memorial Hospital, she designed and conducted scientific research projects and contributed to statistical analyses.

http://www.b-eye-network.com/blogs/dyche/archives/2010/09/don_quixote_and.php Thu, 30 Sep 2010 06:00:00 -0700
The Data Governance eBook: Maps, Mechanics and Morals

Recently, we launched our newest e-book, The Data Quality eBook. We ran an excerpt from that book last week. But did you know we have another e-book out there? Earlier this year, we published The Data Governance eBook to rousing success. If you haven't had a chance to download and read this book by Kimberly Nevala and Stephen Putman, don't worry—it's still available online.

Here's a description:

When the fictitious business team at the SpectraDynamo corporation launches a new data governance initiative, the trials and tribulations begin. The Data Governance eBook, authored by the experts at Baseline Consulting, is an entertaining business parable with a message aimed at helping you launch data governance the right way. This bold new look at data governance is more than just a cautionary tale. It's what Baseline partner Jill Dyché calls "a prescription for getting it right." The Data Governance eBook includes all the nuts and bolts necessary to transcend the requisite decision-by-committee and role confusion so endemic to companies’ information management efforts.

Here's a brief excerpt that showcases what you'll find in our first e-book:

The moral of the story: data governance can be tough. By definition, innovation is disruptive and antithetical to established paradigms. Applying rigor in the form of governance and oversight to information is no exception to this rule. Mix the laundry list of competing information needs and issues in the average enterprise with the inevitable confusion regarding ‘what does it mean’ and the data governance proposition quickly becomes overwhelming. The multitude of data governance definitions and methodologies parlayed by both industry analysts and vendors—is it: data quality? information policy? a dashboard? a council?—certainly doesn’t help.

The fact is: there is no standard playbook for data governance. The structure of your program is dependent on the specific problems governance needs to address and the organization's incumbent structure and culture. Some companies—like SpectraDynamo—find a bottom-up approach centered around core data management functions provides the greatest initial leverage. Organizations that are more hierarchical or consensus driven in nature may require formal enlistment of executive sponsorship and deliberate assignment of decision rights for governance to take root. And yes, you may start somewhere in the middle...

What is consistent? The fundamental roles and responsibilities found in functioning data governance programs. In this section, we define those core functions: executive sponsorship, data governance (policy making and decision rights), data management and data stewardship. These functions align with and support everything from strategy alignment and organizational buy-in to tactical enablement and operations.

When evaluating these capabilities, remember the single most important success factor for entrenching data governance as a core operating principle: continuous improvement. Rather than boiling the ocean, identify key business pain points and the core capabilities required to address them. Prioritize and pick a pilot project. Finally, prototype data governance-related processes and don’t be afraid to refine them as your capabilities and reach grow.


Interested in knowing more? Download The Data Governance eBook today!

http://www.b-eye-network.com/blogs/dyche/archives/2010/09/the_data_govern.php Thu, 09 Sep 2010 08:00:00 -0700