A Strategy for Moving from Reactive to Predictive Analytics

Originally published April 18, 2011

IHS CERA published an interesting article in November 2010 entitled “Toward the Predictive Organization.” The main focus of the piece was the premise that most oil and gas exploration and production (E&P) companies tend to operate in a reactive manner, with ad hoc processes launched to handle events after they have occurred, as opposed to anticipating and hopefully mitigating the impact of particular events. E&P organizations have always had extensive instrumentation capabilities, but have these systems been utilized to predict and eliminate errant events? The article outlines some of the headline business benefits that can be achieved, details some of the organizational and cultural challenges that need to be addressed, and helpfully illustrates them with real use cases. I don’t intend to regurgitate the article here, but as a companion piece to it, I do want to explore some of the essential data management prerequisites for predictive organizations.

Before doing that, I’d like to clarify some terminology. There is a tendency to use the terms “business intelligence” and “analytics” (be they predictive or advanced) almost interchangeably. There is some overlap between the disciplines, but they are somewhat different. I like the description James Taylor uses in his JT on EDM Blog, where he describes “a sliding scale from BI to descriptive and predictive analytics – from descriptive knowledge about the past to increasingly prescriptive predictions about the future.” This summarizes a picture I have used for many years to illustrate the evolution from reporting to fully embedded analytics.
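As a rough illustration of that sliding scale, here is a minimal Python sketch (the production figures, field names and straight-line trend are purely hypothetical, not taken from the original article): a descriptive step reports what has already happened, while even a very simple predictive step projects what is likely to happen next.

```python
# Minimal illustration of the BI-to-analytics spectrum using hypothetical
# daily production figures (barrels per day) for a single well.
from statistics import mean

daily_production = [1480, 1475, 1462, 1458, 1449, 1441, 1437]  # hypothetical data

# Descriptive (classic BI): report what has already happened.
print(f"Average production last week: {mean(daily_production):.0f} bbl/d")

# Predictive (simple sketch): fit a straight-line trend and project it forward.
n = len(daily_production)
xs = range(n)
x_bar, y_bar = mean(xs), mean(daily_production)
slope = sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, daily_production)) / \
        sum((x - x_bar) ** 2 for x in xs)
intercept = y_bar - slope * x_bar
forecast = intercept + slope * (n + 6)  # project the trend one week ahead
print(f"Projected production in one week: {forecast:.0f} bbl/d")
```

Real predictive analytics would, of course, use far richer models and data, but the contrast between summarizing the past and projecting the future is the point of the sliding scale.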

In that picture, analytics builds on top of a maturing information management foundation. That doesn’t mean that analytical services cannot be developed in isolation from a supporting information management program, but it does mean that many of the ancillary benefits of information management (e.g., data quality, governance, integration and rationalization) are forsaken and need to be recreated from scratch on each analytical project to ensure that the analytics use accurate data in the proper context. This may result in radically different and inconsistent data management approaches being adopted. I would argue that analytical services benefit significantly if they can build upon a successful and rigorous information management program. So let’s explore how such a foundation can be developed.

Firstly, it’s important to determine a roadmap or strategy that takes the organization’s business intelligence and analytics from its current state to where it wants to be in two to five years. Such a strategy needs to be driven by business priorities and needs to be pragmatic rather than purist. Business priorities may mandate some tactical implementations that do not align perfectly with the longer term strategy but should remain compliant with it where possible. Similarly, the strategy should be revisited on a regular (e.g., bi-annual) basis to ensure ongoing alignment with business priorities.

The cornerstone of successful information management is recognizing the need for a “single version of the truth.” That doesn’t mean slavishly adopting an enterprise data model; it means following a pragmatic approach to moving toward that vision.

Data proliferates across the whole oil and gas supply chain and is typically managed and maintained independently in each of the different functional groups. Integrating and interchanging data across these functional “silos” is a major challenge since the specialized applications in each function tend to use proprietary data models optimized for their specific task. Connecting the data requires detailed analysis and understanding of the different data models, combined with the ability to transform and translate the data into a common form. You can purchase specialist tools that achieve this point-to-point data integration via pre-defined connectors and translators. They enable a limited degree of cross-functional integration in a virtualized, synthesized form and provide a good tactical solution, but they are expensive to deploy at scale. Such point-to-point integration may help address the immediate reporting challenge but cannot adequately meet the strategic analytical requirements.
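To give a flavor of what that translation work involves, the following hypothetical Python fragment maps well records from two imagined silo schemas into one common form. The field names (WELL_NO, TD_FT, uwi, depth_m) and the records are invented for illustration only; real connectors handle far more attributes and far messier variations.

```python
# Hypothetical sketch of the translation step in point-to-point integration:
# two functional silos describe the same well in proprietary shapes, and each
# record is mapped into one common form before it can be compared or joined.
def normalise_well_id(raw: str) -> str:
    """Strip punctuation and case differences so both silos key on one ID."""
    return "".join(ch for ch in raw.upper() if ch.isalnum())

def from_drilling(rec: dict) -> dict:
    # The drilling system reports measured depth in feet.
    return {"well_id": normalise_well_id(rec["WELL_NO"]),
            "total_depth_m": round(rec["TD_FT"] * 0.3048, 1)}

def from_production(rec: dict) -> dict:
    # The production system already reports depth in metres.
    return {"well_id": normalise_well_id(rec["uwi"]),
            "total_depth_m": rec["depth_m"]}

drilling_record = {"WELL_NO": "A-101", "TD_FT": 9850}    # hypothetical silo 1
production_record = {"uwi": "a101", "depth_m": 3002.3}   # hypothetical silo 2

common = [from_drilling(drilling_record), from_production(production_record)]
print(common)  # both records now share one schema and one well identifier
```

The cost of this approach is that every pair of systems needs its own mapping, which is why it works tactically but does not scale strategically.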

To realize the “single version of the truth,” companies need to adopt a pragmatic and cost-effective enterprise information management (EIM) strategy that is directly aligned with, and prioritized by, the company’s overall business objectives. It has to encompass the complete information lifecycle, from source data acquisition through to delivery to end users, processed into reports, dashboards or whatever format is required. The EIM strategy is key to establishing a single holistic view of operations that is accurate, trusted and used by all relevant users and functions in the business.

Although the end goal may be the establishment of a physical “single version of the truth,” it is recognized that achieving that goal will be the result of a long journey, and there may be interim solutions that can act as stepping stones. As an industry, oil and gas has collaborated very effectively to establish industry standards such as Production Markup Language (ProdML), Wellsite Information Transfer Standard Markup Language (WITSML) and Professional Petroleum Data Management (PPDM). While these standards are being adopted in companies, many of the established applications do not leverage them yet. Assets such as the PPDM data model do present an excellent reference point to enable organizations to standardize their data management. Adopting master data management (MDM) techniques on top of PPDM links the disparate operational data definitions to a common central definition. Such an undertaking begins to connect the dots and provides the basis for further evolution. The organization can begin to see the cross-functional relationships that may have been previously masked.
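A minimal sketch of that MDM cross-referencing idea is below. The system names, identifiers and master record are invented for illustration and are not actual PPDM structures; the point is simply that each source system’s local identifier resolves to one central “golden” definition.

```python
# Hypothetical sketch of an MDM cross-reference: each source system's local
# well identifier is linked to a single master record, so queries from
# different functions resolve to the same central definition.
master_wells = {
    "MW-0001": {"well_name": "Alpha 101", "country": "GB", "spud_date": "2010-06-01"},
}

# Cross-reference: (source system, local id) -> master id
xref = {
    ("DRILLING_APP", "A-101"): "MW-0001",
    ("PROD_HISTORIAN", "a101"): "MW-0001",
    ("FINANCE_ERP", "GB-ALPHA-101"): "MW-0001",
}

def resolve(system, local_id):
    """Return the master well record for a source system's local identifier."""
    master_id = xref.get((system, local_id))
    return master_wells.get(master_id) if master_id else None

print(resolve("PROD_HISTORIAN", "a101"))  # same well as the drilling record
print(resolve("FINANCE_ERP", "GB-ALPHA-101"))
```

Once that linkage exists, relationships that were previously hidden inside individual silos become visible across functions.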

Supplementing the MDM infrastructure with data federation techniques can virtualize elements of the “single version of the truth.” While again this can be a valuable interim step, it can’t be used effectively for large-scale integration since the cost of ongoing virtualization becomes difficult to sustain. It does facilitate the development and deployment of analytical applications that would not previously have been possible. It is likely that a tipping point will be reached where some of the virtualized data relationships can be more cost-effectively delivered via physical instantiation. This should be addressed from a corporate perspective and form the framework for a corporate data store or data warehouse.
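The trade-off can be sketched with two in-memory SQLite databases standing in for the source systems (everything here is hypothetical): a federated query re-joins the sources on every request, whereas physical instantiation pays the join cost once and persists the result in a corporate store.

```python
# Hypothetical sketch: federation (join the sources at query time) versus
# physical instantiation (materialise the joined result once and reuse it).
import sqlite3

source_a = sqlite3.connect(":memory:")   # stands in for one functional system
source_a.execute("CREATE TABLE wells (well_id TEXT, field TEXT)")
source_a.execute("INSERT INTO wells VALUES ('MW-0001', 'Alpha')")

source_b = sqlite3.connect(":memory:")   # stands in for a second system
source_b.execute("CREATE TABLE production (well_id TEXT, oil_bbl REAL)")
source_b.execute("INSERT INTO production VALUES ('MW-0001', 1450.0)")

def federated_query():
    """Virtualised view: pull from both sources and join in flight, every time."""
    wells = dict(source_a.execute("SELECT well_id, field FROM wells"))
    rows = source_b.execute("SELECT well_id, oil_bbl FROM production")
    return [(well_id, wells.get(well_id), oil_bbl) for well_id, oil_bbl in rows]

# Physical instantiation: do the join once and persist it in a corporate store.
warehouse = sqlite3.connect(":memory:")
warehouse.execute("CREATE TABLE well_production (well_id TEXT, field TEXT, oil_bbl REAL)")
warehouse.executemany("INSERT INTO well_production VALUES (?, ?, ?)", federated_query())

print(federated_query())                                          # cost paid per query
print(list(warehouse.execute("SELECT * FROM well_production")))   # cost paid once
```

At small volumes the per-query cost of federation is negligible; the tipping point arrives when the same joins are being recomputed constantly at scale.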

All the aforementioned stages should be conducted under the auspices of a data governance program to ensure the necessary rigor and stewardship are applied to the data analysis, modeling and integration processes. The data governance program should also implement a data quality regime to measure data quality, identify pollutants and remediate at source. The data governance function must not be an IT-centric unit; key stakeholders from the business functions should actively participate as data stewards with the authority to arbitrate on, and ultimately make, data ownership decisions. This business input should also determine the roadmap and the relative priorities for when specific data sources are incorporated into the overall program. The data governance function should be established at the initiation of the information management program; otherwise, there is a risk that any information management developments that pre-date it will not conform to its policies.
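As a hedged sketch of what such a data quality regime might measure, the fragment below computes completeness and validity over a few invented records and flags pollutants against their owning source system; the records, rules and system names are assumptions for illustration only.

```python
# Hypothetical sketch of basic data quality measurement: completeness and
# validity metrics plus a simple report that points stewards back to the
# owning source system so remediation can happen at source.
records = [
    {"well_id": "MW-0001", "spud_date": "2010-06-01", "total_depth_m": 3002.3, "source": "DRILLING_APP"},
    {"well_id": "MW-0002", "spud_date": None,         "total_depth_m": 2410.0, "source": "DRILLING_APP"},
    {"well_id": "MW-0003", "spud_date": "2011-01-20", "total_depth_m": -5.0,   "source": "LEGACY_IMPORT"},
]

def completeness(rows, field):
    """Share of records where the field is populated."""
    return sum(1 for r in rows if r.get(field) is not None) / len(rows)

def validity(rows, field, rule):
    """Share of populated records that pass a business rule."""
    populated = [r for r in rows if r.get(field) is not None]
    return sum(1 for r in populated if rule(r[field])) / len(populated)

print(f"spud_date completeness: {completeness(records, 'spud_date'):.0%}")
print(f"total_depth_m validity: {validity(records, 'total_depth_m', lambda d: d > 0):.0%}")

# Remediate at source: report offending records against their owning system.
for r in records:
    if r["total_depth_m"] <= 0:
        print(f"Pollutant: {r['well_id']} (owner: {r['source']})")
```

The metrics themselves matter less than the fact that business data stewards own the rules and the remediation decisions.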

Following this approach is very much akin to building a house. The strategy equates to the architectural blueprints and plans; the data model is the foundation; the reference data the bricks; and the governance the mortar that binds it all together, providing the infrastructure for information and analytics to flourish.


  • Elliott McClements
    Elliott has more than 25 years of experience working with leading companies across Europe to fully leverage their data assets. He started his consulting career with Codd and Date at the advent of relational database technologies, advising organizations how to transition toward relational technologies and what became known as business intelligence. He then spent over 10 years with Teradata helping to define and mature the science behind data warehousing. There followed seven years with Ab Initio, addressing the challenges of enterprise data integration, data quality, metadata and service orientation of data assets. Having gathered extensive expertise in telecommunication, finance and retail, he is now a key part of the HP BIS team helping to leverage data and business intelligence in the energy industry.

    Editor's Note: Elliott McClements leads the BeyeNETWORK's Energy Channel. Be sure to visit today for more helpful articles and resources relating to business intelligence in the energy industry.
