

Implementing Metrics Management for Improving Clinical Trials Performance

Originally published July 17, 2008

In the fast-paced world of clinical trials research, we are inundated with numbers: p-values, coefficients, eigenvalues. And, in some cases, these are accompanied by Greek symbols to give us the illusion that they are correct and indisputable – statistical evidence to support our theories. Yet we have seen too many companies measure the wrong things. Worse yet, they try to measure everything – and, while they may have lots of data, the results are less than compelling in terms of driving improved performance.

Metrics can help us improve clinical trial performance, establish operational benchmarks, and create a balanced view (or scorecard) that monitors not just trial performance but also financial metrics, customer satisfaction, quality, and organizational growth. We can, of course, also use these metrics to identify substandard performance, both internally and relative to the industry.

In this article, we will talk about metrics – the numbers we use to set expectations for future behavior based on past performance, and the numbers that define how someone is doing or should be doing. On a grander scale, we can estimate how long a clinical trial should take by therapeutic area, or perhaps estimate patient recruitment costs. But the focus of this article is understanding which metrics will actually improve performance.

Metrics: A Retrospective

Measurement has a long tradition in the natural sciences, and it has been discussed in software circles since the art of crafting 1’s and 0’s into beautifully orchestrated symphonies of functional relevance began half a century ago. At the end of the nineteenth century, physicist Lord Kelvin (1824-1907) made the following statement about measurement (Kelvin, 1891):

“When you can measure what you are speaking about, and express it in numbers, you know something about it; but when you cannot measure it, when you cannot express it in numbers, your knowledge is of a meager and unsatisfactory kind: It may be the beginning of knowledge, but you have scarcely in your thoughts advanced to the stage of science.”

In short, measuring progress is a good thing. In fact, there is evidence that simply measuring something makes it better. Measurement can certainly provide motivation, which is one of the central theories of why metrics work as a tool for improving productivity.

Triplett (1898) observed that bicycle racers have faster times when they race against others than when they race alone. Based on this observation, he hypothesized that the presence of competitors causes people to perform tasks faster. The general phenomenon demonstrated by Triplett is called social facilitation, which refers to the faster (or better) performance of a task when others are present. Triplett's study was one of the first to demonstrate this claim experimentally.

More recently, when Don Clifton took over the Gallup organization in the 1960s, he noted huge improvements in keypunch operators’ performance when he told them how many keystrokes they could produce versus their fellow employees.

Measurement and Motivation

There is a saying: “Tell me how someone is measured, and I will tell you how they behave.” We have to be very careful to measure the right things in order to elicit the right behavior. We all know, for example, that most used-car salespeople are paid a commission on the total price of the car they sell you. They are rewarded for selling the car at the highest price possible.

Motivation, talent, drive, dexterity, and experience are all factors in determining how successful a programmer can be at a given task. With enough motivation, an ordinary person can become a world-class athlete; without it, the same person could be begging for change downtown. Even a tremendously talented programmer will go nowhere without motivation. Why are some people always so motivated? What are the sources of their motivation? This was a central theme when Triplett studied the effects of audience and competition on performance in the late nineteenth century. Though a great deal has been written on motivation since then, it remains an individual construct. As a leader or an artisan, you need to identify what motivates you or your people and cultivate those sources of motivation. Individual differences will no doubt play an important role in individual performance.

Clinical Trials Performance Management

As an industry, we face the natural ebbs and flows of a compound as it matures through its life cycle. In the past 15 years, most organizations we’ve worked with have faced a similar challenge – differentiating themselves in a highly competitive market. By nature, these companies are intensely focused on seeking competitive advantage through operational efficiency and innovation within processes that are already tight and well defined (can you spell validation?).

However, while most biotechs, pharmaceutical companies, CROs and site management organizations measure, monitor and report on organizational performance, the effort isn’t always strategically driven or built on the right blend of metrics and key performance indicators (KPIs). Often, we just choose the easiest thing we can measure.

The challenge becomes:

  • Providing timely, accurate, operational performance monitoring to executives and managers so that alignment and execution improve business performance.

  • Making critical information easily available to sponsors and/or business partners.

The problem many organizations face is that most trials management systems are homegrown, manual systems that represent silos of information. The clinical trial management system (CTMS), clinical data management system (CDMS), patient/physician recruiting and project management systems are typically not well integrated across protocols. We have spent most of our careers making this data more accessible, usable, and predictive.

What’s required is a platform that makes the entire organization’s performance visible and supports improved decision making by providing better:

  • Monitoring and auditing progress of trial metrics across sites and/or divisions while providing insight into costs, recruiting, staff allocation, revenue, profitability and contract milestones.

  • Executive and operational insights into efficiency, timeliness and quality.

  • Control of financial and quality risk during the clinical trial life cycle through improved insight and analysis.


In Figure 1, we show how an integrated platform for metrics management can provide a holistic perspective on clinical trials performance improvement.


Figure 1

Choosing the Right Metrics

We would certainly be doing you a disservice if we were to sit here and tell you what you ought to measure. Obviously, it depends. We know most people just want the short answer. They say, “Tell me what other companies are doing and I’ll do that.” But it really does matter what you care about as a company, what your culture is and what you are interested in achieving. Most CROs, for example, want insight into whether they are meeting their contracted arrangements with clients (for example, patient recruitment/enrollment, number of queries and resolutions, change orders, Last-Patient-Last-Visit to database lock, financial profitability, average per-patient costs, etc.), while pharmaceutical companies tend to care more about the timeliness of the trial, whether milestones are being met, adverse events, and the time from database lock to final results.
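To make two of these example KPIs concrete, here is a minimal Python sketch that computes the Last-Patient-Last-Visit-to-database-lock interval and the average per-patient cost from a handful of trial records. All field names, protocol IDs, dates, and figures are invented for illustration and do not come from any real CTMS.

```python
from datetime import date

# Hypothetical trial records; the field names are illustrative assumptions.
trials = [
    {"protocol": "ABC-101", "lplv": date(2008, 1, 15),
     "db_lock": date(2008, 3, 1), "total_cost": 450_000, "patients": 120},
    {"protocol": "ABC-102", "lplv": date(2008, 2, 10),
     "db_lock": date(2008, 4, 20), "total_cost": 300_000, "patients": 75},
]

def lplv_to_lock_days(trial):
    """Days from Last-Patient-Last-Visit to database lock."""
    return (trial["db_lock"] - trial["lplv"]).days

def cost_per_patient(trial):
    """Average cost per enrolled patient."""
    return trial["total_cost"] / trial["patients"]

for t in trials:
    print(t["protocol"], lplv_to_lock_days(t), round(cost_per_patient(t), 2))
```

The point is not the arithmetic, which is trivial, but that each metric gets a single, explicit calculation rule that everyone can inspect and agree on.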

But it isn’t just about counting. You have to consider the entire context when introducing the collection of metrics. We advocate a systematic approach to implementing and using meaningful metrics – one that goes a long way toward completing the picture:

  1. Identify who will eventually consume the metrics (stakeholders).

  2. Determine how metrics will change the way we do business.

  3. Select the right metrics and rationalize the calculation rules.

  4. Figure out what you’re going to do with the results.

  5. Define the reporting mechanism(s) (scorecard, dashboard, real time reports).

  6. Determine the right source of the data.

  7. Collect the data and validate the results (people must believe in the data).

  8. Continuously evaluate the metrics (get feedback, study the utility of your measurement results).

  9. Continuously improve the process.

  10. Communicate the results.
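A few of the steps above – selecting a metric, fixing its calculation rule, and validating the results so people believe the data – can be sketched as a small metric registry. This is a minimal illustration under assumed names and thresholds, not a description of any real metrics system.

```python
# Minimal sketch of a metric registry with validation; all names and
# validation rules here are illustrative assumptions.

metrics = {}

def register(name, calc, validate=lambda v: v is not None):
    """Step 3: select the metric and fix its calculation rule."""
    metrics[name] = (calc, validate)

def report(record):
    """Step 7: compute each metric and flag values that fail validation."""
    results = {}
    for name, (calc, validate) in metrics.items():
        value = calc(record)
        results[name] = (value, "ok" if validate(value) else "check source data")
    return results

# Example: enrollment per site, flagged if negative or missing.
register("enrollment_rate",
         calc=lambda r: r["enrolled"] / r["sites"],
         validate=lambda v: v is not None and v >= 0)

print(report({"enrolled": 240, "sites": 12}))
```

Keeping the calculation rule and the validation rule next to each other makes it harder for a metric to drift away from its agreed definition – which is exactly the trust problem step 7 is meant to address.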

In Figure 2, we have highlighted some of the metrics that we capture for our clients.


Figure 2
 

There is no question that measuring quality and productivity in the context of clinical trials performance improvement is a key differentiator for any company competing for its share of the multibillion-dollar drug market. We have to produce higher-quality drugs and devices in shorter time periods. We are faced with off-shoring and outsourcing decisions all the time. Justifying team composition and head count is part of our everyday world – as is balancing workload. It is not a question of whether we should measure, but how.

Attempts by organizations and researchers alike to define a correct path for clinical trials measurement have been fraught with complexity. There is still a lack of maturity in the world of measurement and metrics, and still no standardization of approaches. Even so, we need to spend the right amount of energy developing a metrics program that can shorten time to delivery, improve quality and continue to motivate and excite the community of people who deliver on that value every day. Deploying the right tools (e.g., business intelligence) can enable you to measure what you care about.

Afterwords:

  • WARNING: You’ll get what you measure.

  • Abraham Lincoln: “If I had eight hours to chop down a tree, I’d spend six hours sharpening my ax.”

  • Werner Karl Heisenberg: “Since the measuring device has to be constructed by the observer … we have to remember that what we observe is not nature itself, but nature exposed to our method of questioning.”

  • Lord Kelvin: If you cannot measure it, you cannot improve it.

  • Lord Kelvin: To measure is to know.

  • The Metricator's Maxim: Not all that counts can be counted; not all that can be counted counts.

  • Alexander's 1st Law of Metrics: Metrics are hard to get on projects which don't keep records.

  • Simplicity is prerequisite for reliability. (Edsger Dijkstra)

  • Program testing can be a very effective way to show the presence of bugs, but is hopelessly inadequate for showing their absence. (Edsger Dijkstra)

  • The competent programmer is fully aware of the limited size of his own skull. He therefore approaches his task with full humility, and avoids clever tricks like the plague. (Edsger Dijkstra)

 

References:

  1. Fenton, Norman E. and Pfleeger, Shari Lawrence (1998). Software Metrics: A Rigorous & Practical Approach, 2nd edition. Course Technology.
  2. Garmus, David and Herron, David (2000). Function Point Analysis: Measurement Practices for Successful Software Projects. Addison-Wesley Professional.
  3. IEEE (1989). Standard Dictionary of Measures to Produce Reliable Software. The Institute of Electrical and Electronics Engineers, New York.
  4. Jones, Capers (1991). Applied Software Measurement: Assuring Productivity and Quality. McGraw-Hill.
  5. Jones, Capers (2000). Software Assessments, Benchmarks, and Best Practices. Addison-Wesley.
  6. Kelvin, W.T. (1891-1894). Popular Lectures and Addresses.
  7. Triplett, Norman (1898). The Dynamogenic Factors in Pacemaking and Competition. American Journal of Psychology, 9, 507-533.
  8. Zuse, Horst (1995). History of Software Measurement.
  • Greg Nelson

    Greg Nelson is the Founder and Chief Executive Officer of ThotWave Technologies, the health and life sciences business intelligence company. Greg provides professional services to healthcare and biopharma organizations as well as government and academic researchers. Greg has served as the Director of Technology for the largest privately held CRO, Director of Application Development for the Gallup Organization, and a director at the University of Georgia’s computer center. He has published and presented more than 150 professional papers in the United States and Europe.

    While Greg has been a practitioner for the past 23 years, his academic roots began with a BA in Psychology from the University of California at Santa Cruz, in addition to doctoral-level work in Social Psychology and Quantitative Methods at the University of Georgia. Greg also holds a Project Management Professional certification. Greg can be reached at greg@thotwave.com.

    Editor's note: More articles, resources, news and events are available in the BeyeNETWORK's Health & Life Sciences Channel. Be sure to visit today!
