Measuring Intangibles: Breaking Down Analytic Barriers

Originally published September 3, 2009

Much of performance management depends on measurement, and many of the measures are financial. But financial measures often fail to have real impact on business performance. With the Balanced Scorecard, Kaplan and Norton sought to guide us away from predominantly financial measures and toward a balanced collection of metrics for finance, process, customer and people.1 Yet we have a tendency to migrate back to financial measures. One widely used customer metric, for example, is customer lifetime value (CLV). CLV, no matter how often we place it in the customer quadrant of a scorecard, is really more about finance than about customers. Just consider the definition of CLV – the present value of future profit from a customer relationship – measuring customers in units of dollars.

The problem is that CLV tells us nothing about customer behaviors; thus there is no opportunity to understand or to influence those behaviors. This problem isn’t unique to customers. The same tendency to drift back to financials is found in other quadrants of the scorecard. A predominant process measure is cost-of-rework – actually a financial measure that tells us little about process effectiveness and certainly offers no clues about the causes of rework. Similarly, in the people quadrant, we find financial measures such as cost-of-hiring and cost-of-retention.

So what is really happening here? Why are we so quick to revert to financial measures? I believe we do this because so many of the things that we really need to measure seem to be especially difficult to measure. So we too often measure what is easy instead of what is needed. But only measuring the easy things may have a very high cost. The often-quoted Tom Peters line “what gets measured gets done” underscores the importance of measuring the right things. Peters implies that if you’re measuring the wrong things, then you are likely to be doing the wrong things. He also suggests that to do the right things, you need to measure the right things.

Measuring the Right Things

It is difficult to know what things are the right things to measure without first considering the purpose of measurement. Andy Neely describes four reasons to measure – to check position, to communicate position, to confirm priorities and to compel progress.2 All four of these reasons have a place in business analytics, but it is the last two – priorities and progress – that belong to performance management. And it is through the motivators of priorities and progress that we find high-impact analytics that are purposeful, insightful and actionable.

The right things to measure, then, are those things that help to affirm priorities and to drive progress toward goals. To do so, we need to change the focus from measuring outcomes to measuring influences. Outcome-based measurement uses lagging indicators, which help to monitor past performance but contribute little to managing future performance. Measuring influences, on the other hand, uses leading indicators – predictors of future performance and leverage points to manage the future.

The Trouble with Influence Measurement

The challenge of influence-based measurement is that influences are typically less tangible than outcomes. It is easy to count dollars in an account or widgets in inventory. Measuring profit, ROI, ROA and the like is straightforward and well defined. Influences are less concrete – less easily counted – than outcomes. They are less tangible, and intangibles are much more difficult to quantify. How do you count customer satisfaction? How do you quantify employee morale? How can you measure trust, confidence, knowledge, skill, innovation, pride and relationships?

To measure is to quantify. Bernard Marr describes measurement as “the assignment of numerals to represent properties.”3 Thus, to measure intangibles we need a process by which we can assign numerals to represent the properties of intangible things. Figure 1 illustrates a simple four-step process of context, definition, collection and application. It should not be surprising that much of the work lies in the definition step.


Measurement Context

Establishing context is the essential first step for measuring intangibles. Context helps to determine measurement requirements. It is a guide to the degree of consistency and accuracy that the measures need to satisfy. Remember that we’re assigning numbers to things that are not naturally and intuitively counted. There will always be some degree of subjectivity and some level of uncertainty associated with measures of intangibles. But subjective and uncertain do not imply untrustworthy or without value. The measures need to be sufficiently trustworthy to satisfy their purpose.

When setting measurement context ask these questions:
  • What is the purpose of measurement?

    It is likely that measures intended to drive change demand a much higher level of certainty than those that are used simply to check position. Consider a continuum of five levels of purpose:

    1. Check present position

    2. Set goals

    3. Affirm priorities

    4. Forecast future position

    5. Create change
Clearly, as you progress from level one to level five, the impact of decisions made on the basis of the measures increases. Impact is a significant consideration when making decisions about objectivity and rigor of measurement.
  • How objective do the measures need to be?

    With the purpose of measurement understood, the next consideration is the degree to which measures are verifiable, free of bias and not influenced by emotion. It is quite natural to assume that verifiable, non-biased and unemotional measures are always of greatest value. That assumption, however, isn’t always true when working with intangibles. Measures of human perception such as customer satisfaction and employee morale, for example, are certain to lose value when bias and emotion are removed. Nor are verifiable measures always the ideal. Just consider the variation in responses to an anonymous survey versus one that is traceable to respondents.
  • What rigor of measurement is necessary?

    A target level of objectivity quite naturally leads to consideration of rigor – the level of discipline in a measurement process and the areas to which that discipline is applied. Every measurement process involves rigor, but the degree and focus of rigor vary depending on measurement needs.

    When measures need to be verifiable, then measurement processes demand a high level of discipline, structure and standardization to support traceability. If verification isn’t a strong consideration, these factors become less significant.

    Other factors come into play with measures of perception. Certainly we want some bias, but not all bias. The bias represented in individual responses or observations is desirable; it reflects the real perceptions of the individuals who are measured or observed. But care must be taken not to introduce undesirable bias – that which is created by poorly worded questions, agenda-driven observers or non-representative populations.
  • How uniform do the measures need to be?

    Consider the frequency of measurement and the probability that you will want to compare similar measures from different time periods. What level of consistency is needed across measurement cycles? What degree of variation is acceptable?
Consider also the possibility of measuring different populations – customers in different geographic locations or of different age groups, for example. How likely is it that you’ll want to combine, compare or contrast measures from multiple populations? Again, what level of consistency is needed?

Measurement Definition

With the requirements derived from context – purpose, objectivity, rigor and uniformity – you are now ready to define the measures. Measurement definition is a process of identifying several components of each measure. Figure 2 illustrates process results with an example.


With context as a guide, measurement definition answers six questions:
  • What is the subject of measurement?

    To answer this question, simply identify the “thing” to be measured – that which data modelers know as entities. Typical measurement subjects include customers, employees, products, competitors, etc. Measurement subjects are real and tangible things. We haven’t yet made the shift to intangible.

    In the example shown in Figure 2, the subject is customer.
  • What are the qualitative properties of the subject to be measured?

    Here we want to identify the characteristics of the subject that need to be measured. Don’t attempt at this point to describe how they will be quantified. Simply give names to the properties. This is the point where we make the shift from tangible to intangible.

    Figure 2 shows two examples of qualitative properties – customer satisfaction and customer loyalty.
  • What are the quantitative properties of the qualities to be measured?

    The real question here is: What needs to be quantified about each property? Keep in mind Bernard Marr’s definition of measurement: “the assignment of numerals to represent properties.” The answers to this question come in the form of things to which numbers can be assigned – size, intensity, magnitude, degree, strength, volume, duration, etc.

    The example shows the quantitative properties level of customer satisfaction and strength of customer loyalty. There is little doubt now that we’ve entered the intangibles zone.
  • What are the measures for each quantitative property?

    Determining measures is often the hardest part of working with intangibles. This is the point where you decide what kind of numbers to assign in the “assignment of numerals” activity. It isn’t easy, but it is much less difficult if the previous seven questions – four from context and three from definition – have been carefully considered.

    The options for assigning numbers to things that are not readily counted are really quite limited:

    • Indirect measures can be used when one set of measures provides the means to derive other measures. A common example of indirect measurement in physics is to derive the height of an object when the length of shadow and angle of light are known. In business, you might indirectly measure expected customer retention rate by deriving from direct measures of customer tenure and competitor pricing. Indirect measures work well for intangibles when they need to be objective and verifiable.

    • Subjective ranking or rating is useful to measure human perceptions. Ranking asks that a set of items be arranged into an ordered list. Rating asks that a set of items be evaluated against a predefined scale such as a 1 to 5 scale that rates service from poor to excellent. Subjective measures are clearly not objective and may be weak in terms of verification. They are well suited to those intangible measures where there is a need to include human emotion and bias.

    • Proxy measures use a single tangible measure as a substitute for an intangible. You might say, for example, that customer attrition rate is a pretty good indicator of customer satisfaction. Lost customers can be counted, from which we can make some assumptions about customer satisfaction. Proxy measures are a relatively easy way to get a quick assessment of intangibles. The accuracy of that assessment, however, is questionable because the proxy method is inherently incomplete and imprecise.

    • Composite measures use a combination of multiple tangible measures to determine an index value for an intangible. We might produce a customer satisfaction index, for example, as a combination of customer attrition rate, customer retention rate, complaint frequency, service call frequency and customer longevity.
The example in Figure 2 shows two measures. Customer satisfaction is measured as a customer satisfaction score (CSS), determined using subjective ratings obtained from customers. Customer loyalty uses a composite measure called recency-frequency-monetary index (RFMI). RFMI is derived by examining purchase history. We can objectively measure how recently a customer has purchased, how frequently they have purchased, and the monetary value of each purchase transaction. From these measures, RFMI is derived as a loyalty measure.
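The two measures described above can be sketched in code. This is a minimal illustration, not the article's actual formulas: the CSS is simply the mean of subjective 1-to-5 survey ratings, and the RFMI blends recency, frequency and monetary scores using hypothetical weights and caps.

```python
from datetime import date
from statistics import mean

def css(ratings):
    """Customer satisfaction score: mean of subjective 1-5 survey ratings."""
    return round(mean(ratings), 2)

def rfmi(purchases, today, weights=(0.4, 0.3, 0.3)):
    """Recency-frequency-monetary index from one customer's purchase history.

    purchases: list of (purchase_date, amount) tuples.
    Each component is scored on a 0-1 scale, then blended with weights.
    The weights and caps below are illustrative assumptions.
    """
    recency_days = (today - max(d for d, _ in purchases)).days
    recency = max(0.0, 1 - recency_days / 365)                # 1.0 = bought today
    frequency = min(1.0, len(purchases) / 12)                 # cap at 12 purchases/year
    monetary = min(1.0, sum(a for _, a in purchases) / 1000)  # cap at $1,000/year
    w_r, w_f, w_m = weights
    return round(w_r * recency + w_f * frequency + w_m * monetary, 3)
```

The contrast between the two functions mirrors the contrast in the article: `css` assigns numerals to subjective ratings, while `rfmi` is a composite derived entirely from objective, verifiable transaction data.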
  • What dimensions are needed for analysis?

    Dimensions provide the attributes for or coordinates of analysis. These are the characteristics by which we segment and summarize data for evaluation and comparison. All of the dimensions familiar to anyone working with analytic data apply here – time, geography, organization, customer, product, etc.

    The example identifies time, product, customer age group, customer gender, and geographic location as the dimensions.

  • To what are the measures compared?

    Here we identify the comparators that are the reference points by which a measure becomes information – targets, thresholds, limits, trends, etc. To fully define any measure, we must know how and to what it will be compared. No number has meaning except in relation to some other number or numbers.

    To state that I’m traveling at a speed of 45 miles per hour, for example, provides no real information. Am I in a car, in an airplane or on a bicycle? On the highway or the sidewalk? What is the maximum safe speed? What is the minimum safe speed? Am I accelerating, decelerating or maintaining constant speed?

    The example has target, minimum threshold, prior period value, six-month trend and year-over-year comparison as the comparators.
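The role of comparators can be sketched as follows. The function below is a simple illustration (the field names and comparator set are assumptions, not the article's Figure 2 definitions): it turns a raw measure into information by relating it to a target, a minimum threshold and a prior-period value.

```python
def interpret(value, target, minimum, prior):
    """Give a raw measure meaning by comparing it to its reference points.

    The comparators here (target, minimum threshold, prior period) are a
    subset of those named in the article; the output keys are illustrative.
    """
    return {
        "value": value,
        "vs_target": value - target,                     # gap to goal
        "breach": value < minimum,                       # below acceptable threshold?
        "trend": "up" if value > prior else "down" if value < prior else "flat",
    }
```

A CSS of 3.8 on its own says little; `interpret(3.8, 4.0, 3.0, 3.6)` says we are 0.2 below target, above the minimum threshold, and improving on the prior period.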

Measurement Collection

Measurement definition identifies the set of data that needs to be captured for each measurement. Collection is the work of gathering that data. The quantifying data (the numbers) must be collected together with all of the data that constitutes a complete measure – dimensional data, identifying data and essential metadata. Figure 3 continues the earlier example of measuring customer satisfaction (CSS) and loyalty (RFMI) to illustrate data collection considerations.


Measurement collection involves six decision points and design considerations:
  • What population will be measured?

    Population refers to the group of things for which measurement data is collected – which customers, which transactions, etc. To determine the population, you’ll need to consider issues such as:

    • Sampling – Do you need to measure all occurrences of a subject or a representative subset of the population? You might, for example, select a 10% random sample of all customers.

    • Segmentation – Do you need to select a specific subset of the population based on demographics or other characteristics of the occurrences? For example, you may need to measure only those customers who fall into specific age groups.

    • State – Will you exclude some portion of the population based on status? Excluding inactive accounts is an example of state-based selection.

    • Time Span – Will you limit the population to only those occurrences that fall within a prescribed span of time? Selecting only those transactions that occurred within the past 12 months is an example.
Determining population may use any of these criteria and may even use them in combination – for example, select a 2% random sample of transactions from the past 90 days.

Figure 3 illustrates population criteria for the example metrics of customer satisfaction score (CSS) and recency-frequency-monetary index (RFMI).
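The combined selection criteria above can be sketched in code. This is an illustrative sketch of the "2% random sample of transactions from the past 90 days" example, with a state criterion added; the transaction schema and fixed seed are assumptions.

```python
import random
from datetime import date, timedelta

def select_population(transactions, today, sample_rate=0.02, days=90, seed=42):
    """Select a random sample of recent, active transactions.

    transactions: list of dicts with 'date' and 'status' keys (assumed schema).
    Combines three of the article's criteria: time span, state, and sampling.
    """
    cutoff = today - timedelta(days=days)
    eligible = [t for t in transactions
                if t["date"] >= cutoff and t["status"] == "active"]  # time span + state
    rng = random.Random(seed)                  # fixed seed for a repeatable sample
    k = max(1, round(len(eligible) * sample_rate))
    return rng.sample(eligible, k)
```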
  • What is the source of measurement data?

    Here the purpose is to identify the best supplier of measurement data. There are many considerations in data sourcing, but paramount among them are the requirements that you have established for objectivity and verification. With these guiding requirements, you may choose among a variety of sources including surveys, systems and databases. It is important that the source is able to provide all of the essential measurement data including quantity, identity, dimensionality, metadata and any data elements needed to select the population.

    Figure 3 shows two distinctly different choices for the example measures CSS and RFMI. CSS uses a customer satisfaction survey that is appropriate for a measure where high levels of objectivity and verification are not required. RFMI uses a data warehouse as the source of purchase transactions. The data warehouse is assumed to be a good source of objective data that is identifiable and readily traced to the point of origin.

  • What is the method of data collection?

    Now that we know what data needs to be collected and from where, we turn attention to how the data will be collected – the specific techniques and mechanics of data gathering. The three methods most readily applied to intangible measurement are observation, survey and instrumentation.

    • Observation involves a person physically watching events or activities and keeping a count of the property of interest. Observation is common at polling places during elections, in traffic-flow analysis and with similar kinds of applications. Observation is labor intensive and may be subject to the biases of the observers and to the inaccuracies of human error. It can work well for one-time or occasional measurement of intangibles, but is impractical for recurring and frequent measurement.

      Consider the traffic-flow example. Observation might include people at a high-traffic intersection who are tasked to count the number of cars that get through each green light, the number of green lights that each driver must endure while stopped and the number of lane changes attempted in a gridlock situation.

    • Surveys are a form of self-reporting that is most commonly used to measure human perceptions and emotions. Surveys are certainly subject to biases of the respondents, but that is a positive quality when measuring intangibles such as satisfaction, morale and commitment. When measures need to be verifiable, then surveys must include a means to identify respondents. It is important to note, however, that identifiable surveys may experience a skew of responses that is not present in anonymous surveys. One important consideration when using surveys is to avoid bias in the survey itself by eliminating or correcting leading or deceptive survey questions. Developing and administering surveys is a skill that must attend to quality of the survey, means of distribution, rate of response, bias introduced by partial responses, and analysis and interpretation of results.

      Again, consider the traffic-flow example. A survey might ask local residents to self-report their experiences with stops at green lights, length of wait at the intersection, difficulty with lane changes, etc. Both the measurement population and the measurement data will be distinctly different from that of observation.

    • Instrumentation uses a mechanical or technological means to collect measurement data. Measurement instruments range from simple mechanical devices such as a timer or counter to databases and computer programs. Many instruments are self-contained, such as a data extract program or a counter at a website. Other instruments are intended to work together with observation, such as a stopwatch and a thumb-click counter when observing some activity taking place. Self-contained instruments are often the most effective way to achieve high levels of objectivity and verification. Instruments can be designed to eliminate bias and to capture tracing metadata.

      Once again, think about measurement of traffic flow. Instead of human observers or a survey of local residents, let’s use a traffic counter (the pressure-sensitive black tube that is placed across traffic lanes) working in conjunction with traffic lights. This allows collection of objective data to know about the numbers of cars passing through each green light as well as the duration of each green. It does not, however, report data about the length of wait before a car passes through the intersection, the number of cars queued and waiting, or the lane change activity that occurs.
Clearly, choice of method of data collection has significant implications for the characteristics and the quality of measurement data. No single method is the one-and-only “right” method. Each has strengths and pitfalls.

Figure 3 illustrates the choice of methods for CSS and RFMI. CSS, with low level of requirement for objectivity, uses a customer satisfaction survey. RFMI, which requires both objectivity and verifiability, uses instrumentation to extract data from a database and calculate index values.
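The traffic-counter scenario above can be sketched as a small instrumentation program. Assuming the counter emits a timestamp (in seconds) for each car crossing the tube, and the traffic light reports its green intervals, the two streams combine into an objective count of cars per green:

```python
from bisect import bisect_left, bisect_right

def cars_per_green(pulse_times, green_intervals):
    """Cars passing through each green light, from tube-counter pulses.

    pulse_times: sorted seconds at which a car crossed the counter.
    green_intervals: (start, end) seconds when the light was green.
    Pulses falling outside any green interval are simply not counted.
    """
    return [bisect_right(pulse_times, end) - bisect_left(pulse_times, start)
            for start, end in green_intervals]
```

Note what the instrument captures and what it misses, just as the article observes: counts per green are objective and verifiable, but wait times, queue lengths and lane changes leave no trace in this data.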
  • What is the timing of the measurement?

    Here we are concerned with how frequently measurement data is collected and at what points in time data is collected. Timing may be associated with the calendar (daily, weekly, monthly, etc.), with business cycles (sales cycle, accounting cycle, etc.) or with business events (site visit, purchase transaction, etc.). Timing influences both method of data collection and quality of data.

    Also consider the intended use of the data, especially the need for time-based analysis. If year-over-year comparison is required, then measures must be synchronized on an annual basis. If trend analysis is a requirement, then even intervals are important to support time-series analysis.

    The example illustrates measurement timing for CSS and RFMI.
  • What identity data is needed?

    Here the interest is in collecting any identifying data that is needed to support traceability and verification of measurement data. Where verification is important, so too is identity. When verification is not demanded or desired, then identity is less crucial.

    The example illustrates identity data choices for CSS and RFMI including explicit statement of anonymous surveys for CSS.
  • What metadata is needed?

    Finally, data collection needs to be concerned with metadata collection. The point of data capture is the only opportunity to also capture certain process metadata such as time of measure, location of measure and other metadata items that may be needed to assess or affirm data quality. Critical metadata for tracing of measurement lineage is also collected as part of the measurement process.

    The example illustrates metadata capture requirements for CSS and RFMI.
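Taken together, the collection decisions above imply that each captured measurement is more than a number. A minimal sketch of such a record, with illustrative field names (not a schema from Figure 3):

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class Measurement:
    """One complete measure: the number plus everything needed to use it."""
    measure: str          # e.g. "CSS" or "RFMI"
    value: float          # the quantifying number
    dimensions: dict      # time, product, geography, ...
    identity: dict = field(default_factory=dict)  # empty for anonymous surveys
    metadata: dict = field(default_factory=dict)  # capture time, source, lineage

# Example record for an anonymous satisfaction survey response:
m = Measurement(
    measure="CSS",
    value=4.2,
    dimensions={"period": "2009-Q3", "region": "Northwest"},
    identity={},          # anonymous: no identity data captured
    metadata={"captured_at": datetime(2009, 9, 3, 12, 0).isoformat(),
              "source": "satisfaction survey"},
)
```

The empty `identity` dict reflects the CSS design choice in the example; a verifiable measure like RFMI would instead carry customer and transaction identifiers.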

Applied Measurement

Without question, measuring intangibles is difficult. But it is also necessary if we are to break through the analytic barriers created by measuring tangibles alone. Measuring only tangibles means that we see lagging indicators – we can monitor performance but cannot actively manage it based on measures. Measuring intangibles makes the leap to leading indicators, and correspondingly advances capabilities from performance monitoring to performance management. From goal setting to goal attainment, it is the intangibles that truly drive business performance.

End Notes:
  1. The Balanced Scorecard, Kaplan and Norton, Harvard Business School Press, 1996.
  2. Business Performance Measurement, Neely, Cambridge University Press, 2002.
  3. Strategic Performance Management, Marr, Elsevier Ltd., 2006.

  • Dave Wells

    Dave is actively involved in information management, business management, and the intersection of the two. He provides strategic consulting, mentoring, and guidance for business intelligence, performance management, and business analytics programs.
