
Blog: James Taylor

James Taylor

I will use this blog to discuss business challenges and how technologies like analytics, optimization and business rules can meet those challenges.

About the author

James is the CEO of Decision Management Solutions and works with clients to automate and improve the decisions underpinning their business. James is the leading expert in decision management and a passionate advocate of decisioning technologies: business rules, predictive analytics and data mining. James helps companies develop smarter and more agile processes and systems and has more than 20 years of experience developing software and solutions for clients. He has led decision management efforts for leading companies in insurance, banking, health management and telecommunications. James is a regular keynote speaker and trainer and he wrote Smart (Enough) Systems (Prentice Hall, 2007) with Neil Raden. James is a faculty member of the International Institute for Analytics.

September 2009 Archives

I have been thinking about lift curves this week (no, really, this is the kind of thing I think about) and I thought it was worth spending a post describing them and their value.

[Image: BaselineLiftCurve.png]

Before I actually talk about a lift curve I need to give you a little background. The purpose of a lift curve is to show you how good a predictive model is. In order to do that you need a baseline - something to compare the predictive model to. This first graph, then, is not in fact a lift curve but the baseline to which we will compare a lift curve.

The graph measures the effectiveness of a model that predicts a true/false variable - let's say that a customer will not renew their subscription (it's a churn or retention model). The baseline shows me how well a random approach would do. In other words, if I ordered my customers randomly and called them one at a time, the vertical axis shows what percentage of the people who will not renew I will have found for a given percentage of customers called (the horizontal axis). The arrow shows that, for instance, once I have called 40% of my customers I will have found 40% of those who plan to cancel. The baseline can be said to represent the "monkey score" - how well a monkey might do.


A lift curve takes the baseline and imposes a curve on it that represents the performance of a model. In this case I have built a model to predict if a particular customer will renew or not. Because this is a model that does better than random the curve is above the baseline.

Each point on the curve can be read similarly. If I use this model to rank order my customers from most risky to least risky, what percentage of the non-renewers will I have found for a given percentage of customers considered? In this case the model, for instance, detects about 73% of non-renewers by the time I have considered 40% of my customers. This is obviously much better than the random approach, which would only have found 40% of the non-renewers at the same point in the process.
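To make this concrete, here is a minimal sketch of how the points on such a curve are computed. It assumes each customer has a model score (higher means more likely to churn) and a known outcome; the scores and labels below are made up for illustration, not from any real model.

```python
def cumulative_gains(scores, labels):
    """Return (pct_contacted, pct_found) points for customers ranked
    from most risky to least risky by the model's score."""
    ranked = [label for _, label in
              sorted(zip(scores, labels), key=lambda pair: -pair[0])]
    total_churners = sum(ranked)
    points, found = [], 0
    for i, label in enumerate(ranked, start=1):
        found += label
        points.append((i / len(ranked), found / total_churners))
    return points

# Toy example: 10 customers, 4 of whom will not renew (label 1),
# scored by a model that ranks reasonably well.
scores = [0.9, 0.8, 0.7, 0.6, 0.5, 0.4, 0.3, 0.2, 0.1, 0.05]
labels = [1,   1,   0,   1,   0,   0,   1,   0,   0,   0]
curve = cumulative_gains(scores, labels)

# After contacting 40% of customers (4 of 10), the model has found
# 3 of the 4 churners, i.e. 75% - versus 40% for the random baseline.
print(curve[3])  # (0.4, 0.75)
```

Plotting these points against the diagonal baseline gives exactly the kind of lift curve shown above.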

[Image: LiftCurveBoostSave.png]

So what does this mean in business terms? It means I can boost my results without increasing my costs, or I can reduce my costs without impacting my results.

If, for instance, I had the money or resources to call 40% of my customers using the random approach and I spent the same amount of money to call customers using my model, I would boost my results - instead of only reaching 40% of non-renewers I would now reach 73%. Same cost, better results. This is shown by the Boost arrow.

But maybe I think it is OK to reach 40% of my non-renewers. If this is the case I can use my model to reduce the percentage of customers I must call from 40% to about 15%. Same results, lower costs. This is shown by the Save arrow.
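Reading the Boost and Save numbers off a lift curve amounts to two simple lookups, sketched below. The `curve` values here are illustrative points matching the discussion above, not output from any actual model.

```python
# A lift curve as a list of (pct_contacted, pct_found) points.
# These values are illustrative, chosen to match the example figures.
curve = [(0.10, 0.30), (0.15, 0.40), (0.20, 0.50), (0.30, 0.63),
         (0.40, 0.73), (0.50, 0.81), (1.00, 1.00)]

def boost(curve, budget):
    """Boost: best % of churners found given we can contact `budget`."""
    return max(found for contacted, found in curve if contacted <= budget)

def save(curve, target):
    """Save: smallest % of customers to contact to find `target` churners."""
    return min(contacted for contacted, found in curve if found >= target)

# Same budget (call 40% of customers): find 73% of churners, not 40%.
print(boost(curve, 0.40))  # 0.73
# Same result (find 40% of churners): call only 15%, not 40%.
print(save(curve, 0.40))   # 0.15
```

Which lookup matters more - boost or save - is a business question, not an analytic one, which is exactly the point of the next paragraph.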

Predictive models can be used to boost results or reduce costs and which is better is going to depend on business circumstances - not least because predictive models don't DO anything, they just predict. Unless put to work in decision-making they are of no practical value.

Interestingly, my buddy Eric Siegel is giving a webinar with me on Optimizing Business Decisions - How Best to Apply Predictive Analytics next week (I am giving one on 5 core principles of Decision Management this week) - worth checking out if you want to learn more. Eric and I are also both speaking at Predictive Analytics World, a great event in DC in October. Hope to see you at one or all of these events.

Posted September 21, 2009 10:29 PM
Permalink | No Comments |

Curt Monash has been Thinking About Analytic Speed over on The Intelligent Enterprise Blog and makes some good points about the different kinds of analytic speed. One source of confusion in discussions of analytic speed that Curt does not touch on is the difference between the time to build an analytic model and the time to execute it.

When you are using analytics in a real-time, operational system (and you should be) there is a big difference between building a new model in real-time and executing an existing model (scoring the transaction) in real time. Many systems require that you calculate the value of a predictive model for the current transaction in real-time so you can use it - how likely is this transaction to be fraudulent, what's the retention risk of this inbound customer - but many of these models can be built offline.

You can harness offline processing power to build models, crunching lots of data and trying many different algorithms before deploying the result of all this work as a simple-to-execute element of a decision - it is often just a few rules, an additive scorecard or a formula. Just because you need the result in real time does not mean you need to figure out the math in real time. Something else to bear in mind when worrying about analytic speed.
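The offline/online split can be sketched in a few lines. The additive scorecard below is purely illustrative - the fields, thresholds and weights are invented for the example - but it shows the key point: whatever compute went into training, what runs in the transaction path is just a handful of lookups and an addition.

```python
# --- Offline: hours of training and algorithm comparison distill ---
# --- down to a simple additive scorecard like this one.          ---
# Fields, thresholds and point weights are illustrative only.
SCORECARD = [
    ("tenure_months", lambda v: v < 6,  25),  # new customers churn more
    ("support_calls", lambda v: v > 3,  30),  # frustrated customers
    ("auto_renew",    lambda v: not v,  20),  # no auto-renew = riskier
]

# --- Online: scoring a transaction in real time is trivial. ---
def churn_score(customer):
    """Sum the points of every scorecard rule that fires."""
    return sum(points for field, test, points in SCORECARD
               if test(customer[field]))

customer = {"tenure_months": 3, "support_calls": 5, "auto_renew": True}
print(churn_score(customer))  # 55 - the tenure and support-call rules fire
```

The expensive part - choosing those rules and weights - happened offline; executing them is fast enough for any operational system.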

If this is a topic that interests you, why not come to Predictive Analytics World next month and hear me and a bunch of other interesting people tell you all about it?

Posted September 14, 2009 7:53 PM
Permalink | No Comments |

I have been following the recent IBM announcements on analytics closely and have been struck by the increasingly decision-centric point of view being expressed. First there was the business analytics and optimization announcement with its focus on "action support" not "decision support". The new analytic appliances, with their focus on making it easier to make better decisions, were next, and now there is information-led transformation. This latest focus area talks about optimizing every transaction, process and decision at the point of impact, without requiring that everyone be an analytical expert. Embedding executable analytics - predictive analytic models - into transactional systems (a key element of decision management) is clearly critical to this vision. Similarly the focus on "micro optimization" and on pervasive, predictive real-time decisions at the point of impact meshes well with Decision Management's focus on automating and improving micro-decisions.

This is great news for those of us focused on decisioning. Clearly the drive to make predictive analytics more pervasive and the need to make data-driven operational decisions is pushing more and more companies to consider decision management. Add business rules into the mix, to handle compliance and the last mile of automation, and the picture will come ever more clearly into focus.

Posted September 10, 2009 12:50 AM
Permalink | No Comments |