Blog: James Taylor

James Taylor

I will use this blog to discuss business challenges and how technologies like analytics, optimization and business rules can meet those challenges.

About the author

James is the CEO of Decision Management Solutions and works with clients to automate and improve the decisions underpinning their business. James is a leading expert in decision management and a passionate advocate of decisioning technologies: business rules, predictive analytics and data mining. James helps companies develop smarter and more agile processes and systems and has more than 20 years of experience developing software and solutions for clients. He has led decision management efforts for leading companies in insurance, banking, health management and telecommunications. James is a regular keynote speaker and trainer, and he wrote Smart (Enough) Systems (Prentice Hall, 2007) with Neil Raden. James is a faculty member of the International Institute for Analytics.

July 2010 Archives

I recently wrote an article for the IIA on decisions, decision management and analytics. This was prompted by Tom Davenport's recent interview in the MIT Sloan Management Review, "Reengineering Your Decision-Making Processes," about analytics and how companies make decisions. This interview also prompted Boris Evelson of Forrester to write a blog post on decision management as possibly the last frontier in BI. Boris made a couple of excellent points in his post.

First he pointed out that, while companies should consider decision making something they should understand and systematically improve, not all decision making is the same. Decisions can be divided into those that are fairly structured and follow well-defined rules and approaches, and those that are more unstructured and collaborative. Structured decisions tend to lend themselves to precise descriptions of how to make the decision and to repeatable analytics. Collaborative or unstructured decisions, in contrast, tend to lend themselves to exploration and visualization tools. Decisions can also be divided into automated and manual decisions.

Now, some time ago Neil Raden and I did some work on the characteristics of decisions. Boris's collaborative/structured division combines two of them - the approach to making the decision and how repeatable the decision is. Other characteristics that really matter when it comes to deciding how to automate or support decisions include how measurable the decision is, how long it takes to see whether you made a good decision or a bad one, and how much difference there is between a good one and a bad one.

Whether you currently automate a decision or not, it seems to me, is a more transient characteristic of a decision - a consequence of other, more fundamental ones. Companies should not be dividing up their decisions into manual and automated so much as conducting a decision audit or decision discovery to understand what decisions they have, so they can make the right automation and decisioning technology choices.
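A decision audit of this kind can be captured in a simple inventory. Here is a minimal sketch in Python - the Decision fields follow the characteristics discussed above, but the class, the field names and the thresholds in suggest_approach are all illustrative assumptions, not a prescribed rule:

```python
from dataclasses import dataclass

# Hypothetical decision-audit record; fields mirror the decision
# characteristics discussed above.
@dataclass
class Decision:
    name: str
    repeatability: float        # 0 (one-off) .. 1 (highly repeatable)
    measurability: float        # 0 (hard to measure) .. 1 (easily measured)
    time_to_outcome_days: int   # how long until you know if it was good or bad
    value_spread: float         # value difference between a good and bad decision

def suggest_approach(d: Decision) -> str:
    # Illustrative thresholds only: repeatable, measurable decisions
    # are candidates for automation; one-off decisions for exploration tools.
    if d.repeatability > 0.7 and d.measurability > 0.7:
        return "automate with rules and repeatable analytics"
    if d.repeatability < 0.3:
        return "support with exploration and visualization tools"
    return "decision support with human review"
```

The point is not the thresholds but the exercise: once decisions are inventoried with their characteristics, the automation choice falls out of the fundamentals rather than the current manual/automated split.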

The importance of ongoing measurement and analysis, however, is an area where Boris and I are in strong agreement. The three phases of decision management are decision discovery to find the decisions that matter, decision services to build components to handle those decisions and then decision analysis to ensure that you continue to improve decisions over time.

As Boris points out, this last one is critical. If you don't track the results of decisions you will never know what works and what does not. This is part of the reason I think it is so important to map decisions to Key Performance Indicators or KPIs, so that you understand how each decision contributes to the measures that matter to you. Beyond tracking, though, if you don't create a feedback loop so that you can improve decisions based on these results, your decision making will stagnate. And stagnant decision making actually deteriorates: a good decision is good only in a context, and that context changes continually.

I would add that experimentation is also important. You need an ability to create challengers to your current decision making approach, test them on some decisions and compare results to see if a new approach would be preferable. If you look at companies successfully using analytics they have all of these - good decision results tracking, a formal feedback loop to keep improving a decision, and an ability to challenge existing decision making with new and innovative approaches.
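Champion/challenger experimentation of this kind can be sketched in a few lines of Python. Everything here is a hypothetical illustration - the two strategy rules, the customer fields, the 10% challenger share and the use of "profit" as a stand-in for whatever KPI the decision maps to:

```python
import random

def champion(customer):
    # Current approach: approve if the score clears a fixed bar.
    return customer["score"] >= 650

def challenger(customer):
    # Candidate approach: lower bar, but require a low debt ratio.
    return customer["score"] >= 600 and customer["debt_ratio"] < 0.4

def run_experiment(customers, challenger_share=0.1, seed=42):
    """Route a small share of decisions to the challenger and
    track each arm's results so they can be compared."""
    rng = random.Random(seed)
    results = {"champion": [], "challenger": []}
    for c in customers:
        arm = "challenger" if rng.random() < challenger_share else "champion"
        decision = (challenger if arm == "challenger" else champion)(c)
        # Record the outcome of the decision against the KPI it maps to.
        outcome = c["profit"] if decision else 0.0
        results[arm].append(outcome)
    # Average outcome per arm tells you whether the challenger should
    # become the new champion.
    return {arm: sum(v) / len(v) if v else 0.0 for arm, v in results.items()}
```

The essential ingredients are exactly the three named above: results are tracked per decision, they feed back into a comparison, and a challenger is tested on a small slice of real decisions before it replaces the incumbent.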


Posted July 28, 2010 4:30 PM

Some time ago I was at a warranty conference and there was an interesting discussion about registration cards. You know, those postcard-sized mailers you are asked to return to register your product. They often have all sorts of demographic and interest questions - asked by the company to flesh out its 360-degree view of its customers. One of the speakers was asked about this and he argued that, in fact, companies should ask for the absolute minimum information on these cards. This would, he said, increase response rates and would have little or no effect on the value of the data, because all the demographic data could be purchased anyway once you had the list of customers and some basic information about them. In other words, companies were identifying fewer customers because they were worrying too much about the amount of information they had about those customers. I took a couple of lessons away from this.

First, always consider the potential for external data to improve an internal process. Just because you want some data does not mean you have to ask the customer for it. Buying external data and integrating it might be more cost-effective. And you might find you can infer the data analytically too, using historical records like purchases or returns to derive customer characteristics like preferences or approach to online purchasing.
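As a hypothetical illustration of that last point, a few lines of Python can derive simple customer preferences from purchase history rather than asking for them on a card. The record fields ("category", "channel") and the derived attributes are assumptions for the example:

```python
from collections import Counter

def infer_profile(purchases):
    """Derive simple customer characteristics from purchase records
    instead of asking for them on a registration card."""
    if not purchases:
        return {"favorite_category": None, "prefers_online": None}
    # Most frequently purchased category stands in for stated preference.
    categories = Counter(p["category"] for p in purchases)
    # Share of online purchases stands in for approach to online buying.
    online = sum(1 for p in purchases if p["channel"] == "online")
    return {
        "favorite_category": categories.most_common(1)[0][0],
        "prefers_online": online > len(purchases) / 2,
    }
```

Real analytic derivation would of course be richer than frequency counts, but the principle is the same: the data you would have asked for may already be implicit in records you hold.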

Second, it reminded me of the importance of beginning with the decision in mind. Too often I see companies embarking on data integration and quality initiatives designed to improve all their data - presumably so they can make better decisions - without really thinking through what those decisions are. If you begin, instead, with the decision, then you might find that you only need some of your data integrated, that some of it is good enough to make the decision (even though it is pretty dirty), or that some of the data you need has to be sourced from outside the company anyway. If you don't know which decision you wish to make or improve, then you can't know which data is truly important.


Posted July 28, 2010 12:55 PM

IEEE ICDM Contest: Road Traffic Prediction for Intelligent GPS Navigation

Over the last century, the number of cars in city traffic has increased rapidly, causing many difficulties for citizens: traffic jams, long and unpredictable delays, pollution, and so on. Excessive traffic has become a problem that affects everyone who lives in a city of 50,000 people or more, anywhere in the world. The processes behind traffic flow are so complex that only data mining algorithms may bring efficient solutions to these problems.

The task of this year's ICDM Data Mining Contest is to predict city road traffic for the purpose of intelligent driver navigation and improved journey planning. The three contest problems relate to congestion forecasting, modeling of traffic jams, and smart navigation based on real-time GPS monitoring. Datasets come from a highly realistic simulator of city traffic, the Traffic Simulation Framework. The competition is organized on the TunedIT Challenges data mining platform by a team of researchers from the University of Warsaw's Faculty of Mathematics, Informatics and Mechanics. Prizes worth $5,000 in total will be awarded to the winners.

Everyone is welcome to participate. The competition starts now and will last until September 6, 2010.

More details here

Posted July 6, 2010 7:46 AM