
William McKnight

Hello and welcome to my blog!

I will periodically be sharing my thoughts and observations on information management here in the blog. I am passionate about the effective creation, management and distribution of information for the benefit of company goals, and I'm thrilled to be a part of my clients' growth plans and to connect what the industry provides to those goals. I have played many roles, but the perspective I come from is benefit to the end client. I hope these entries can be of some modest benefit to that goal. Please share your thoughts and input on the topics.

About the author

William is the president of McKnight Consulting Group, a firm focused on delivering business value and solving business challenges utilizing proven, streamlined approaches in data warehousing, master data management and business intelligence, all with a focus on data quality and scalable architectures. William functions as strategist, information architect and program manager for complex, high-volume, full life-cycle implementations worldwide. William is a Southwest Entrepreneur of the Year finalist, a frequent best-practices judge, has authored hundreds of articles and white papers, and given hundreds of international keynotes and public seminars. His team's implementations from both IT and consultant positions have won Best Practices awards. He is a former IT Vice President of a Fortune company, a former software engineer, and holds an MBA. William is the author of the book 90 Days to Success in Consulting. Contact William at wmcknight@mcknightcg.com.


July 2011 Archives

What do you think about when you hear the term "self-service"? To some, it's a positive term connoting the removal of barriers to a goal. I can, for example, go through the self-service checkout line at the grocery store, limited only by my own scanning (and re-scanning) speed in getting out the door. However, as we've seen with some chains eliminating self-service lines recently, self-service is not always desired by either party. To others, "self-service" is a negative term, euphemistically meaning "no service" or "you're on your own."

In their excellent report, "Self-Service Business Intelligence: Empowering Users to Generate Insights," Claudia Imhoff and Colin White define self-service BI as "the facilities within the BI environment that enable BI users to become more self-reliant and less dependent on the IT organization."

A poorly built data warehouse is little more than a copy of operational data, only lightly remodeled and usually carrying many of the same data quality flaws as the source. It solves a big problem - making the data available - but beyond that copy of the data, the fun begins: each new query becomes a new adventure into data sources, tools, models, etc. What inevitably happens in some environments is that users take what they need, as if it were raw data, and do the further processing required for their business department or function.

This post-warehouse processing would frequently be very valuable to the rest of the organization - if only the organization could get access to it. However, data that is generated and calculated after the data warehouse has little hope of reaching any kind of shared state. Such a data warehouse is not ready for self-service BI.

According to Imhoff and White, the BI environment needs to achieve four main objectives for self-service BI:

1. Make BI tools easy to use

2. Make BI results easy to consume and enhance

3. Make DW solutions fast to deploy and easy to manage

4. Make it easy to access source data

To achieve these goals, you need a solid foundation and solid processes. Take stock of your BI environment. While IT and consultancy practices have coined "self-service business intelligence" to put some discipline around the idea of user empowerment, some of it is mere re-labeling of "no service" BI, and it neither maintains a healthy relationship with the user community nor healthily exploits the data the systems produce. We all know that IT budgets are under pressure, but this is not the time to cut the vital support services that maintain multi-million-dollar investments.


Posted July 28, 2011 7:07 PM

I was part of one of the pioneer credit card fraud detection projects. It was at Visa and, together with all the similar projects taking advantage of early-stage data mining throughout the financial industry at about the same time, it drove credit card fraud down dramatically to all-time lows. In recent years, as the technology has changed, fraud has increased once again. The financial industry now has online fraud to deal with, in addition to the ramifications of identity theft and the card skimming that was once in decline. Employees, too, are compromising the data they come into contact with.

Mass compromises occur routinely since thieves can divide and conquer - some focus on getting the card numbers while others commit the fraud. There is a robust, efficient black market for card numbers. Consider the huge breach at Heartland Payment Systems in 2009. Fraud is committed with the detection systems in mind: attacks often come in "blitz" mode, overwhelming the system before it has a chance to react and stop the transactions.
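To make the "blitz" tactic concrete, here is a minimal sketch of the kind of velocity check a detection system can run to catch a burst of charges on one card. It is my own illustration, not any vendor's actual logic, and the window and threshold are arbitrary assumptions.

```python
from collections import deque
from datetime import datetime, timedelta

class VelocityCheck:
    """Flag a card when too many authorizations arrive in a short window.

    Illustrative only - real systems tune windows and thresholds per
    card, per merchant category, and per historical behavior.
    """

    def __init__(self, max_txns: int = 5, window: timedelta = timedelta(minutes=10)):
        self.max_txns = max_txns
        self.window = window
        self.recent: dict[str, deque] = {}  # card number -> recent timestamps

    def is_suspicious(self, card: str, ts: datetime) -> bool:
        q = self.recent.setdefault(card, deque())
        q.append(ts)
        # Drop timestamps that have aged out of the window.
        while q and ts - q[0] > self.window:
            q.popleft()
        # More transactions inside the window than allowed: blitz candidate.
        return len(q) > self.max_txns
```

The point is the asymmetry: a blitz has to outrun the pruning loop above, so the detection window itself becomes the thing criminals try to stay under.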

A recent Ovum study of 120 banks found that counterfeit card fraud is the top issue, with wire fraud second. Card readers can be purchased much more easily (e.g., as iPhone attachments), and the number of cards has proliferated, increasing the potential for fraud. While the UK has adopted "chip and PIN" technology on the card, the US has not. Adopting it may one day make it more difficult for criminals to cash in on credit card fraud in the US.

Personally, I just count on having to change my credit card numbers at least yearly - whether because of outright fraud, because the bank (I'll say "bank," but I'm referring to all financial companies in this article) has been compromised, or because I've made legitimate charges that panic the bank into canceling the card. All that good fraud detection comes with a price to the card holder.

I've worked on the fraud issue since that early Visa project. Other than the fact that the work is about preventing a negative for the company, these are actually fun, detective-work projects. For those who have not had the opportunity, today I decided to share some of the architecture behind fraud prevention, using the approach of one of the leading international providers of payment systems, ACI Worldwide (Nasdaq: ACIW), and their product, ACI Proactive Risk Manager™ 8.0 (PRM).

As the last step in the authorization process, PRM shares a score with the bank, and based on the tolerance the bank has set for the customer (balancing potential fraud against false positives), the bank's system decides whether or not to authorize the transaction.
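To picture the mechanics of that last step, here is a minimal sketch in Python of a bank-side decision based on a risk score and a per-customer tolerance. The function, the score scale, and the "review" band are my own assumptions for illustration, not PRM's actual interface.

```python
def authorize(risk_score: float, tolerance: float) -> str:
    """Decide an authorization from a fraud risk score.

    Hypothetical illustration, not PRM's interface.
    risk_score: 0.0 (no risk) to 1.0 (near-certain fraud), as returned
    by the scoring engine at the end of the authorization process.
    tolerance:  the bank's cutoff for this customer; a higher tolerance
    accepts more fraud risk in exchange for fewer false positives.
    """
    if risk_score <= tolerance:
        return "APPROVE"
    # Many banks add a middle band that routes the transaction to a
    # manual review queue instead of declining outright; the 0.2 width
    # here is an arbitrary assumption.
    if risk_score <= tolerance + 0.2:
        return "REVIEW"
    return "DECLINE"
```

A cautious bank might set a low tolerance for a brand-new customer and raise it as transaction history accumulates - that per-customer dial is exactly the fraud-versus-false-positive balance mentioned above.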

Although the bank may have a data warehouse, all customer transaction sources feed PRM. Some customers extend PRM's capabilities to make it their data warehouse. One year's worth of backlogged transactions is the recommended starting point - even though most banks are legally required to retain seven years of data.

PRM makes decisions at the point of authorization based on three factors (a simplified sketch of how they might combine follows the list):

1. Customer profile - e.g., a customer with a $200/day average tries to charge $500; customers are also lumped into "peer groups," and charges are expected to conform to the pattern of the group - or else!

2. Rule basis - the rules are managed by the bank; they may decide that taking out the maximum from the ATM a minute before AND after midnight is acceptable for this customer - or maybe not

3. Analytics - detecting a pattern in charges that equates to the PRM database of fraudulent patterns
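Here is the simplified sketch promised above, showing one way the three bases could fold into a single risk number. The function names, formulas, and weights are my own assumptions for illustration - PRM's actual internals are proprietary and certainly more sophisticated.

```python
def profile_score(amount: float, daily_avg: float) -> float:
    """How far a charge departs from the customer's norm, in [0, 1].

    e.g., a $500 charge against a $200/day average scores 0.6.
    """
    if amount <= daily_avg:
        return 0.0
    return min((amount - daily_avg) / amount, 1.0)

def rules_score(txn: dict, rules) -> float:
    """Bank-managed rules; each callable returns a penalty in [0, 1].

    e.g., a rule flagging maximum ATM withdrawals a minute before
    AND after midnight.
    """
    return max((rule(txn) for rule in rules), default=0.0)

def risk_score(txn: dict, daily_avg: float, rules, pattern_match: float) -> float:
    """Blend the three bases into one score in [0, 1].

    pattern_match is the analytics output: how closely this activity
    matches the database of known fraudulent patterns.
    The weights are arbitrary; a real system would learn and tune them.
    """
    blended = (0.4 * profile_score(txn["amount"], daily_avg)
               + 0.3 * rules_score(txn, rules)
               + 0.3 * pattern_match)
    return min(blended, 1.0)
```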

As a learning system, PRM learns when it has been wrong and tunes accordingly. Pattern detection starts with known fraudulent patterns, such as small charges at a gas pump followed by a Best Buy shopping spree, and goes from there into areas I won't be writing about here. Some are quite nuanced, reflecting the growing sophistication of both the criminal network and the network detecting the crime.
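As a minimal sketch of what "learns when it has been wrong" could mean in practice (my own illustration, not ACI's algorithm), imagine each fraud pattern carrying a weight that is nudged up when it catches confirmed fraud and down when it produces a false positive:

```python
def update_pattern_weight(weight: float, was_fraud: bool, lr: float = 0.05) -> float:
    """Nudge a fraud pattern's weight based on confirmed outcomes.

    Hypothetical illustration of a feedback loop, not PRM's method.
    was_fraud: True if the flagged transaction was confirmed fraudulent,
    False if it turned out to be legitimate (a false positive).
    Weights stay in [0, 1]; lr controls how fast the system adapts.
    """
    if was_fraud:
        weight += lr * (1.0 - weight)   # reinforce a pattern that worked
    else:
        weight -= lr * weight           # dampen a pattern that misfired
    return weight
```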

One of the benefits is that learned fraud patterns are shared across the ACI network. And although ACI brings in third-party syndicated data to enhance customer data, it does not aggregate customer transactions across the network.

So the cat-and-mouse game continues into 2011 and, as with many important initiatives, we find information management critical to the solution.


Posted July 13, 2011 12:19 PM

