Blog: Barry Devlin

Barry Devlin

As one of the founders of data warehousing back in the mid-1980s, a question I increasingly ask myself over 25 years later is: Are our prior architectural and design decisions still relevant in the light of today's business needs and technological advances? I'll pose this and related questions in this blog as I see industry announcements and changes in the way businesses make decisions. I'd love to hear your answers and, indeed, questions in the same vein.

About the author

Dr. Barry Devlin is among the foremost authorities in the world on business insight and data warehousing. He was responsible for the definition of IBM's data warehouse architecture in the mid '80s and authored the first paper on the topic in the IBM Systems Journal in 1988. He is a widely respected consultant and lecturer on this and related topics, and author of the comprehensive book Data Warehouse: From Architecture to Implementation.

Barry's interest today covers the wider field of a fully integrated business, covering informational, operational and collaborative environments and, in particular, how to present the end user with an holistic experience of the business through IT. These aims, and a growing conviction that the original data warehouse architecture struggles to meet modern business needs for near real-time business intelligence (BI) and support for big data, drove Barry’s latest book, Business unIntelligence: Insight and Innovation Beyond Analytics, now available in print and eBook editions.

Barry has worked in the IT industry for more than 30 years, mainly as a Distinguished Engineer for IBM in Dublin, Ireland. He is now founder and principal of 9sight Consulting, specializing in the human, organizational and IT implications and design of deep business insight solutions.

Editor's Note: Find more articles and resources in Barry's BeyeNETWORK Expert Channel and blog. Be sure to visit today!

In the year since Edward Snowden spoke out on governmental spying, much has been written about privacy but little enough done to protect personal information, either from governments or from big business.

It's now a year since the material gathered by Edward Snowden at the NSA was first published by the Guardian and Washington Post newspapers. In one of a number of anniversary-related items, Vodafone revealed that secret wires are mandated in "about six" of the 29 countries in which it operates. It also noted that, in addition, Albania, Egypt, Hungary, India, Malta, Qatar, Romania, South Africa and Turkey deem it unlawful to disclose any information related to wiretapping or content interception. Vodafone's move is to be welcomed. Hopefully, it will encourage further transparency from other telecommunications providers on governmental demands for information.

However, governmental big data collection and analysis is only one aspect of this issue. Personal data is also of keen interest to a range of commercial enterprises, from telcos themselves to retailers and financial institutions, not to mention the Internet giants, such as Google and Facebook, which are the most voracious consumers of such information. Many people are rightly concerned about how governments--from allegedly democratic to manifestly totalitarian--may use our personal data. To be frank, the dangers are obvious. However, commercial uses of personal data are more insidious, and potentially more dangerous and destructive to humanity. Governments at least purport to represent the people to a greater or lesser extent; commercial enterprises don't even wear that minimal fig leaf.

Take, as one example among many, indoor proximity detection systems based on Bluetooth Low Energy devices such as Apple's iBeacon and Google's rumored upcoming Nearby. The inexorable progress of communications technology--smaller, faster, cheaper, lower power--enables more and more ways of determining the location of your smartphone or tablet and, by extension, you. The operating system or app on your phone requires an opt-in to enable it to transmit your location. However, it is becoming increasingly difficult to avoid opting in, as many apps require it to work at all. More worrying are the systems that record and track, without asking permission, the MAC addresses of smartphones and tablets that poll public Wi-Fi network routers, as all such devices automatically do. (See, for example, this article, subscription required.) The only way to avoid such tracking is to turn off the device's Wi-Fi receiver. On the desktop, the situation is little better, with Facebook last week joining Google and Yahoo! in ignoring browser "do not track" settings.
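To see why such passive Wi-Fi tracking is so powerful, consider a minimal sketch of how logged probe requests can be stitched into a per-device movement history. The log entries, field names and MAC addresses here are entirely hypothetical; the point is only how little code the correlation takes:

```python
from collections import defaultdict

# Hypothetical probe-request log: (device MAC address, router location, time seen).
# Real systems capture these passively; no opt-in by the device owner is involved.
probe_log = [
    ("aa:bb:cc:11:22:33", "mall_entrance", "09:01"),
    ("aa:bb:cc:11:22:33", "food_court",    "09:40"),
    ("de:ad:be:ef:00:01", "mall_entrance", "09:05"),
    ("aa:bb:cc:11:22:33", "electronics",   "10:15"),
]

def movement_profiles(log):
    """Group sightings by device MAC, ordered by time of sighting."""
    profiles = defaultdict(list)
    for mac, location, ts in sorted(log, key=lambda entry: entry[2]):
        profiles[mac].append((ts, location))
    return dict(profiles)

profiles = movement_profiles(probe_log)
# One MAC address now yields a three-stop movement history through the building.
```

A few dozen routers and a persistent store are enough to turn this toy into exactly the kind of tracking described above, which is why turning off the Wi-Fi receiver is currently the only real defense.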

It would be simple to blame the businesses involved--both the technology companies that develop the systems and the businesses that buy or use the data. They certainly must take their fair share of responsibility, together with the data scientists and other IT staff involved in building the systems. But the reality is that it is we, the general public, who hand over our personal data without a second thought about its possible uses, who must step up to demanding real change in the collection and use of such data. This demands significant rethinking in at least two areas.

First is the oft-repeated marketing story that "people want more targeted advertising", reiterated again last week by Facebook's Brian Boland. A more nuanced view is provided by Sara M. Watson, a Fellow at the Berkman Center for Internet and Society at Harvard University, in a recent Atlantic article Data Doppelgängers and the Uncanny Valley of Personalization: "Data tracking and personalized advertising is often described as 'creepy.' Personalized ads and experiences are supposed to reflect individuals, so when these systems miss their mark, they can interfere with a person's sense of self. It's hard to tell whether the algorithm doesn't know us at all, or if it actually knows us better than we know ourselves. And it's disconcerting to think that there might be a glimmer of truth in what otherwise seems unfamiliar. This goes beyond creepy, and even beyond the sense of being watched."

I would suggest that given the choice between less irrelevant advertising or, simply, less advertising on the Web, many people would opt for the latter, particularly given the increasing invasiveness of the data collection needed to drive allegedly more accurate targeting. Clearly, this latter choice would not be in the interest of the advertising industry, a position that crystallizes in the widespread resistance to limits on data gathering, especially in the United States. An obvious first step in addressing this issue is a people-driven, legally mandated move from opt-out data gathering to a formal opt-in approach. To be really useful, of course, this would need to be preceded by a widespread mass deletion of previously gathered data.

This leads directly to the second area in need of substantial rethinking--the funding model for Internet business. Most of us accept that "there's no such thing as a free lunch". But a free email service, Cloud store or search engine, well apparently that's eminently reasonable. Of course, it isn't. All these services cost money to build and run, costs that are covered (with significant profits in many cases) by advertising--more of it, and supposedly better targeted, via big data and analytics.

There is little doubt that the majority of people using the Internet gain real, daily value from it. Today, that value is paid for through personal data. The loss of privacy seems barely noticed. People I ask are largely uninterested in any possible consequences. However, privacy is the foundation for many aspects of society, including democracy--as can be clearly seen in totalitarian states, where widespread surveillance and destruction of privacy are among the first orders of business. We, the users of the Web, must do the unthinkable: we must demand the right to pay real money for mobile access, search, email and so on in exchange for an end to tracking personal data.

These are but two arguably simplistic suggestions to address issues that have been made more obvious by Snowden's revelations. A more complete theoretical and legal foundation for a new approach is urgently needed. One possible starting point is The Dangers of Surveillance by Neil Richards, Professor of Law at Washington University Law, published in the Harvard Law Review a few short months before Snowden spilled at least some of the beans.

Image courtesy Marc Kjerland

Posted June 19, 2014 12:53 AM
Permalink | No Comments |
Thoughts on the societal impact of the Internet of Things inspired by a unique dashboard product.

A newcomer to the BBBT on 2nd May, Kerry Gilger, founder of VisualCue, took the members by storm with an elegant, visually intuitive and, to me at least, novel approach to delivering dashboards. VisualCue is based on the concept of a tile that represents a set of metrics as icons colored according to their state relative to defined threshold values. The main icon in the tile shown here represents the overall performance of a call center agent, with the secondary icons showing other KPIs, such as total calls answered, average handling time, sales per hour worked, customer satisfaction, etc. Tiles are assembled into mosaics, which function rather like visual bar charts that can be sorted according to the different metrics, drilled down to related items and displayed in other formats, including tabular numbers.
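The tile concept can be sketched in a few lines. This is not VisualCue's actual data model--the metric names and thresholds below are hypothetical--but it shows the two ideas at work: each metric maps to a status color by comparison against defined thresholds, and tiles in a mosaic can be sorted on any metric like a visual bar chart:

```python
def metric_color(value, green_at, amber_at):
    """Map a metric value to a status color against two thresholds
    (assumes higher values are better for this metric)."""
    if value >= green_at:
        return "green"
    if value >= amber_at:
        return "amber"
    return "red"

# Hypothetical call-center agent tiles, one KPI each for brevity.
tiles = [
    {"agent": "A", "sales_per_hour": 12.0},
    {"agent": "B", "sales_per_hour": 7.5},
    {"agent": "C", "sales_per_hour": 9.8},
]

# Color each tile's icon relative to defined threshold values.
for tile in tiles:
    tile["color"] = metric_color(tile["sales_per_hour"], green_at=10.0, amber_at=8.0)

# Assemble a mosaic sorted on the metric, best performers first.
mosaic = sorted(tiles, key=lambda t: t["sales_per_hour"], reverse=True)
```

A real tile carries many such metrics at once, which is precisely what makes the format so dense with information--and, as argued below, so powerful.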

The product seems particularly useful in operational BI applications, with Kerry showing examples from call centers, logistics and educational settings. The response of the BBBT members was overwhelmingly positive. @rick_vanderlans described it as "revolutionary technology", while @gildardorojas asked "why we didn't have before something as neat and logical?" @marcusborba opined "@VisualCue's capability is amazing, and the data visualization is gorgeous!"

So, am I being a Luddite, or even a curmudgeon, to have made the only negative comments of the call? My concern was not about the product at all, but rather around the power it unleashes simply by being so good at what it does. Combine this level of ease-of-use in analytics with big data and, especially, data from the Internet of Things, and we take a quantum leap from measurement to invasiveness, from management to Big-Brother-like control.

Each of the three example use cases described by Gilger provided wonderful examples of real and significant business benefit; but, taken together, they also opened up appalling possibilities of abuse of privacy, misappropriation of personal information and disempowerment of the people involved. I'll briefly explore the three examples, realizing that in the absence of the full story, I'm undoubtedly imagining some aspects. Nor is this about VisualCue (who tweeted that "Privacy is certainly a critical issue! We focus on presenting data that an organization already has--maybe we make it obvious") or the companies using it; it's meant to be a warning that we who know some of the possibilities--positive and negative--offered by big data analytics must consider in advance the unintended consequences.

Detailed monitoring of call center agents' performance is nothing new. Indeed, it is widely seen as best practice and key to improving both individual and overall call center results. VisualCue, according to Gilger, has provided outstanding performance gains, including one center where agents in competition with peers have personally sought out training to improve their own metrics, something that is apparently unheard of in the industry. Based on past best practices and detailed knowledge of where the agent is weak, VisualCue can provide individually customized advice. In a sense, this example illustrates the pinnacle of such use of monitoring data and analytics to drive personnel performance. But within it lie the seeds of its own destruction. As the agent's job is more and more broken down into repeatable tasks, each measurable by a different metric, human innovation and empathy are removed and the job prepared for automation. In fact, a 2013 study puts at 99% the probability that certain call center jobs, particularly telemarketing, will soon be eliminated by technology.

The old adage "what you can't measure, you can't manage" is at the heart of traditional BI. In an era when data was scarce and often incoherent, this focus made sense. However, applying it to all aspects of life today is, to me, ethically problematic. The example of monitoring the entire scope of an educational institution in a single dashboard--from financials through administration to student performance--is a case where our ability to analyze so many data points leads to the illusion that we can manage the entire process mechanically. The Latin root of "educate" means "to draw forth" from the student, the success of which simply cannot be gauged through basic numerical measures, and is certainly not correlated with the business measures of the institution.

The final example of tracking the operational performance of a waste management company's routes, trucks and drivers emphasizes our growing ability to measure and monitor the details of real life minute by minute. By continuously tracking the location and engine management signals from its trucks, the dashboard created by this company enabled it to make significant financial savings and improvements to its operational performance. However, it also enables supervisors to drill into the ongoing behavior of the company's drivers: deviations from planned routes, long stops with the engine running, extreme braking, exceeding the speed limit, etc. While presumably covered by their employment contract, such micromanagement of employees is at best disempowering and at worst open to abuse by increasingly all-seeing supervisors. Of much greater concern is the fact that these sensors are increasingly embedded in private automobiles and that such tracking capability is already being applied without owners' consent to smartphones. As far back as a year ago, Euclid Analytics had already tracked about 50 million devices in 4,000 locations according to a New York Times blog.
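The drill-down described here is trivial to implement once the telemetry exists, which is part of what makes it worrying. A minimal sketch--with hypothetical field names and thresholds, not the company's actual system--of flagging driver events from engine and GPS records:

```python
def flag_events(records, speed_limit=100, idle_limit_min=10, hard_brake_ms2=7.0):
    """Scan telemetry records and flag behaviors a supervisor might drill into.

    Each record is a dict with hypothetical keys: 'speed_kmh',
    'idle_minutes' (engine running while stopped) and 'decel_ms2' (braking force).
    Returns (record index, flag name) pairs.
    """
    flags = []
    for i, rec in enumerate(records):
        if rec["speed_kmh"] > speed_limit:
            flags.append((i, "speeding"))
        if rec["idle_minutes"] > idle_limit_min:
            flags.append((i, "long_idle"))
        if rec["decel_ms2"] > hard_brake_ms2:
            flags.append((i, "extreme_braking"))
    return flags

# Three hypothetical telemetry samples from one truck.
telemetry = [
    {"speed_kmh": 95,  "idle_minutes": 2,  "decel_ms2": 2.1},
    {"speed_kmh": 112, "idle_minutes": 0,  "decel_ms2": 8.4},
    {"speed_kmh": 60,  "idle_minutes": 25, "decel_ms2": 1.0},
]

events = flag_events(telemetry)
# Sample 1 is flagged for speeding and extreme braking; sample 2 for long idling.
```

Once such flags feed a per-driver dashboard tile, the step from fleet optimization to minute-by-minute surveillance of an individual is no step at all.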

I'm grateful to Kerry Gilger for sharing the use cases that inspired my speculations above. Of course, my point is beyond the individual companies involved and products used. At issue is the range of social and ethical dilemmas raised by the rapid advances in sensor technology, data gathered and the power of analytic software. Our every action online is already monitored by the likes of Google and Facebook for profit and by organizations like the NSA allegedly for security and crime prevention. The level of monitoring of our physical lives is now rapidly increasing. Anonymity is rapidly disappearing, if not already extinct. Our personal privacy rights are being usurped by the data gathering and analysis programs of these commercial and governmental organizations, as eloquently described by Shoshana Zuboff of Harvard Business and Law schools in a recent article in Frankfurter Allgemeine Zeitung.

It is imperative that those of us who have grown up with and nurtured business intelligence over the past three decades--from hardware and software vendors, to consultants and analysts, to BI managers and implementers in businesses everywhere--begin to think deeply about the ethical, legal and societal issues now being raised. We must take action to guide the industry and society appropriately, through the development of new codes of ethical behavior and use of information, and through input to national and international legislation.


Posted May 4, 2014 6:26 AM
Permalink | No Comments |
In an era of "big data this" and "Internet of Things that", it's refreshing to step back to some of the basic principles of defining, building and maintaining data stores that support the process of decision making... or data warehousing, as we old-fashioned folks call it. Kalido did an excellent job last Friday of reminding the BBBT just what is needed to automate the process of data warehouse management. But, before the denizens of the data lake swim away with a bored flick of their tails, let me point out that this matters for big data too--maybe even more so. I'll return to this towards the end of this post.

In the first flush of considering a BI or analytics opportunity in the business and conceiving a solution that delivers exactly the right data needed to address that pesky problem, it's easy to forget the often rocky road of design and development ahead. More often forgotten, or sometimes ignored, is the ongoing drama of maintenance. Kalido, with their origins as an internal IT team solving a real problem for the real business of Royal Dutch Shell in the late '90s, have kept these challenges front and center.

All IT projects begin with business requirements, but data warehouses have a second, equally important, starting point: existing data sources. These twin origins typically lead to two largely disconnected processes. First, there is the requirements activity often called data modeling, but more correctly seen as the elucidation of a business model, consisting of function required by the business and data needed to support it. Second, there is the ETL-centric process of finding and understanding the existing sources of this data, figuring out how to prepare and condition it, and designing the physical database elements needed to support the function required.

Most data warehouse practitioners recognize that the disconnect between these two development processes is the origin of much of the cost and time expended in delivering a data warehouse. And they figure out a way through it. Unfortunately, they often fail to recognize that each time a new set of data must be added or an existing set updated, they have to work around the problem yet again. So, not only is initial development impacted, but future maintenance remains an expensive and time-consuming task. An ideal approach is to create an integrated environment that automates the entire set of tasks from business requirements documentation, through the definition and execution of data preparation, all the way to database design and tuning. Kalido is one of a small number of vendors who have taken this all-inclusive approach. They report build effort reductions of 60-85% in data warehouse development.

Conceptually, we move from focusing on the detailed steps (ETL) of preparing data to managing the metadata that relates the business model to the physical database design. The repetitive and error-prone donkey-work of ETL, job management and administration is automated. The skills required in IT change from programming-like to modeling-like. This has none of the sexiness of predictive analytics or self-service BI. Rather, it's about real IT productivity. Arguably, good IT shops always create some or all of this process- and metadata-management infrastructure themselves around their chosen modeling, ETL and database tools. Kalido is "just" a rather complete administrative environment for these processes.
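To make the shift from ETL coding to metadata management concrete, here is a deliberately tiny sketch. The business model, type names and mapping are hypothetical illustrations, not Kalido's actual metadata format; the point is that the physical design falls out of metadata rather than being hand-coded:

```python
# Hypothetical business model: entities, their attributes and business data types.
business_model = {
    "Customer": {"customer_id": "identifier", "name": "text", "joined": "date"},
    "Sale":     {"sale_id": "identifier", "amount": "money", "sold_on": "date"},
}

# Metadata relating business types to physical column types.
TYPE_MAP = {
    "identifier": "BIGINT",
    "text": "VARCHAR(200)",
    "date": "DATE",
    "money": "DECIMAL(12,2)",
}

def generate_ddl(model):
    """Derive CREATE TABLE statements from the business model via metadata."""
    statements = []
    for entity, attrs in model.items():
        cols = ",\n  ".join(f"{name} {TYPE_MAP[btype]}" for name, btype in attrs.items())
        statements.append(f"CREATE TABLE {entity.lower()} (\n  {cols}\n);")
    return statements

ddl = generate_ddl(business_model)
# Changing the business model regenerates the physical design; no DDL is hand-written.
```

When a new entity or attribute arrives, only the model changes--which is exactly where the claimed 60-85% reduction in build effort comes from in an approach like this.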

Which brings me finally back to the shores of the data lake. As described, the data lake consists of a Hadoop-based store of all the data a business could ever need, in its original structure and form, and into which any business user can dip a bucket and retrieve the data required without IT blocking the way. However, whether IT is involved or not, the process of understanding the business need and getting the data from the lake into a form that is useful and usable for a decision-making requirement is identical to that described in my third paragraph above. The same problems apply. Trust me, similar solutions will be required.

Image: http://inhabitat.com/vikas-pawar-skyscraper/



Posted March 17, 2014 5:33 AM
Permalink | No Comments |
