

Analysis and Amnesia in IT

Originally published February 3, 2010

During my first job in IT, I rose to the position of analyst and was given a plastic HIPO template as a token of my elevation. This was a stencil with flowchart symbols punched into it, which was used to draw data and process flow diagrams of various kinds. The job of an analyst, I was told, was to understand business processes so that they could be automated. And in those distant mainframe days, there were many business processes that had to be automated. "Systems" analysts, as they were then known, had no problem finding work.

Fast forward a few decades, and plastic templates have been replaced by Visio, PowerPoint, and a host of modeling tools. There have been a good number of methodological advances too. However, all this technical progress stands in sharp contrast to what analysts are now doing, because it seems to be rather different from what I was first told they were supposed to do.

The Ages of Automation and Information

When computers were first widely adopted in the economy, business processes were all manual. This was especially true of the parts of enterprises that maintained books and records. These particular processes involved well-known rules and large volumes of data, and were particularly susceptible to computerization. In those days, applications were all, or very nearly all, built in-house by programmers. Thus, the job of an analyst was typically to understand the manual processes and the data involved in the management of some area of the books and records of the enterprise. And this was always done as part of some larger systems development project that, in those days, always followed the now classic waterfall methodology.

What do we see today? All the manual processes that could be automated were automated long ago. There is still a need to replace transaction applications from time to time, but this is nothing like the pace of conversion of manual processes in the early days. Furthermore, in-house programming for transaction applications has largely been replaced by the purchase, or rental, of packaged applications. In fact, it is a principle in most enterprises to buy or rent before building. It should be no surprise, therefore, that there is far less of the traditional automation work for analysts to do than there was decades ago. Yet the number of analysts seems to me to be at least as large as, and perhaps larger than, it was when I was handed my plastic template all those years ago.

Why is this? With fewer business processes to automate, surely we should need fewer analysts and not more. What exactly are they analyzing these days?

The Challenge for the Modern Analyst

One fertile area is source data analysis. This is the activity needed wherever data is to be integrated, usually for the data warehouses and marts that support business intelligence (BI) solutions. It consists of understanding the structure and content of data sources so that the data can be "cleaned up" and the integration rules derived. Another area for analytic endeavor is still trying to understand processes and business rules. This may be driven not so much by any need to improve processes as by the need to replace some area of obsolete technology with something more rational in terms of IT technical infrastructure. These "re-architecting" or "technical conversion" projects are usually more heavily promoted by IT than they are by the users. It is also true that the business changes, and the user community can sponsor projects to implement new transaction processing systems. However, these still have to support a good deal of the processing that occurred in the older applications, and, again, analysts are needed to precisely understand what is involved.
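In practice, source data analysis usually begins with simple profiling: counting nulls and distinct values per column to expose the inconsistencies that cleansing and integration rules must resolve. A minimal sketch of that first step is below; the column names and sample rows are hypothetical, not drawn from any real system.

```python
from collections import Counter

def profile(rows):
    """Compute per-column null counts and distinct-value counts
    for a list of dict rows (e.g., produced by csv.DictReader)."""
    stats = {}
    for row in rows:
        for col, val in row.items():
            s = stats.setdefault(col, {"nulls": 0, "values": Counter()})
            if val is None or val.strip() == "":
                s["nulls"] += 1       # treat empty strings as nulls
            else:
                s["values"][val] += 1
    return {
        col: {
            "nulls": s["nulls"],
            "distinct": len(s["values"]),
            "most_common": s["values"].most_common(3),
        }
        for col, s in stats.items()
    }

# Hypothetical source extract: the inconsistent country codes are
# exactly the kind of thing an integration rule would reconcile.
rows = [
    {"cust_id": "1001", "country": "US"},
    {"cust_id": "1002", "country": "USA"},
    {"cust_id": "1003", "country": ""},
]
print(profile(rows)["country"])  # 1 null, 2 distinct values
```

A report like this does not replace the analyst's judgment, but it shows where the judgment is needed: here, deciding whether "US" and "USA" are the same fact recorded two ways.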

Now, the work that today's analysts are doing is very important and delivers real value to the enterprise. But at the same time, if we think about it, there is something decidedly odd about IT having to employ people to understand what IT did in the past. Analysts are not trying to understand manual business processes because there are none left to understand. They are trying to understand business processes that have previously been automated by other analysts. The same is true of data. Analysts are looking at source data in databases, not at ledger books containing handwritten entries. The source data exists in databases that previous generations of analysts were responsible for bringing into existence.

It would appear that IT has institutional amnesia. Yet it is also blissfully unaware that there is any kind of problem here. It seems to accept as quite natural the need to maintain large numbers of analysts to perform a kind of continuous digital archaeology. And there is little evidence that this attitude is likely to change anytime soon. Why is this?

The Curse of History

According to the philosophers who have tried to apply metaphysics to the theory of history, we all live and work above a foundation of absolute presuppositions. We rarely question these presuppositions because we are usually not even aware of them. It seems to me probable that the current paradox of unthinkingly employing armies of analysts in IT to discover what IT itself has done in the past must arise from such presuppositions.

The first presupposition seems to arise from the idea that the only unique activity of IT is to build elements of infrastructure, and the classic model is the in-house programming of the early decades of the information age. The systems development life cycle (SDLC) was the new paradigm in those far-off days, and it still rules all thinking today. It is true that different methodologies, such as Agile, have arisen from time to time in the history of IT. However, these have been reactions to the SDLC, and not genuinely original productions. Furthermore, all these methodologies have the same goal as the SDLC, which is to create custom-developed ("bespoke," as the Europeans call it) software. The elements of requirements gathering, analysis, design, programming, testing, implementation, and support appear in all of these methodologies. And the SDLC and its rivals make a very compelling case. The unrecognized issue is that the SDLC and its rivals may not be applicable to today's problems in IT. However, "analysis" is called out in all of these methodologies, and thus every IT manager seems to accept that "analysis," or at least the form of analysis conceived of in the SDLC, is needed just as much today as it ever was.

A second presupposition seems to follow from the notion that all IT activities are packaged into projects, rather than IT ever building any permanent infrastructure for itself beyond what is needed to support a project. The chief characteristics of a project are (a) that it ends, and (b) that it produces a defined product. Because a project ends, everyone involved in it can walk away at the end with no remaining responsibilities. Because a project focuses on a defined product, there is little emphasis on the "how" and "why" of the project beyond getting the product to work. The structure of the product may be described in documentation, but that is little more than a map of its internals, and does not give the level of understanding needed for future reengineering. And, of course, documentation is never trusted. Not enough knowledge is captured in a reliable form to mitigate the need for future analysis.

The heritage of the SDLC and the habit of always working in projects are, I would submit, the origins of the unstated presuppositions about the need for analysts to constantly tell IT today what it has done previously. What can be done about this situation will be considered in a future article, but we first need to recognize that the problem exists.


  • Malcolm Chisholm

    Malcolm Chisholm, Ph.D., has more than 25 years of experience in enterprise information management and data management and has worked in a wide range of sectors. He specializes in setting up and developing enterprise information management units, master data management, and business rules. His experience includes the financial, manufacturing, government, and pharmaceutical industries. He is the author of the books: How to Build a Business Rules Engine; Managing Reference Data in Enterprise Databases; and Definition in Information Management. Malcolm writes numerous articles and is a frequent presenter at industry events. He runs the websites http://www.refdataportal.com; http://www.bizrulesengine.com; and
    http://www.data-definition.com. Malcolm is the winner of the 2011 DAMA International Professional Achievement Award.

    He can be contacted at mchisholm@refdataportal.com.
    Twitter: MDChisholm
    LinkedIn: Malcolm Chisholm

    Editor's Note: More articles, resources, news and events are available in Malcolm's BeyeNETWORK Expert Channel. Be sure to visit today!
