Blog: Wayne Eckerson

Welcome to Wayne's World, my blog that illuminates the latest thinking about how to deliver insights from business data and celebrates out-of-the-box thinkers and doers in the business intelligence (BI), performance management and data warehousing (DW) fields. Tune in here if you want to keep abreast of the latest trends, techniques, and technologies in this dynamic industry.

About the author

Wayne has been a thought leader in the business intelligence field since the early 1990s. He has conducted numerous research studies and is a noted speaker, blogger, and consultant. He is the author of two widely read books: Performance Dashboards: Measuring, Monitoring, and Managing Your Business (2005, 2010) and The Secrets of Analytical Leaders: Insights from Information Insiders (2012).

Wayne is currently director of BI Leadership Research, an education and research service run by TechTarget that provides objective, vendor neutral content to business intelligence (BI) professionals worldwide. Wayne’s consulting company, BI Leader Consulting, provides strategic planning, architectural reviews, internal workshops, and long-term mentoring to both user and vendor organizations. For many years, Wayne served as director of education and research at The Data Warehousing Institute (TDWI) where he oversaw the company’s content and training programs and chaired its BI Executive Summit. He can be reached by email at weckerson@techtarget.com.

March 2012 Archives

Imagine this: Would Google have built the predecessor to Hadoop in the mid-2000s if IBM's InfoSphere Streams (a.k.a. "IBM Streams") had been available? Since IBM Streams can ingest tens of thousands to millions of discrete events per second and perform complex transformations and analytics on those events with sub-second latency, why would Google have bothered to invest the man-hours in building a home-grown, distributed system to create its Web indexes?

Like Hadoop, IBM Streams runs on a cluster of commodity servers, parallelizes programming logic, and handles node outages, relieving developers of many of these low-level tasks. Moreover, contrary to what some may think about Big Blue products, Streams works on just about any data, including text, images, audio, voice, VoIP, video, web traffic, email, GPS data, financial transactions, satellite feeds, and sensor readings.

OK. I suspect Google has a "not invented here" mentality and needs to put its oodles of Java, Python, and C programmers to work doing something innovative to harness the massive reams of data that it collects daily from its sprawling Web and communications empire. And since it was an internet startup at the time, Google probably didn't want to pay for commercial software, and probably still doesn't. (Google was only six years old when it published the MapReduce paper that inspired Hadoop in 2004.) An entry-level implementation of Streams will set you back about $300,000, according to Roger Rea, Streams product manager at IBM.

Origins of CEP

I suppose you could argue that Hadoop inspired Streams. But that's probably not true. Streams emanates from a rather esoteric domain of computing known as Complex Event Processing (CEP), which has been around for more than two decades and has been the focus of a sizable amount of academic research. In fact, renowned database guru and MIT professor Michael Stonebraker threw his hat into the CEP ring in 2003 when he commercialized technology that he had been working on with colleagues from several other universities. His company, StreamBase Systems, was actually a latecomer to the CEP landscape, preceded by companies such as TIBCO, Sybase, Progress Software, Oracle, Microsoft, and Informatica, all of which developed CEP software in-house or acquired it from startups.

CEP technology is about to move from the backwaters of data management to center stage. The primary driver is Big Data, which is at the height of its hype cycle today. Open source technologies, such as Hadoop, have finally made it cost-effective for companies not only to amass mountains of Web and other data, but also to do something valuable with it. And no data is off limits: Twitter feeds, smart meter data, sensor feeds, video surveillance, systems logs, as well as voluminous transaction data. Much of this data arrives so fast that you have to process it in real time or you can never catch up.

Unfortunately, Hadoop is a very young technology at this stage. It's also batch-oriented: you have to dump big, static files of data into a Hadoop cluster and then launch another big job to process or query the data. (Apache does support a project called Flume that is supposed to stream Web log data into Hadoop, but early users report it doesn't work very well.)

Data in Motion

But what if you could process data as events happen? In other words, analyze data in motion instead of data at rest? This is where CEP technologies come into play.

Say you are a manager at a telecommunications company who wants to count the number of dropped calls each day, track customer calling patterns, and identify individuals with pre-paid calling plans who might churn. Every night, you could dedupe and dump all six billion of your company's call detail records (CDRs) into your Hadoop cluster. Then you could issue queries against that entire data set to calculate the summaries and run the churn model. Given the volume of data, it might take more than 12 hours to process everything, and by then it would be two days since the calls were made.

But if our telecommunications manager had a CEP system, he wouldn't have to load anything or run massive queries to get the answers he wants. He would create some rules, point his CEP engine at the CDR event stream, and let it work its magic. The CEP system would first dedupe the data as it arrives by checking each incoming CDR against billions of existing CDRs in a data warehouse. It would then calculate a running summary of dropped calls, summarize call activity by customer, compute the churn model, and deposit the summaries into a SQL database. And it would do all that work in a fraction of a second per event record. A marketing manager could monitor the data on a real-time dashboard and, within minutes of a prepaid customer's final call, send a promotional offer to anyone likely to churn.
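
IBM Streams applications are normally written in its own Streams Processing Language, so the code below isn't Streams code. It's just a toy Python sketch of the pattern I've described, with invented field names, an in-memory stand-in for the warehouse dedupe check, and a deliberately naive three-drops churn rule:

  # Toy sketch of the CEP pattern described above -- not IBM Streams code.
  # Field names, the in-memory dedupe set, and the churn rule are invented.
  from collections import defaultdict

  seen_call_ids = set()                # stands in for the dedupe lookup against the warehouse
  dropped_calls_today = 0
  calls_by_customer = defaultdict(int)
  drops_by_customer = defaultdict(int)

  def send_offer(customer_id):
      print("promotional offer sent to", customer_id)

  def churn_risk(customer_id):
      # Naive stand-in for a real churn model: three or more dropped calls.
      return drops_by_customer[customer_id] >= 3

  def on_cdr(cdr):
      """Process one call detail record the moment it arrives."""
      global dropped_calls_today
      if cdr["call_id"] in seen_call_ids:          # dedupe
          return
      seen_call_ids.add(cdr["call_id"])

      calls_by_customer[cdr["customer_id"]] += 1   # running summaries
      if cdr["dropped"]:
          dropped_calls_today += 1
          drops_by_customer[cdr["customer_id"]] += 1

      # React within the same pass over the stream, not hours later.
      if cdr["plan"] == "prepaid" and churn_risk(cdr["customer_id"]):
          send_offer(cdr["customer_id"])

  # A real engine consumes an endless stream; here we just replay a few events.
  events = [
      {"call_id": 1, "customer_id": "A", "dropped": True, "plan": "prepaid"},
      {"call_id": 1, "customer_id": "A", "dropped": True, "plan": "prepaid"},  # duplicate
      {"call_id": 2, "customer_id": "A", "dropped": True, "plan": "prepaid"},
      {"call_id": 3, "customer_id": "A", "dropped": True, "plan": "prepaid"},
  ]
  for cdr in events:
      on_cdr(cdr)

In a real deployment the summaries would land in a SQL database and the dedupe check would hit the warehouse, but the shape of the logic is the same: every record gets handled once, the moment it arrives.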

Now, if that isn't powerful computing, I'm not sure what is. That's certainly worth $300,000 or even ten times that amount for an enterprise deployment like I've just described. Google be damned!

CEP Use Cases

In a broad sense, CEP software creates a sophisticated notification system that works on high-volume streams of incoming data. You use it to detect anomalies and issue alerts. Fraud detection systems are a classic example of CEP systems in action. But, in reality, CEP offers more value than just pure notification. In fact, in the age of Big Data, other use cases may come to the forefront and even give Hadoop a run for its money.

According to Neil McGovern, who heads worldwide strategy at Sybase, CEP has four use cases (a minimal sketch of the first two follows the list):


  1. Situational Detection. This is the traditional use case in which CEP applies calculations and rules to streams of incoming data and identifies exceptions or anomalies.

  2. Automated Response. This is an extension of situational detection in which CEP automatically takes predefined actions in response to an event or combination of events that exceeds thresholds.

  3. Stream Transformation. Here, CEP transforms incoming events to offload the processing burden from Hadoop, ETL tools, or data warehouses. In essence, CEP becomes the transformation layer in an enterprise data environment. It can filter, dedupe, and calculate data, including running data mining algorithms on a record-by-record basis.

  4. Continuous Intelligence. Here, CEP powers a real-time dashboard environment that enables managers or administrators to keep their fingers on the pulse of an organization or mission-critical process.
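
Here is a minimal sketch, in Python, of how the first two use cases fit together. The rule format, the thresholds, and the sample card-fraud events are all invented for illustration; commercial CEP engines express this kind of logic in their own SQL-like rule languages:

  # Toy sketch of situational detection (use case 1) plus automated
  # response (use case 2). Rule format and sample events are invented.
  from collections import deque
  import time

  class ThresholdRule:
      def __init__(self, predicate, threshold, window_seconds, action):
          self.predicate = predicate      # which events count toward the rule
          self.threshold = threshold      # how many matches trigger a response
          self.window = window_seconds    # sliding time window, in seconds
          self.action = action            # the automated response
          self.hits = deque()             # timestamps of matching events

      def feed(self, event):
          now = time.time()
          if self.predicate(event):
              self.hits.append(now)
          while self.hits and now - self.hits[0] > self.window:
              self.hits.popleft()         # expire events outside the window
          if len(self.hits) >= self.threshold:
              self.action(event)          # automated response
              self.hits.clear()

  # Detection: three declined card swipes within 60 seconds.
  # Response: block the card automatically.
  rule = ThresholdRule(
      predicate=lambda e: e["type"] == "card_declined",
      threshold=3,
      window_seconds=60,
      action=lambda e: print("blocking card", e["card"]),
  )

  for event in [{"type": "card_declined", "card": "1234-5678"}] * 3:
      rule.feed(event)

Swap the predicate, the threshold, and the action, and the same skeleton covers dropped-call alerts, SLA breaches, or fraud checks.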

In many applications, CEP supports all four use cases at once. Certainly, in the era of Big Data, companies would be wise to implement CEP technology as a stream transformation engine that minimizes the amount of data they have to land in Hadoop or a data warehouse. This would reduce their hardware footprint and software execution costs. Even though they are commercial software, CEP products can deliver high ROI in a Big Data environment.

CEP is a technology that offers many valuable uses and is already being adopted by leading-edge companies. SAP plans to embed Sybase's CEP engine in all of its applications, so if you are an SAP user, you'll be benefiting from CEP whether you know it or not. If you are a BI architect, it's time you gave CEP a look to see how it can streamline your existing data processing and analytical operations.


Posted March 26, 2012 12:42 PM

The social media behemoth, Facebook, is expected to be worth $100 billion when it goes public this spring, making it the largest initial public offering (IPO) for an internet company in history. Not bad for a company projected to make about $3 billion in 2011.

The hullabaloo surrounding Facebook's IPO underscores the two sides of being the world's biggest social network. On one hand, by concentrating hundreds of millions of people on a single social media platform, Facebook offers a tantalizing opportunity for advertisers to deliver highly targeted marketing campaigns through a bevy of rich, social applications. On the other, by giving advertisers unparalleled access to people's personal and activity data, Facebook has become the lightning rod in the debate about the proper balance between openness and privacy on the social internet.

A Marketer's Dream

Facebook is a marketer's dream come true. With more than 850 million monthly active users who generate more than 2.7 billion likes and comments a day, Facebook is a treasure trove of continuously updated, highly personalized customer data. Why would a company spend $100 million or more on a customer relationship management (CRM) system, whose data has a half-life of 36 months, if it can tap Facebook's rich set of demographic, psychographic, activity, location, and social network data? Why should it build custom campaigns via email, direct mail, or traditional media if it can use Facebook as a delivery channel for highly targeted offers? This is a no-brainer!

To date, Facebook's efforts to make this incredible information asset accessible to advertising partners have been somewhat disappointing. Currently, marketers can set up their own Facebook pages and communicate with people who friend them, which provide interactivity but are not very targeted. Or they can purchase Facebook display ads, which are targeted but not very interactive.

Facebook Applications. However, the newest Facebook channel for advertisers is the most promising: custom applications built on Facebook's open application programming interfaces (APIs). Many companies have already built Facebook applications and games that provide people with highly personalized content in exchange for their "tokens."

Tokens are the keys to unlocking people's Facebook data. A token is a user's permission to access their data. It's the ultimate opt-in mechanism, and the key to making Facebook applications work. Once a marketer has your token, it can collect everything about you and your friends. To be fair, applications must explicitly request permission to access your data, specifying the content they want to extract. (See figure 1.) As long as marketers have your token, they can extract your data indefinitely and build a rich, historical profile about you.

Figure 1. Facebook Application Token
This is a typical opt-in screen that people see when they activate a Facebook application.

With a token in hand, marketers can request to collect, store, and use any of the user's information held by Facebook. And that's quite a lot of stuff (a short sketch of such a request follows the list). The available data includes:

  • Demographic and psychographic information users write about themselves in their profile:
    • This includes name, gender, birthday, relationship status, friends, religious and political views, hometown, schools attended, current and past occupations, family members, current location, contact information (including phone, address, and email), IP address, and user name.
  • Activity data about what you do on the site:
    • This includes likes/dislikes, status updates, music, photos, videos, links, notes, Facebook applications you've opted into, places you've visited, events you've attended, and basically everything you've posted, linked to, or responded to on Facebook.
  • Demographic and activity data about your friends
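
To make the mechanics concrete, here is a minimal sketch of how an application might pull some of the fields listed above from Facebook's Graph API once it holds a user's token. The token value is hypothetical, and the fields actually returned depend on the permissions the user granted at opt-in and on the API version:

  # Minimal sketch of pulling profile, activity, and friend data with a
  # Graph API access token. The token value is hypothetical; the fields
  # returned depend on the permissions the user granted at opt-in.
  import requests

  ACCESS_TOKEN = "token-granted-at-opt-in"     # hypothetical value
  GRAPH = "https://graph.facebook.com"

  def fetch(path, **params):
      params["access_token"] = ACCESS_TOKEN
      response = requests.get(GRAPH + path, params=params)
      response.raise_for_status()
      return response.json()

  # Demographic fields the user agreed to share.
  profile = fetch("/me", fields="id,name,gender,birthday,hometown,location")

  # Activity data: likes and recent posts.
  likes = fetch("/me/likes")
  posts = fetch("/me/posts")

  # The multiplier effect: the same token also exposes the friend list,
  # subject to each friend's own privacy settings.
  friends = fetch("/me/friends")

  print(profile["name"], "shares", len(friends.get("data", [])), "friends with this app")

As noted above, as long as the token remains valid the application can keep making calls like these and accumulate a historical profile over time.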

This rich set of information is far more descriptive and useful than what exists in most CRM databases today. It's tremendously valuable to marketers, especially those who work in large consumer-oriented organizations that want and need to deliver highly targeted messages to customers and offer better customer service. The best part about the data is that Facebook users keep it current themselves. And if they don't, the social dynamic on Facebook often shames them into correcting inaccurate or intentionally misleading data. With Facebook, marketers can collect customer data without having to pay millions of dollars to cleanse, scrub, and update that data on a regular basis.

Why Share? The socially paranoid might ask why Facebook users willingly hand over so many personal tidbits to Facebook and its application partners. The upside is pretty obvious. For one, they enjoy the social experience on Facebook and want to replicate it on other sites. Second, they want these sites to leverage information they've already entered into Facebook, including their log-on information, so they don't have to re-educate each new site about themselves and their preferences. And last, and most important, Facebook and its partners give them stuff they want.

For instance, Hallmark has a Facebook application called Social Calendar that collects your friends' dates of birth so it can remind you to send them personalized greetings and virtual goods on their birthdays. American Express has an application called "Link>Like>Love" which delivers couponless offers from its partners tailored to your interests gleaned from Facebook that you can redeem online with your American Express card and share with your friends. (See figure 1.) This is social computing at its best. Companies tailor services to you and your friends based on your personal profile, interests, and ongoing activities.

Privacy Concerns

But not everyone thinks that personalized offers are worth sacrificing your personal privacy. With most Facebook applications, the information exchange is an all-or-nothing proposition: people must cede all their information to the provider or they can't use the application. In a marketer's calculus, this is a rational exchange. People provide their personal information and marketers give them highly tailored products and services. Hundreds of millions of Facebook users seem to agree.

But it's unclear how many of these people truly comprehend the amount of data that marketers collect about them and the frequency with which they collect it. Moreover, it's a fair bet that most people don't understand that opting into a Facebook application gives marketers instant access to detailed, personal information about their Facebook friends. All of them.

The Multiplier Effect. Since the average Facebook user has 130 friends, each token that a marketer receives gets magnified a hundredfold or more. Some savvy, consumer-oriented companies have already amassed detailed personal information about millions of people with just tens of thousands of tokens (50,000 tokens times 130 friends apiece is 6.5 million profiles, before accounting for overlapping friend networks). Some of these companies use statistical techniques to enrich Facebook data with salary and psychographic information and then combine it with existing customer data in CRM systems. The result is that corporations can now gather detailed information about large numbers of their customers and prospects. This is a primary reason for Facebook's gravity-defying IPO valuation.

Although the socially paranoid are horrified by this wanton aggregation of personal data in the name of commerce, I'm a bit more sanguine. Currently, it takes a lot of technical sophistication to collect and analyze these vast amounts of customer data points, let alone use them effectively in corporate marketing campaigns. And, truth be told, we want companies to excel at using our data so they can deliver personalized offers of interest to us. Why blanket the market with irrelevant appeals that we tune out?

But privacy advocates counter that governments, insurance companies, and hackers might be able to access this information, exposing the minute details of our lives to people we'd rather not have nosing around in our affairs. They have a point. But you can't have perfect privacy within the context of social media. People engage with social media because they want to share information with others. Those who wish to remain private should not participate. But this doesn't mean we have to jettison privacy entirely. The market clearly wants Facebook and its partners to strike a balance: people want a social experience that also gives them an assurance of privacy and a degree of control.

Facebook Privacy Controls. In the past, Facebook has taken a public whipping for its lack of privacy controls. Today, Facebook still comes under attack, but it does a much better job managing privacy than most of its internet peers, such as Google, which is the undisputed king of activity tracking. Google recently changed its privacy policy so that it can consolidate customer information and activity across its sprawling set of internet domains, including Google Search, Google+, YouTube, Gmail, Google Maps, and Google Apps. And since Google provides the operating system on Android devices, it can now track our every movement and conversation via our smartphones. (To learn how Google tracks your online behavior, read Patricia Seybold's excellent report titled "How Does Google's Privacy Policy Affect You?") Other internet, media, and communications companies offer fewer privacy controls than Facebook, yet paradoxically have largely escaped unwanted attention about their use of personal information. Google, however, is starting to feel the heat, as it should.

For its part, Facebook gives users minute control over every aspect of their privacy. If I'm a savvy Facebook user, I can uncheck all the items I want to keep out of the hands of Facebook marketers when my friends opt in to their applications. (See figure 2.) But unfortunately, the fine print reads, "If you don't want apps and Web sites to access other categories of information (like your friends list, gender, or info you've made public) you can turn off all Platform apps." Huh? To really prevent application marketers from getting your information through friends, you can't use Facebook applications at all. That seems a little draconian, an example of a binary privacy policy--either on or off. People should be able to block individual applications from accessing their data via their friends' tokens. If you can do this, I've missed it.

Figure 2. Facebook Privacy Settings for Applications
This overlay dialogue box shows how people can control the information applications can access through their friends. The fine print at the bottom says that you need to turn off the application Platform entirely to prevent public information, including your friend list, from being captured.

Tacit versus Explicit Approval. Although Facebook's privacy controls give users the ability to determine what personal data Facebook partners can access through a friend's token, the consent is not explicit. In other words, people aren't notified at the moment a marketer gains access to their data. Rather, users give blanket permission to all marketers based on the settings configured in Facebook's privacy pages. But for most people, this approval is a default setting--they never consciously configure the controls. In short, Facebook users give tacit, not explicit, approval to marketers to mine their information. As a result, most people don't realize that their friends are giving away their personal information.

Facebook should bite the bullet and require partner applications to explicitly request friends' permission to gather their data at the time they acquire a token. It should also require partners to disclose that they can collect this data perpetually. This will take courage because explicit approvals disrupt the free flow of information and make the applications less appealing. People might get annoyed with repeated requests for access; marketers won't get as much data about people's friends; and companies will have to work harder to code and manage the applications. But some partners have already stepped up to the plate and do this voluntarily. For example, Hallmark sends an email to each of your friends when you subscribe to its Social Calendar application that requests permission to access their dates of birth.

Simplify Privacy. Facebook can also make its privacy settings easier to access and use. Currently, people have to hit a small down arrow on the home page to access account and privacy settings. Since the arrow doesn't have a label, it almost seems as if Facebook doesn't want people to find these settings. Furthermore, the privacy tab contains 40 checkboxes spread across 10 different screens, half of which deal with Facebook applications. Although the layout and text of these screens are simple and easy to understand, asking people to navigate ten screens and pick the right settings is too much. And not all settings are intuitive, especially for new and less active Facebook users. Did Facebook intentionally make its privacy pages this complex to discourage people from changing the default settings?

If it is just poor design, there's an easy fix. For instance, I'd like to see Facebook create a small graphical privacy widget that runs on people's home pages and lets them choose from three privacy settings, ranging from "Most Private" to "Most Public." The widget would let people move a graphical slider up or down to see what personal information gets blocked or made public in each setting. This is what Internet Explorer does to help people define their Web security settings, and I think it's effective. The widget would also link to Facebook's current privacy controls so people can customize the settings further.
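
Here is a rough sketch of the idea: each slider position maps to a bundle of underlying settings, which users could still fine-tune on the existing privacy pages. The setting names are invented; Facebook's actual options differ.

  # Hypothetical mapping from a three-position privacy slider to underlying
  # settings. The setting names are invented; Facebook's actual options differ.
  PRIVACY_PRESETS = {
      "most_private": {
          "profile_visible_to": "friends",
          "apps_can_access_via_friends": False,
          "public_search": False,
      },
      "balanced": {
          "profile_visible_to": "friends_of_friends",
          "apps_can_access_via_friends": False,
          "public_search": True,
      },
      "most_public": {
          "profile_visible_to": "everyone",
          "apps_can_access_via_friends": True,
          "public_search": True,
      },
  }

  def apply_preset(level):
      # Returns the individual settings the slider position selects; the
      # widget would show the user exactly what gets blocked or made public.
      return dict(PRIVACY_PRESETS[level])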

Summary

Facebook has revolutionized how we use the internet to interact with each other and with corporate entities. By consolidating hundreds of millions of people on a single social media platform, Facebook has unlimited potential to make money as a medium for advertising and targeted marketing. But Facebook also has a responsibility to protect users from the over-exuberant use of personal information by advertisers and marketers. Balancing the demands of marketers with the rights of consumers will be a major challenge for Facebook as it strives to live up to its lofty IPO valuation in the coming years.


Posted March 19, 2012 12:21 PM

Informatica this week cut another notch in its Big Data belt by inking a partnership agreement with MapR, which offers one of the leading Hadoop distributions in the marketplace. The partnership further opens Hadoop to the sizable market of Informatica developers and provides a visual development environment for creating and running MapReduce jobs.

The partnership is fairly standard in Hadoop terms. Informatica can connect to MapR via PowerExchange and apply PowerCenter functions to the extracted data, such as data quality rules, profiling functions, and transformations. Informatica also provides HParser, a visual development environment for parsing and transforming Hadoop data, such as logs, call detail records, and JSON documents. Informatica has already signed similar agreements with Cloudera and Hortonworks.

Deeper Integration. But Informatica and MapR have gone two steps beyond the norm. Because MapR's unique architecture places an alternate file system, accessible via standard NFS (Network File System) interfaces, behind industry-standard Hadoop APIs, Informatica has integrated two additional products with MapR: Ultra Messaging and Fast Clone. Ultra Messaging enables Informatica customers to stream data into MapR, while Fast Clone enables them to replicate data in bulk. In addition, MapR will bundle the community edition of Informatica's HParser, making it the first Hadoop distribution to do so.

The upshot is that Informatica developers can now leverage a good portion of Informatica's data integration platform with MapR's distribution of Hadoop. Informatica is expected to announce the integration of additional Informatica products with MapR later this spring.

The two companies are currently certifying the integration work, which is expected to be finalized by the end of Q1 2012.


Posted March 6, 2012 12:51 PM

The hype and reality of the Big Data movement were on full display this week at the Strata Conference in Santa Clara, California. With a sold-out show of 2,000+ attendees and 40+ sponsors, the conference was the epicenter of all things Hadoop and NoSQL--technologies that are increasingly gaining a foothold in corporate computing environments.

Most of the leading Hadoop distributions--Cloudera, Hortonworks, EMC Greenplum, and MapR--already count hundreds of customers. And it's clear that Big Data has moved from the province of Internet and media companies with large Web properties to nearly every industry. Strata speakers described compelling Big Data applications in energy, pharmaceuticals, utilities, financial services, insurance, and government.

For example, IBM has 200 customers using or testing its BigInsights Hadoop distribution, according to Anjul Bhambhri, vice president of Big Data at Big Blue. One IBM customer, Vestas Wind Systems, a leading wind turbine maker, uses BigInsights to model larger volumes of weather data so it can pinpoint the optimal placement of wind turbines. And a financial services customer uses BigInsights to improve the accuracy of its fraud models by addressing much larger volumes of transaction data.

Big Data Drivers

Hadoop clearly fills an unmet need in many organizations. Given its open source roots, Hadoop provides a more cost-effective way to analyze large volumes of data than traditional relational database management systems (RDBMS). It's also better suited to processing unstructured data, such as audio, video, or images, and semi-structured data, such as Web log data for tracking customer behavior on social media sites. For years, leading-edge companies have struggled to find an optimal way to analyze this type of data in traditional data warehousing environments, without much luck. (See "Let the Revolution Begin: Big Data Liberation Theology.")

Finally, Hadoop is a load-and-go environment: administrators can dump the data into Hadoop without having to convert it into a particular structure. Then, users (or data scientists) can analyze the data using whatever tools they want, which today are typically languages, such as Java, Python, and Ruby. This type of data management paradigm appeals to application developers and analysts, who often feel straitjacketed by top-down, IT-driven architectures and SQL-based toolsets. (See "The New Analytical Ecosystem: Making Way for Big Data.")
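
Hadoop Streaming shows the load-and-go pattern at its simplest: raw files go into HDFS unchanged, and any script that reads stdin and writes stdout can analyze them. The sketch below is illustrative only; the log layout (space-delimited, with the URL in the seventh field), the HDFS paths, and the location of the streaming jar are all assumptions that vary by site and distribution.

  #!/usr/bin/env python
  # Toy Hadoop Streaming job: count page hits from raw web logs dumped into
  # HDFS as-is. The log layout (space-delimited, URL in the 7th field) and
  # the paths below are assumptions for illustration.
  #
  # Load and go:
  #   hadoop fs -put weblogs/ /raw/weblogs
  #   hadoop jar hadoop-streaming.jar \
  #       -input /raw/weblogs -output /out/page_hits \
  #       -mapper "python page_hits.py map" \
  #       -reducer "python page_hits.py reduce" \
  #       -file page_hits.py
  import sys

  def map_phase():
      # Emit (url, 1) for every request line; skip lines that don't parse.
      for line in sys.stdin:
          fields = line.split()
          if len(fields) > 6:
              print("%s\t1" % fields[6])

  def reduce_phase():
      # Hadoop delivers mapper output sorted by key, so one pass suffices.
      current_url, count = None, 0
      for line in sys.stdin:
          url, _, value = line.rstrip("\n").partition("\t")
          if url != current_url:
              if current_url is not None:
                  print("%s\t%d" % (current_url, count))
              current_url, count = url, 0
          count += int(value)
      if current_url is not None:
          print("%s\t%d" % (current_url, count))

  if __name__ == "__main__":
      map_phase() if sys.argv[1] == "map" else reduce_phase()

No schema and no load-time transformations: the structure is imposed by the script at read time, which is exactly the appeal for programmers.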

Speed Bumps

But Hadoop is not a data management panacea. It's clearly at or near the apogee of its hype cycle right now, and its many warts will disillusion all but bleeding- and leading-edge adopters.

For starters, Hadoop is still very green. The Apache Foundation just released the equivalent of version 1.0, so plenty of basic things are missing from the environment--like security, a metadata catalog, data quality, backups, and monitoring and control. Moreover, it's a batch processing environment that is not terribly efficient in the way it exploits a clustered environment. Hadoop knock-offs, like MapR, which embed proprietary technology underneath Hadoop APIs, claim up to fivefold faster performance on half as many nodes.

In addition, to actually run a Hadoop environment, you need to get software from a mishmash of Apache projects with razzle-dazzle names like Flume, Sqoop, Oozie, Pig, Hive, and ZooKeeper. These independent projects often contain competing functionality, have separate release schedules, and aren't always tightly integrated. And each project evolves rapidly. That's why there is a healthy market for Hadoop distributions that package these components into a reasonably coherent, implementable set of software.

But the biggest complaint among Big Data advocates is the current lack of data scientists to build Hadoop applications. These "wunderkinds" combine a rare set of skills: statistics and math, data, process and domain knowledge, and computer programming. Unfortunately, developers have little data and domain experience, and data experts don't know how to program. So there is a severe shortage of talent. Many companies are hiring four people with complementary skills to create one virtual data scientist.

Evolution

One good thing about the Big Data movement is that it evolves fast. There are Apache projects to address most of the shortcomings of Hadoop. One promising project is Hive, which provides SQL-like access to Hadoop, although it's stuck in a batch processing paradigm. Another is HBase, which overcomes Hadoop's latency issues with fast row-based reads/writes, but is designed to support high-performance transactional applications rather than analytics. Both create table-like structures on top of Hadoop files.

In addition, many commercial vendors have jumped into the fray, marrying proprietary technology with open source software to turn Hadoop into a more corporate-friendly compute environment. Vendors such as Zettaset, EMC Greenplum, and Oracle have launched appliances that embed Hadoop with commercial software to offer customers the best of both worlds. Many BI and data integration vendors now connect to Hadoop and can move data back and forth seamlessly. Some even create and run MapReduce jobs in Hadoop using their standard visual development environments.

Perhaps the biggest surprise at Strata was Microsoft's announcement that it plans to open source its Big Data software by donating it to the Apache Foundation. Microsoft has ported Hadoop to Windows Server and is working on an ODBC driver that works with Hive as well as a JavaScript framework for creating MapReduce jobs. These products will open Hadoop to millions of Microsoft developers. And of course, Oracle has already released a Hadoop appliance that embeds Cloudera's Hadoop distribution. If Microsoft and Oracle are on board, there's little that can stop the Big Data train.

Cooperation or Competition?

Although vendors are quick to jump on the Big Data bandwagon, there is some measure of desperation in the move. Established software vendors stand to lose significant revenue if Hadoop evolves without them and gains robust data management and analytical functionality that cannibalizes their existing products. They either need to generate sufficient revenue from new Big Data products or circumscribe Hadoop so that it plays a subservient role to their existing products. Most vendors are hedging their bets and playing both options, especially database vendors who perhaps have the most to lose.

In the spotlight of the Strata Conference, both sides are playing nice and are eager to partner and work together. Hadoop vendors benefit as more applications run on Hadoop, including traditional BI, ETL, and DBMS products. And commercial vendors benefit if their existing tools have a new source of data to connect to and plumb. It's a big new market whose sweet-tasting honey attracts a hive full of bees.

Why Invest in Proprietary Tools? But customers are already asking whether data warehouses and BI tools will eventually be folded into Hadoop environments or the reverse. Why spend millions of dollars on a new analytical RDBMS if you can do that processing without paying a dime in license costs using Hadoop? Why spend hundreds of thousands of dollars on data integration tools if your data scientists can turn Hadoop into a huge data staging and transformation layer? Why invest in traditional BI and reporting tools if your power users can exploit Hadoop using freely available languages and tools, such as Java, Python, Pig, Hive, or HBase?

The Future is Cloudy

Right now, it's too early to divine the future of the Big Data movement and predict winners and losers. It's possible that in the future all data management and analysis will run entirely on open source platforms and tools. But it's just as likely that commercial vendors will co-opt (or outright buy) open source products and functionality and use them as pipelines to magnify sales of their commercial products.

More than likely, we'll get a mélange of open source and commercial capabilities. After all, 30 years after the mainframe revolution, mainframes are still a mainstay at many corporations. In information technology, nothing ever dies; it just finds its niche in an evolutionary ecosystem.


Posted March 2, 2012 9:03 AM