

Blog: Merv Adrian


Welcome to my BeyeNETWORK blog! Please join me often to share your thoughts and observations on new analytic platforms, BI and data management. I maintain a vendor-focused practice that uses primary research, briefings, case studies, events and other activities as a source for commentary on strategy and execution in the marketplace. I believe a new class of analytic platforms, along with emerging data management and advanced tools, heralds a next step in the maturity of information technology, and I'm excited to be present for it. I hope my blog entries will stimulate ideas that serve both the vendors creating these new solutions and the companies that will improve their business prospects by applying them. Please share your thoughts and input on the topics.

 

 

Recently in Software Category

Xkoto, the database virtualization pioneer, has generated substantial interest since its first deployments in 2006. Still privately held and in investment mode, Xkoto sees profitability on the horizon, but offers no target date, and appears in no hurry. Its progress has been steady: in early 2008, a B round of financing led by GrandBanks Capital allowed a step up to 50 employees as the company crossed the 50 customer mark. 2008 also saw Xkoto adding support for Microsoft SQL Server to its IBM DB2 base. Charlie Ungashick, VP of marketing for Xkoto, says that 2009 has been going well, and the third quarter was quite strong. And at the end of September 2009, Xkoto announced GRIDSCALE version 5.1, which adds new cluster management capabilities to its active-active configuration model, as well as Amazon EC2 availability.

"Traditional" models of passive failover and passive disaster recovery are high-cost, inflexible architectures. Xkoto's GRIDSCALE replaces them with multiple identical database copies that it manages in an active-active configuration for lower-cost scale-out and disaster recovery (DR). Applications don't "see" it; GRIDSCALE captures the SQL statements - both DDL and DML - and replicates them to the copies with an optimistic, non-synchronous protocol: the first successful response goes back to the application.
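The first-successful-response idea can be sketched in a few lines. This is a minimal, hypothetical illustration of the pattern, not xkoto's implementation; the `replicas` objects and their `.execute()` method are stand-ins for real driver connections.

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

def execute_everywhere(replicas, statement):
    """Fan one SQL statement (DDL or DML) out to every database copy and
    return the first successful response -- the optimistic, non-synchronous
    pattern described above. A slow or failed copy never blocks the caller
    as long as at least one copy answers."""
    errors = []
    with ThreadPoolExecutor(max_workers=len(replicas)) as pool:
        futures = [pool.submit(r.execute, statement) for r in replicas]
        # as_completed yields futures in completion order, so the first
        # copy to answer without an error is what the application sees.
        for future in as_completed(futures):
            try:
                return future.result()
            except Exception as exc:  # a lagging or failed copy
                errors.append(exc)
    raise RuntimeError(f"all {len(replicas)} copies failed: {errors}")
```

A real middleware layer would also have to keep the lagging copies consistent (replaying statements they missed), which is where most of the engineering lives; the sketch shows only the routing decision.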

Customers include CNN, Puma, HSBC, and the US Department of Homeland Security. Ungashick says Western Europe is doing very well - half of Q3 revenue came from Europe. The firm has recently added direct sales headcount in South Africa and Asia, and continues to add more. The partnership with IBM has been instrumental in some big wins for both parties. Xkoto is arguably the closest thing DB2 has to an answer to Oracle's RAC, and Xkoto participated with IBM in several deals that needed this capability.

GRIDSCALE version 5.1's Amazon EC2 availability enables multiple DB2 databases running in the cloud to work together, avoiding the limitations formerly resulting from the lack of shared storage - allowing load distribution in the cloud for the first time. Version 5.1 also adds automatic recovery, Kerberos support for authentication, and other features as described here.

The SQL Server version of GRIDSCALE 5.1 features a new, "driverless" configuration. Native SQL Server drivers, including ADO, the .NET Framework, and OLE DB, are now supported; GRIDSCALE has implemented the Tabular Data Stream (TDS) protocol Microsoft inherited from Sybase and updated. Microsoft SQL Server Enterprise Manager and other tools compatible with Microsoft interfaces can be used to manage the server instances. Ungashick says he's seeing more opportunities with Microsoft where the competition is Oracle RAC, similar to what Xkoto had already been seeing with DB2 prospects:

"A number of situations have arisen recently with SQL Server customers recognizing that their data warehouses are not adequately designed for availability and disaster recovery. As the DWs become more important to the business, we think we'll see much more interest in that use case," he said.

Xkoto is hoping to leverage the Microsoft community to drive business there. The recognition the company has already received - Best of Microsoft Tech•Ed 2009 and Gartner's "Cool Vendor in IT Operations and Virtualization" among them - will go a long way towards boosting its visibility. This is a promising model, and Xkoto has the early lead - which will be a challenge to hold as the big database vendors add their own capabilities in this area. Meanwhile, Ungashick says, there are other database products that could use similar capabilities, and we can expect to see announcements with others in the year ahead.


Posted December 21, 2009 11:00 AM
Permalink | 1 Comment |

At IBM's 8th annual Connect meeting with analysts, Steve Mills, Senior VP and Group Executive, had much to crow about. Software is the engine driving IBM's profitability, anchoring its customer relationships, and enabling the vaulting ambition to drive the company's Smarter Planet theme into the boardroom. Mills' assets are formidable: 36 labs worldwide have more than 100 software developers each, plus 49 more with over 20 - some 25,000 developers in all. Mills showcased all this in a matter-of-fact, businesslike fashion with minimal hype and little competitor bashing. A research project aimed at extending Hadoop usage to a broader audience was among the highlights.

Mills gave us a look at his organizing principle:

"We have been working on extending the notion of what middleware is. It's about connecting an organization's applications, the codification of business process and function."

Companies from midmarket to large enterprises run thousands of applications; understanding customers' business scenarios, addressing identified gaps and promoting recommended patterns for success - adoption routes, solution stacks - is the driver. "It's very easy to make a mess if you're not guided," Mills points out. He's an effective, dedicated proponent of IBM's Smarter Planet theme, and returned to it at this event, pointing out how IBM-supported projects that instrument and enhance the world's often aging physical systems pay for themselves in efficiency savings even before the larger goals they enable are considered. He also held forth on other favorite topics: Industry Models, Cloud Computing ("You have to talk about Cloud"), and more, but told us he'd promised not to use all his team's best slides before they could. "Not that I can't talk about all of it," he joked - and we've seen him do it. But no three-hour keynotes here, mercifully, unlike some other vendors' recent events.

Bringing Hadoop to Business Users

In his presentation, Rod Smith, VP, Emerging Internet Technologies, made it clear that the company is not ignoring the MapReduce/Hadoop phenomenon. He referred graciously to Cloudera's work and picked up their phrase: "big data." With the world creating nearly 15 PB of new data per day, a new class of content-centric WebApps is on the horizon, typically "longer running apps" - customers Smith talks with don't like the word "batch," he noted. But his focus was different from that of other vendors I've been hearing from, who assume the "big data" opportunity is limited to the sophisticated programmers who have so far led the way. Instead, "Put the business person in the center of the data," Smith suggested. "They want their own Google" - here meaning not a search engine, but a data interaction tool capable of visualization and other forms of manipulation.

It's clear that the need for such solutions will be there, and someone will fill it. When a firm like Extrabux can process 40GB/day, loading and indexing 70 million constantly changing input records for MapReduce by processing on Amazon's EC2 cloud for less than $5,000 per year - with no DBA - others will follow. (See the September issue of Charles Brett's Insight-Spectra for details of this case study.) Like other explorers in this new mode, Smith offered his own great examples, including a Visa risk modeling app using Hadoop with the R statistical libraries that cut an analysis from one month to 13 minutes. "This is not incrementally better; it changes everything," he said.

Smith's Big Sheets project showed off analysis performed on over 2 million patent documents - a "one person project, like all my things." He referred to the iTunes interface and showed a similarly clean, intuitive model. And he pointed out that "the data operated on does not always get reduced; here it exploded, because one analysis was of how patents made references to other patents." Similar things happen when analyzing social graphs; it's why focusing on MapReduce alone to describe these cases doesn't always paint the full picture. It's just one step in more complex processes that can be distributed around large systems which scale on demand as needs dictate. Similar thinking about user empowerment, without the elastic scaling (yet), is behind Microsoft's PowerPivot, which treats Excel as the UI, and adds operators to the Excel language which mimic the kinds of things MDX programmers can do with OLAP cubes, among other things.
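The "data explodes" point is easy to see in miniature. In a hypothetical map phase over patent records, each input document emits one intermediate record per reference it makes, so a corpus whose patents cite many others produces more map output than input. The record layout below is invented for illustration:

```python
def map_citations(patent):
    """Map phase: one input record in, many output records out.
    `patent` is a hypothetical dict: {"id": ..., "cites": [...]}.
    Emitting (cited_patent, citing_patent) pairs means the shuffle
    groups all citers of each patent together for the reduce step."""
    return [(cited, patent["id"]) for cited in patent["cites"]]

corpus = [
    {"id": "US100", "cites": ["US001", "US002", "US003"]},
    {"id": "US101", "cites": ["US001", "US004"]},
]
edges = [pair for p in corpus for pair in map_citations(p)]
# Two input records have become five intermediate records -- the
# citation graph is larger than the document list that encodes it.
```

The same expansion happens when unrolling a social graph, which is why "reduce" alone is a misleading name for what these pipelines do to data volume.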

IBM is looking past today's MR cases, which are often reminiscent of early computing days, when specialists spent days setting up machines for a single program run. The problem then was scale too, and learning how to use machine resources efficiently was job one. Today, the economics have flipped - we understand that the people resources are more valuable and we have to empower them. IBM is looking beyond complex setup, Java coding and single-run models for "big data" processing and towards interactive big data analysis - at Web scale. In Smith's view, that's the key to moving into an "evidence-based business world." IBM is focused on hiding the complex details of system parallelization, fault tolerance, load balancing and the like from the user, keeping everything behind the UI. Tech details weren't at the top of Smith's agenda for this presentation, but REST interfaces, the use of Jaql, extensibility via UDFs, integration of Pig, and exporting results into feeds and XML were briefly highlighted. As IBM continues to push at this area, we can expect to see some breakthrough innovations emerge, in larger, end-to-end scenarios.
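What "hiding the complex details" means in practice can be sketched in a few lines: the caller supplies only a map function and a reduce function, and partitioning, parallel execution and the shuffle happen behind a single call. This is an illustrative toy under those assumptions, not IBM's tooling:

```python
from collections import defaultdict
from concurrent.futures import ThreadPoolExecutor

def run_mapreduce(records, map_fn, reduce_fn, workers=4):
    """Run map_fn over records in parallel, group the emitted
    (key, value) pairs by key, and reduce each group.
    The caller never sees the thread pool or the shuffle."""
    # Parallel map phase.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        mapped = list(pool.map(map_fn, records))
    # Shuffle: group intermediate pairs by key.
    groups = defaultdict(list)
    for pairs in mapped:
        for key, value in pairs:
            groups[key].append(value)
    # Reduce each group to a final value.
    return {key: reduce_fn(key, values) for key, values in groups.items()}

# Word count, the classic example, over three tiny "documents".
docs = ["big data", "big sheets", "data data"]
counts = run_mapreduce(
    docs,
    map_fn=lambda doc: [(word, 1) for word in doc.split()],
    reduce_fn=lambda word, ones: sum(ones),
)
```

The design point is the interface, not the engine: a production system swaps the thread pool for a cluster and adds fault tolerance and data locality, but the user-facing contract - "give me a map and a reduce" - stays this small, which is what makes a spreadsheet-style front end like the one Smith demonstrated plausible.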


Posted December 3, 2009 9:41 AM
Permalink | No Comments |
