Solving IBM's IT Conundrums: 'Integration' Is the Word
If it's late November or early December, I'm usually traveling to or from Westchester County, New York, home of IBM and host to its Software Group (SWG) and Systems and Technology Group (STG) annual IT analyst confabs. In practical ways, these events tend to mirror one another; past strategies and solutions are trotted out for a quick going-over, current efforts are polished and examined, and future plans are discussed at some length. In that sense, this year was very much like every one before. But 2011 also marked the third analyst conference since IBM formally integrated STG with SWG, placing the entire shebang in the hands of SVP and Group Executive Steve Mills.
The 2009 STG conference occurred just a few weeks after this realignment was announced, so if any analysts expected major changes, they came away empty-handed. Last year, signs of the reorganization were increasingly apparent, particularly in the company's emphasis on new, highly integrated business analytics/intelligence solutions, including then-recently acquired Netezza, and the decision to place the Systems Software organization (which manages hardware-related software including IBM's operating systems, system management and virtualization products) into STG.
At STG's "Smarter Computing" analyst conference this year, the melding of STG and SWG was even more clearly apparent. While a few new hardware bells and whistles made their public debut, most were described and defined according to how they contributed to or enhanced the overall performance and business value of broader IBM offerings. Plus, this approach is fundamental to the three pillars of Smarter Computing discussed by IBM STG SVP Rod Adkins:
Tuned to the Task -- IBM systems optimized for the characteristics of specific workloads
Designed for Data -- IBM business analytics, business intelligence and big data solutions, which extend beyond traditional information sources
Managed With Cloud Technologies -- IBM product and service offerings designed to evolve clients' data centers while improving service delivery
In this way, the messaging at the STG event was quite similar to IBM SWG's "Connect" analyst conference the week before. But at Connect, the emphasis was on system "capabilities" -- how IBM finely tunes software and hardware for specific applications or processes. In comparison, the STG event's thematic focus was "workloads" and the importance of supporting them with the right combinations of hardware, middleware and systems software. This, in turn, cast a spotlight on IBM's three distinct server platforms and the appropriate business problems they aim to solve. It also illuminated longer-term IBM Research projects and growth market development efforts.
Workloads as a Theme
During his keynote presentation and Q&A session, SVP Mills stressed the critical importance of workload optimization and system integration to IBM clients and the company itself, and for good reason: The Producer Price Index of compute products shows a 9X increase in overall performance during the past 15 years.
At the same time, enterprises' server spending has remained essentially steady, while electrical power and systems/facilities management costs have gone through the roof. This places enormous pressure on businesses trying to hold the line on data center costs but it also, as Mills put it so succinctly, "puts the squeeze on hardware-centric vendors. In fact, unless they institute software, management and efficiency improvements, hardware vendors are doomed."
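As a quick sanity check on the economics Mills described, the 9X performance gain over 15 years cited above works out to a compound annual improvement of roughly 16 percent -- a back-of-envelope sketch (the 9X and 15-year figures come from the text; the calculation is mine):

```python
# Implied compound annual improvement rate for a 9x overall
# performance gain achieved over 15 years.
overall_gain = 9.0
years = 15

# Solve gain = (1 + r)^years for r.
annual_rate = overall_gain ** (1 / years) - 1
print(f"{annual_rate:.1%}")  # roughly 15.8% per year
```

At a steady ~16 percent annual gain, flat server budgets buy substantially more compute each year, which is exactly the squeeze on hardware-centric vendors that Mills described.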
IBM has innovated in all these areas for years, but the next major step lies in further optimizing workload performance for systems of every kind, whether they are general purpose servers or specialized appliances. In IBM's view, this process can happen most anywhere: in the customer's data center (implemented by IT staff or with the help of IBM service professionals); in the factory (for both individual clients' and special use cases); or by design (appliances developed/integrated for specific applications and processes, like Netezza's Data Warehouse Solution).
Not surprisingly, Mills and other company executives expressed strong opinions about the appropriateness of particular server platforms for specific business applications and processes. The System z mainframe's unparalleled online transaction processing (OLTP) and Power Systems' muscular database performance were frequently cited examples. However, I was happy to see that the company's System x (x86) solutions had a much higher profile than they did at last year's STG event, both for general purpose computing applications and as central parts of IBM's workload-optimized system and appliance strategy.
The Workload Conundrum
At the same time, Mills' impassioned comments drifted toward what might be called the 'workload conundrum' when he noted "the constant frustration of seeing buying decisions based on issues unrelated to the dictates of computer science," and cited customers "continuing to drive toward x86 [solutions] despite the technical superiority of IBM's mainframe systems." Mills also noted the irony that so much, if not most, cloud development -- with its inherently shared design that should play to the mainframe's strengths -- centers instead on scale-out x86 technologies.
There are numerous good reasons for IBM and its executives to focus attention on System z, especially since competing vendors spend so much time lambasting the mainframe as an outdated/outmoded platform. While those opinions have had little impact on enterprise customers, you can't be too vigilant when it comes to key products. Plus, there are strong commercial arguments for IBM to constantly talk up its scale-up System z and Power Systems.
However, there are also inherent dangers in fighting the marketplace's gravitational forces. Yes, scale-up systems offer highly attractive margins. But x86 server volumes dominate virtually every major global computing market. Despite the "dictates of computer science," over-amplifying the value of traditional mainframe technologies or attempting to inject System z into areas like cloud, where it has achieved minuscule success compared to commodity servers, can make IBM sound like a cranky geezer waxing poetic about the good old days.
Workloads and Growth/Future Markets
Focusing on scale-up systems may also affect IBM's results in emerging growth markets, including Africa, where the company has great hopes and is working hard on development efforts. During a fascinating presentation, IBM's GM of Middle East and Africa, Takreem El Tohamy, noted that while clients want cutting-edge IT and are increasingly buying high-end systems (especially in communications, banking and government), low-end x86 solutions are critical to IBM's efforts.
That could be problematic. While IBM has a full complement of x86 products, many of the company's best-known System x solutions focus on higher-end workloads. In fact, IBM defines its System x strategy as "defining next-generation x86." Plus, the company appears to assume that customers will eventually and happily abandon their favored x86 vendors for higher-end IBM systems. That contradicts the commonly held belief among vendors that clients stick with the vendors that help them grow.
At the same time, IBM's workload strategy is clearly apparent in future-focused efforts, including developing commercial Watson solutions and in cognitive computing research. For anyone who spent 2011 hiding in a bunker, Watson is an advanced question-answering system based on IBM's Power 750 servers, which can respond to natural-language queries. That remarkable ability was shown to great effect on the TV game show "Jeopardy!" when Watson thoroughly trounced two past grand champions. The STG Smarter Computing conference featured several sessions on workplace scenarios for these systems, including an ongoing project at Columbia University, where Watson is being used to aid medical diagnoses.
The last formal session of the STG event was a presentation by Dharmendra Modha, who manages IBM Research's cognitive computing project. While early efforts focused on creating computer simulations of increasingly complex animal brain activity, IBM recently created its first cognitive silicon -- a CPU designed to replicate the activities of brain cells and synapses.
These chips should play a critical role in the development of systems that emulate the brain's computing capabilities, efficiency and power usage without being traditionally programmed. Plus, while a fascinating project in its own right, the work of Modha and his team could eventually end up in numerous IBM commercial efforts, including its Smarter Planet solutions.
I came away from IBM STG's Smarter Computing IT analyst forum with a better understanding of why the company believes consolidating its hardware and software organizations was both practically and strategically crucial. Whether IBM's go-to-market messaging focuses on "workloads" or "capabilities," the aim is essentially the same -- to develop and deliver optimized, efficient IT solutions that offer customers superb compute performance and provide them the means to transform their businesses for the better.
Creating highly integrated solutions requires closely integrated organizations, which IBM and its executives demonstrated amply. Not all the group's efforts are fully formed -- many are early in their development cycles or are even in a transitional state. But the overall message from Westchester County was that IBM STG is on track and heading in the right direction.