IBM has found one way to sidestep the mainframe vs. distributed computing dilemma by simply calling its mainframes something else.
Today, IBM refers to its larger processors as large servers and emphasizes that they can be used to serve distributed users and smaller servers in a computing network. There are now high-performance software solutions designed to leverage the performance, security and reliability of the mainframe in the world of Web interfaces and SOA (service-oriented architecture). One notable mainframe application for distributed environments is IBM WebSphere Portal for z/OS, which brings IBM's market-leading enterprise portal software to z/OS, the mission-critical mainframe operating system that extends OS/390 to IBM System z.
In IBM’s latest effort to keep Big Iron relevant in a fast-changing computing world, it continues to retool mainframe technology for small- and medium-size businesses with a “Little Iron” approach.
Steve Mills, senior vice president and group executive of IBM’s software unit, called it the world’s most powerful enterprise computer when IBM announced its next generation of mainframes — the System z10 Enterprise Class (EC) — in February. The z10 system steps up competition with HP and Dell for business customers, allowing businesses to better share, track and automate information among millions of users, according to IBM.
IBM redesigned the machines to better challenge server networks, which can cheaply deliver Web pages and computer files. The low-end z10 model starts at just under US$1 million, though a fully loaded model with 64 physical processors can cost multiple millions. Thus, the z10 faces competition from nimbler and less expensive server computers from HP, Dell and Sun Microsystems, among others, whose top-line server computers cost around $250,000 and perform many of the same functions as IBM's mainframes.
Brad Day, vice president and principal analyst at Forrester Research, said there’s a pent-up demand for the new system, especially from users in the retail and financial services industries that need the additional capacity supported by the z10.
“The mainframe was viewed as legacy technology, but now it’s consolidating new application workloads once handled by networks of smaller computers,” Day told TechNewsWorld. “In addition to satisfying the legacy base, IBM seems intent on marketing the mainframe as an alternative to ‘mid-frame’ or mid-range servers.”
Day said he expects a “baby mainframe” to be available within six months at a lower price point. In the past, IBM has sold these stripped-down versions of the mainframe for between $150,000 and $200,000.
For existing users, said Richard Partridge, an analyst at IT research firm Ideas International, the performance jump offered by the z10 should be enough to “slow down anybody who was thinking of abandoning the mainframe because they thought it was too sluggish.”
Mainframes or Supercomputers?
Mainframes occupy a market position between supercomputers and "minicomputers," the medium-scale, centralized multi-user systems now more commonly termed "midrange computers" and "servers" for PC networks. However, with the general increase in computing power, the differences between the various systems are becoming less marked.
Nonetheless, the distinction between mainframes and supercomputers at the top of the computer hierarchy is important, according to Peter Ungaro, CEO of Cray, a Seattle-based supercomputer manufacturer that designs systems for the scientific and engineering marketplace.
Mainframes are designed to excel at business computing, which typically involves hundreds or thousands of transactions per second, explained Ungaro. By contrast, supercomputers are designed to run ultra-large scientific and engineering problems that typically take not seconds but hours, days, weeks, months or even years to complete. Supercomputers are used primarily for scientific research and for industrial research and development to create products ranging from automobiles to aircraft to golf clubs. As an example, Boeing used 800,000 hours of time on Cray supercomputers while designing the Boeing 787 Dreamliner aircraft.
“Although it’s true that some companies are trying to leverage the same systems used for business operations to run scientific and engineering simulations, those users would have to make design trade-offs to serve two extremely different markets and end up with a very suboptimal design point,” Ungaro told TechNewsWorld. “Imagine trying to play tennis with a baseball bat. The best option is to purposely build systems for one function or the other.”
Mainframes or Servers?
At the heart of the case against the mainframe is the server. As the server solutions available today morph to become more like the mainframe, distributed systems are becoming more open, offering increasingly agile software architectures, and costing less to run and maintain than mainframes. They offer traditional mainframe benefits like availability, scalability and high server utilization. As mainframe technologies trickle down to distributed systems, those systems are getting better at hosting mainframe-class applications.
Today, companies are migrating workloads off the mainframe. Historically, the move was to Unix, but more recently it is to Windows. The types of migrations to Windows include re-platforming packages (SAP on the mainframe to SAP on Windows), re-hosting (recompiling mainframe COBOL for Windows), re-hosting with automatic transformation (e.g., CICS to ASP.NET), and re-writing/re-engineering (new Windows applications replacing mainframe applications).
At the same time, the mainframe is becoming more like distributed systems. Designs are evolving to incorporate technologies such as Fibre Channel high-speed transport, the InfiniBand input/output architecture and the Java programming language.
Robert Frances Group (RFG) contends that mainframes can deliver processing power more efficiently than standard servers. In a white paper titled “Mainframe Computing and Power in the Data Center” from last year, RFG reports “Mainframe systems consume less power, both in absolute and relative terms [than standard servers]. Typically, mainframe power densities are less than half of those of current rack and blade distributed systems. When looking at like workloads, the amount of energy consumed falls precipitously, in some cases the costs associated for power needed for an application are reduced by a factor of 600.”
The largest impact on the mainframe market, relative to distributed computing, comes not from any lack of support for or evolution of the mainframe, but from the continued evolution of other platforms to levels that provide "good enough" functionality. "Good enough" distributed systems based on Unix and Windows are eroding the low end of the mainframe installed base. Responding to this reality, IBM, Unisys and others are moving to more open, industry-standard technologies. The mainframe still firmly holds its edge in complex environments. But the battle for the midrange — applications of up to 1,000 MIPS (million instructions per second), where the majority of mainframe applications fall — has already begun.
“Mainframes will continue to be used in environments that require specialized solutions that match the client’s business,” Bill Maclean, VP of ClearPath and AB Suite at Unisys Systems and Technology, told TechNewsWorld. “This includes specialized applications developed by the client and vendor-developed applications designed for large-scale requirements.”
A 2007 report from Ovum, an advisory services and consulting firm, sees no ambiguity in the future of the mainframe. “IBM’s mainframe technology has been referred to as a legacy platform that was on its last legs and no longer a strategic platform,” the report stated. “Challenged by more commodity solutions utilizing x86/x64 or RISC (reduced instruction set computer) technologies, the longevity of the mainframe has long been forecasted to be near its end. However, the realities of large investments in core application deployments on the platform and its long-heralded reliability, availability, serviceability, security, backwards software compatibility, efficient environmentals and virtualization capability, have rendered this prognostication moot.”
There are now only about 10,000 mainframes left in the world, according to Reg Harbeck of CA in “Strategic Vendor Consolidation and the Future of the Mainframe,” a 2006 white paper.
“Actually there have never been more,” Harbeck noted, “yet that’s been a large enough number to be the computing cornerstone of the world economy. [*correction] The mainframe’s not going away any time soon. For the preponderance of mainframe shops, including some of the largest organizations on earth, the scale and deep-rootedness of their mainframe contexts precludes a simplistic hop to other platforms. And there are enough such organizations to keep the mainframe alive indefinitely.”
*ECT News Network editor’s note: The original publication of this article left out the word “never” in the quotation cited from Reg Harbeck’s white paper. The erroneous statement read as follows: “‘Actually there have been more,’ Harbeck noted.” The excerpt is now correct. We regret the error.