Over the past three-plus decades, it’s hard to think of a business computing technology that has driven more fundamental value than virtualization, and for a very simple reason: Because hardware evolves at a far faster pace than software, systems tend to deliver far more performance than their workloads require, leaving them drastically underutilized.
That may not seem like a big deal. After all, isn’t it better to have more power than you need? So that, say, when you’re driving up a freeway on-ramp you can accelerate and merge into traffic safely?
If computers had the equivalent of gas pedals, the discussion would be moot, but servers typically run at a single “speed” no matter what the requirements of the application happen to be. Plus, the ratio of available compute power to what is actually needed makes running a single application on a standalone server the computing equivalent of driving to the grocery store in a Formula One car or chauffeuring the kids to soccer practice with a Peterbilt big rig.
Consider also the larger costs of grossly inefficient system utilization. For many servers and applications, utilization can be considerably less than 10 percent — meaning that more than 90 percent of the energy used to power those systems is essentially wasted.
Multiply that stark inefficiency across the thousands or tens of thousands of servers and workloads in an enterprise data center — along with the CRAC systems needed to cool the lot — and you’re talking about real money.
Enhancing system efficiency and utilization has always been at the heart of IBM’s Virtual Machine OS development, which first became available on the company’s S/370 mainframes in 1972 and has continued through later iterations, including current z/VM solutions.
Those goals were also core to IBM’s Power virtualization offerings, as well as to competing solutions developed by HP and Oracle (Sun), and to x86-based offerings from VMware, Microsoft (Hyper-V) and vendors leveraging the open source Xen and KVM hypervisors.
How does virtualization achieve these benefits? By allowing individual system resources, including CPU, memory and storage, to be divided and shared among multiple virtual machines, each of which supports its own OS and applications.
As a result, the workloads from multiple servers can be consolidated onto far smaller numbers of virtualized servers. Plus, the more powerful and capacious the virtualized system, the more VMs and workloads it can successfully accommodate.
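The consolidation arithmetic behind that claim is straightforward. As a rough sketch (all figures below are hypothetical assumptions for illustration, not data from the article or from IBM), the number of virtualized hosts needed to absorb a fleet of underutilized standalone servers falls directly out of the ratio between average utilization and the target utilization of the consolidated hosts:

```python
import math

def hosts_needed(standalone_servers: int,
                 avg_utilization: float,
                 target_utilization: float,
                 headroom: float = 0.1) -> int:
    """Estimate how many virtualized hosts can carry the same aggregate load.

    Assumes hosts and standalone servers have comparable per-machine capacity;
    'headroom' reserves a fraction of each host for spikes and failover.
    """
    total_load = standalone_servers * avg_utilization      # aggregate demand, in server-equivalents
    usable_capacity = target_utilization * (1 - headroom)  # per-host capacity after headroom
    return math.ceil(total_load / usable_capacity)

# Example: 1,000 servers idling at 8% utilization, consolidated onto hosts
# driven at 80% utilization with 10% headroom reserved.
print(hosts_needed(1000, 0.08, 0.80))  # → 112
```

Under those illustrative assumptions, roughly nine out of ten physical servers disappear, which is the kind of ratio that makes the energy and cooling savings described above material.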
In the case of IBM’s zEnterprise and Linux on System z mainframe solutions, that can translate into single systems supporting hundreds or even thousands of VMs running the company’s z/OS, SUSE Linux Enterprise Server or Red Hat Enterprise Linux, plus their attendant applications. Better still, fully leveraging System z virtualization technologies allows those same systems to run at or near 100 percent utilization.
That’s where CSL International and IBM come in. If you think provisioning, monitoring, managing and maintaining the virtualized resources on IBM mainframes can be complex, you’d be entirely correct. Yet simplifying those processes and increasing the productivity of mainframe sysadmins have been among CSL International’s primary goals since the company’s founding in 2004.
In fact, the company describes itself as a provider of “a vast spectrum of services in the System z and Enterprise IT world,” including the CSL-WAVE solutions highlighted in IBM’s CSL acquisition announcement.
Overall, CSL International should be a perfect fit for IBM, and the deal is good news for both companies and their mutual customers. The acquisition should also bolster IBM’s mainframe modernization and simplification efforts.
Those have certainly played key roles in the ongoing success of System z in traditional enterprise markets, but they are likely to be even more critical as IBM continues to promote its zEnterprise and Linux on System z as ideal platforms for demanding cloud computing infrastructures and services.