What is virtualization anyway? This five-part series on a trend that is reaching into every corner of the technology landscape addresses that fundamental question, along with many of the complex issues that flow from it.
The short answer: Virtualization separates the physical layer (the servers, PCs or whatever box you're running your operating system and applications on) from the logical layer (the operating systems and applications themselves).
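That separation can be pictured as little more than a lookup table between the two layers. The toy Python sketch below (the names and structure are purely illustrative, not any vendor's API) shows why the split matters: a logical workload can be re-pointed at different physical hardware without the workload itself changing.

```python
# Toy model of the physical/logical split. The logical layer (workloads)
# is tied to the physical layer (hosts) only through a mapping table.
placement = {"crm-app": "host-a", "mail-server": "host-a"}

def migrate(vm, new_host):
    """Move a workload to new hardware: only the mapping changes."""
    placement[vm] = new_host

migrate("crm-app", "host-b")
print(placement["crm-app"])      # host-b
print(placement["mail-server"])  # host-a (untouched)
```

Because nothing in the workload's own definition names the hardware, moving it is a bookkeeping change rather than a reinstall.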
You can virtualize pretty much any aspect of IT, as the list of topics to be covered at the third Virtualization Conference, to be held in New York June 23-24, demonstrates. It looks like this:
I/O Virtualization
Application Virtualization
Desktop Virtualization
Network Virtualization
Physical to Virtual (P2V) Migration
Server Virtualization
Storage Virtualization
Virtual Machine Automation
File Virtualization
Management Applications, Tools and Utilities
Virtualization Scripts and Procedures
Paravirtualization
Real-time Virtualization
Device Virtualization
Thin Client Solutions
Virtual Desktop Infrastructure
Greening of IT
Vista and Virtualization
Microsoft's Remote Desktop Protocol
Virtual Machine Desktops
Security and Performance Issues
Virtualization Security Audits
Utilization and Performance
Virtual Backup
Systems Integration
Cloud Computing
High Availability (HA)
KVM (Kernel-based Virtual Machine)
Virtual Appliances
Tough to Pin Down
So, where do we begin?
So far, the most successful vendors in terms of winning the hearts and minds of users are those who virtualize the desktop.
Most people think virtualization is “just CPU virtualization as applied by hypervisor companies, including Microsoft, VMware and Xen,” Kevin Epstein, vice president of products for Scalent Systems, told TechNewsWorld. “But it’s more than that.”
It certainly is: “There is no universal definition of virtualization as it means many things to many people and can apply to the entire infrastructure — from data center virtualization, desktop virtualization, storage virtualization, etc.,” said Lionel Lamy, research director at IDC.
Why is this? Because vendors in every area of IT are trying to virtualize their products in order to catch the wave of interest in this area.
“Cisco is in the business of network virtualization, database vendors are talking about database virtualization, caching vendors are talking about virtualizing data caching; and there are vendors looking to virtualize I/O (input/output),” Gordon Jackson, virtualization evangelist at DataSynapse, told TechNewsWorld.
What Are Hypervisors?
A hypervisor is a virtualization platform that lets users run multiple operating systems on one host computer simultaneously.
You can have a Type 1 (also called native or bare-metal) hypervisor, or a Type 2 (hosted) hypervisor.
The Type 1 hypervisor is software that runs directly on the hardware, with no underlying host operating system; guest operating systems run on top of it.
Examples are Oracle VM, VMware’s ESX Server, IBM’s LPAR, Microsoft’s Hyper-V, Sun Microsystems’ Logical Domains, TRANGO, and Xen.
Then you have KVM, which turns a complete Linux kernel into a hypervisor, and Hitachi’s Virtage hypervisor, which is embedded in the platform’s firmware.
The granddaddy of all Type 1 hypervisors was IBM’s CP/CMS, developed in the 1960s.
The Type 2 hypervisor is software that runs within an operating system.
Examples are VMware Server; VMware Workstation; VMware Fusion; QEMU, which is open source; Microsoft’s Virtual PC and Microsoft Virtual Server; and SWsoft’s Parallels Workstation and Parallels Desktop.
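The difference between the two types comes down to where the hypervisor sits in the software stack. A minimal Python sketch of that layering (purely illustrative; no actual product works this way internally):

```python
def hypervisor_type(stack):
    """Classify a software stack, listed bottom-up, by where the
    hypervisor sits relative to the hardware."""
    position = stack.index("hypervisor")
    if position == 1 and stack[0] == "hardware":
        return "Type 1 (bare-metal)"
    return "Type 2 (hosted)"

# ESX-style: the hypervisor boots directly on the hardware.
print(hypervisor_type(["hardware", "hypervisor", "guest OS"]))
# VMware Workstation-style: the hypervisor runs as an application
# inside a conventional host operating system.
print(hypervisor_type(["hardware", "host OS", "hypervisor", "guest OS"]))
```

The extra "host OS" layer is why hosted hypervisors are easier to install but generally slower than their bare-metal counterparts.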
The Virtualized Business Future
Mark Linesch, HP’s vice president for infrastructure software, takes a more futuristic view of virtualization.
HP considers virtualization to be an essential part of “the next-generation data center built with a set of technology enablers — modular systems, the ability to virtualize server, storage, network resources; the ability to manage and automate core IT processes in relationship to the business,” he told TechNewsWorld.
Linesch sees virtualization as "a way to break down the IT silos many organizations created when they automated the human resources, supply chain and customer relationship pieces (which were fixed to physical resources) and move to a shared, pooled environment where you have the flexibility to respond to rapidly changing business dynamics."
Major Unix vendors — Sun Microsystems, HP, IBM and Silicon Graphics — have been selling virtualized hardware since before the turn of the century. Generally, these have been large, multimillion-dollar systems, although some mid-range systems such as IBM’s System P servers and Sun’s CoolThreads T-series servers were also included.
Multiple operating systems have also been modified to run as guests on Sun's Logical Domains hypervisor. These include Solaris, the Linux variants Ubuntu and Gentoo, and FreeBSD.
At IBM, this approach was known as "logical partitioning," or LPAR, and was available on the S/390, zSeries, pSeries and iSeries systems.
Moving Down to the x86s
The mantra of all detectives is, "Follow the money." It applies to the high-tech world too: The high profits vendors reaped from servers led to the introduction of hypervisors for the x86 and x64 worlds, where open source projects such as Xen, along with hypervisors built on the Linux and Solaris kernels, have driven virtualization forward.
VMware, Parallels Workstation, and Parallels Desktop for Mac are among the hypervisors in this area.
“Vendors hadn’t figured out how to virtualize Intel or AMD-based systems until recently, and now, 90 percent of the market that was closed to them is open,” John Humphreys, IDC’s vice president of virtualization research, told TechNewsWorld.
The problem is that the x86 architecture is particularly difficult to virtualize: Full virtualization on the x86 requires complex hypervisors and degrades runtime performance.
The solution is paravirtualization, in which the guest operating system is modified to make explicit calls to the hypervisor rather than executing privileged machine instructions, which the hypervisor must otherwise trap and simulate one at a time.
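The performance argument can be sketched with a toy cost model: under full virtualization every privileged instruction triggers its own trap-and-emulate round trip, while a paravirtualized guest batches work into far fewer explicit hypercalls. All costs and the batch size below are invented for illustration.

```python
TRAP_COST = 10       # illustrative cost of one trap-and-emulate round trip
HYPERCALL_COST = 12  # one explicit, slightly heavier call into the hypervisor

def full_virtualization_cost(privileged_ops):
    # Every privileged instruction traps into the hypervisor separately.
    return privileged_ops * TRAP_COST

def paravirtualized_cost(privileged_ops, batch_size=8):
    # The modified guest batches requests into far fewer hypercalls.
    hypercalls = -(-privileged_ops // batch_size)  # ceiling division
    return hypercalls * HYPERCALL_COST

ops = 1000
print(full_virtualization_cost(ops))  # 10000
print(paravirtualized_cost(ops))      # 1500 (125 hypercalls * 12)
```

Even though a single hypercall costs more than a single trap in this model, batching makes the paravirtualized total far smaller, which is the trade paravirtualization buys in exchange for having to modify the guest.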
Many CPUs Make Light Work
Most virtualization products virtualize the software, letting users reduce the amount of hardware they need by running several instances of an operating system or application on a single box, or a very few boxes.
Digipede Technologies does just the opposite: It virtualizes the hardware, letting users run an application over multiple CPUs at once.
“Our clients require supercomputing power; they don’t care how many machines their application runs on, if they need a 500-CPU box, the Digipede Network makes it look to them like they’re using one 500-CPU box,” Daniel Ciruli, the company’s director of products, told TechNewsWorld.
This approach is only suitable for compute-intensive applications that make lots of calls to the operating system, and most of Digipede’s clients are in the financial and housing industries.
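Digipede's product is proprietary, but the underlying scatter-gather idea can be sketched with Python's standard library: split a compute-heavy job into independent tasks and fan them out across every available CPU. The `simulate` function and task count below are, of course, invented for illustration.

```python
from multiprocessing import Pool

def simulate(seed):
    """Stand-in for one compute-intensive task, e.g. a Monte Carlo run."""
    x = seed
    for _ in range(10_000):
        x = (x * 1103515245 + 12345) % 2**31  # simple deterministic churn
    return x % 100

if __name__ == "__main__":
    # Pool defaults to one worker process per CPU, so the 32 independent
    # runs are spread across all cores and the results gathered in order.
    with Pool() as pool:
        results = pool.map(simulate, range(32))
    print(len(results))  # 32
```

One worker per CPU on a single box is the small-scale analogue of what the article describes: aggregating many processors so the application sees one large compute resource.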
Main Areas of Virtualization
There are four key IT domains in which virtualization is taking place, Forrester analyst Galen Schreck said in a paper, “The Virtualization Imperative.” These are servers, storage, networks and end-user clients.
The pace of virtualization in each of these differs “because each domain is managed separately, and virtualization is implemented in a different manner and for different reasons,” he said.
Most of these domains have more than one type of virtualization, some of which are complementary.
So, to recap: Virtualization is simply the separation of the logical and physical layers, and you can virtualize almost any part of IT. The key questions to keep in mind are why you are virtualizing, and what it is you are virtualizing.
The Virtualization Challenge, Part 2: Making the Case
The Virtualization Challenge, Part 3: No Bed of Roses
The Virtualization Challenge, Part 4: Implementing the Environment
The Virtualization Challenge, Part 5: Virtualization and Security