The Virtualization Challenge, Part 2: Making the Case

Part 1 of this five-part series defines the various types of virtualization. This installment looks at the business reasons for virtualizing your IT system. How will virtualization contribute to your top or bottom line? How much can you save?

The function of IT is to support business, and every IT decision must be business-driven. So, what kind of business benefits can virtualization provide?

It can increase server utilization; help cut costs; let users leverage the multi-core processors common in data centers; give applications mobility, improving disaster recovery; reduce downtime and hence disruption of service; make application testing and deployment cheaper, faster and easier; and let an enterprise pool its servers, maximizing their use and reducing the need to buy additional hardware.

Wasted Server Capability

In most data centers, servers tend to be only 10 to 15 percent utilized, IDC’s John Humphreys told TechNewsWorld.

There are two reasons for this: First, IT needs to plan for excess capacity to meet peak period demands; and second, each individual business department has traditionally had its own servers in the data center.

“The way it’s always done in data centers is to overprovision, and then there’s under-utilization,” DataSynapse’s Gordon Jackson told TechNewsWorld.

“Say I allocate three servers to an application and, if I did everything right — load testing, QA, and if my (internal) customers behave the way I expect them to during peak hours, that’s fine; but after peak hours until the rush begins the next day, I may only need one server for those customers,” he added.

If there is a miscalculation and, say, two more servers are required during peak hours, then IT has to get the business department involved to buy them; and, during off-peak hours, four servers will now be idle.

Increasing Server Utilization

“When you virtualize server hardware, you can have multiple systems running on a single host or server and increase utilization to 30 or 40 or 50 percent,” IDC’s Humphreys said. “That also reduces the hardware footprint, which cuts capital costs, cooling and power costs, and eventually running costs.”

Some corporations have consolidated 150 physical machines onto “maybe 10 or 15” HP ProLiant blade servers, HP’s Mark Linesch told TechNewsWorld.

That raises the utilization rate “from 20 to 80 percent” and “frees up both human and financial capital” so the enterprise can “put them to better use in launching new business initiatives,” Linesch added.

Cutting Costs

The cost savings are phenomenal: Some enterprises had to decide between “making a (US)$20 million investment in a new data center” and investing “a few hundred thousand dollars in virtualization,” IDC’s Humphreys said. “So they can go from 900 servers to maybe 200, and that’s maybe $4 million in savings,” he added.
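The arithmetic behind figures like these is straightforward. A minimal sketch follows; the per-server cost and the per-workload utilization numbers are assumptions for illustration, not vendor data:

```python
# Back-of-the-envelope consolidation model. All dollar and utilization
# figures are illustrative assumptions.

def consolidation_savings(physical_before, physical_after, cost_per_server):
    """Capital saved by retiring physical servers after virtualization."""
    retired = physical_before - physical_after
    return retired * cost_per_server

def utilization_after(vms_per_host, utilization_per_vm):
    """Rough host utilization once several workloads share one server."""
    return min(1.0, vms_per_host * utilization_per_vm)

# 900 hosts consolidated onto 200, at an assumed $5,700 per server,
# lands near the $4 million savings cited above.
print(f"Capital saved: ${consolidation_savings(900, 200, 5_700):,}")
# Four workloads that each kept a dedicated server 12 percent busy
# drive one shared host to roughly 48 percent utilization.
print(f"Utilization: {utilization_after(4, 0.12):.0%}")
```

The same model also shows why consolidation has limits: stacking too many workloads pushes the host toward saturation, which is why quoted utilization targets stop around 50 to 80 percent.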

Encapsulation virtualization lets users put multiple virtual machines on one host, each running its own operating system and applications, with each virtual machine isolated from the others, IDC’s Humphreys said. That “lets you take advantage of the multi-core processors in data centers safely and securely,” he added.

Again, this reduces the cost of running a data center because it cuts down on physical server requirements, cabling, cooling, electricity consumption and hardware and software maintenance.

Data Mobility Eases Recovery

Virtualization also enables data mobility — the virtual machine is simply a file that can be replicated from one data center to another, making it easy to recover data, Humphreys said.

This lets enterprises restore more of their data.

“Most enterprises protect the most crucial 20 percent of their assets; they don’t protect the rest because data protection’s so expensive,” Humphreys said. However, because many government organizations now require their partners to make their data centers disaster-resilient, “you’re starting to see people use virtualization as a low-cost alternative to disaster recovery,” Humphreys added.
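Because a virtual machine is just a set of files, protecting it can be as simple as copying those files to a second site. A minimal sketch, assuming a per-VM directory layout; the paths and file names are hypothetical:

```python
# File-level VM replication for disaster recovery (toy sketch).
# The directory layout and file names are illustrative assumptions.

import shutil
from pathlib import Path

def replicate_vm(vm_dir: Path, dr_dir: Path) -> list:
    """Copy every file of a VM (disk image, config) to a DR location."""
    dr_dir.mkdir(parents=True, exist_ok=True)
    copied = []
    for f in vm_dir.iterdir():
        if f.is_file():
            copied.append(Path(shutil.copy2(f, dr_dir / f.name)))
    return copied

# Example: mirror a VM's files to a (hypothetical) recovery site.
primary = Path("/var/vms/payroll")
recovery = Path("/mnt/dr-site/payroll")
# replicate_vm(primary, recovery)  # uncomment on a real system
```

In production this copy would be incremental and scheduled, but the principle is the same: the whole machine, operating system and all, travels as ordinary files.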

Reduced Downtime Through Mobility

Another benefit of mobility is that it lets users migrate running virtual machines live to other physical servers without any downtime.

“Now, IT can do hardware upgrades and swap boxes during normal business hours instead of Friday nights at 11 p.m. like they used to, and they don’t have to take down the application,” Humphreys said.

Hence, there is no disruption to users.

This is a way to avoid unplanned downtime, Humphreys said. “If you have insight into whether your hardware’s going to crash, you just migrate it all to other virtual machines on other servers and, when your hardware goes down, it doesn’t affect service levels,” he explained. “Now you have a continuity solution.”
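The evacuate-before-failure pattern Humphreys describes can be sketched as a simple scheduling exercise. The `Host` class and `evacuate` function below are hypothetical stand-ins, not a real hypervisor API:

```python
# Toy model of draining a failing host: move its VMs to the
# least-loaded healthy hosts. Classes and names are illustrative.

class Host:
    def __init__(self, name, vms=None):
        self.name, self.vms = name, list(vms or [])

def evacuate(failing_host, healthy_hosts):
    """Spread a failing host's VMs across healthy hosts, least-loaded first."""
    moved = []
    for vm in list(failing_host.vms):
        target = min(healthy_hosts, key=lambda h: len(h.vms))
        target.vms.append(vm)        # a real hypervisor would live-migrate here
        failing_host.vms.remove(vm)
        moved.append((vm, target.name))
    return moved

bad = Host("rack1-3", ["mail", "crm", "web"])
pool = [Host("rack1-4", ["db"]), Host("rack1-5")]
print(evacuate(bad, pool))   # each VM lands on whichever host is least loaded
```

Real platforms perform the move while the guest keeps running, which is what lets the swap happen during business hours.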

Business continuity — the ability to keep your business up and running with minimal data loss and downtime — is a huge factor in an enterprise’s success.

Easing Testing and Development

Testing and development has traditionally required a big rack of servers, all of which had to be set up manually.

If, for example, you wanted to test Windows machines running Microsoft Exchange e-mail on bare metal, with the computers connected to each other in a star network, to an e-mail server and to central storage, you had to install the operating system and applications manually and physically connect the cables, Scalent’s Kevin Epstein told TechNewsWorld.

Then, if you wanted to test the same machines running Linux and Solaris, “you had to turn off the machines, reinstall the software, change the cabling and networking, install new IP addresses and so on,” Epstein added.

Scalent’s software automates the process.

“As any physical machine boots up, we can tell it to boot up, and as they boot or reboot, they issue a standard network boot request asking whether to boot from the local drive or network storage and we respond to that request and set up an IP address, a storage address, then boot whatever OS and application stack you want,” Epstein explained.

“The hard part of setting up any system of servers is getting the right network and storage connections, not booting up the server,” he added.
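The dispatch step Epstein describes can be modeled as a lookup: a machine's boot request arrives, and a controller answers with the address, storage and software stack that machine should run. The table below and its field names are illustrative assumptions, not Scalent's actual protocol:

```python
# Toy model of network-boot dispatch: map each machine to the stack
# it should boot. All entries are hypothetical.

ROLES = {
    # MAC address           -> (IP address, storage target, OS + app stack)
    "00:1a:4b:aa:00:01": ("10.0.0.11", "san0:/vol/exchange", "windows+exchange"),
    "00:1a:4b:aa:00:02": ("10.0.0.12", "san0:/vol/mail-lx", "linux+postfix"),
}

def answer_boot_request(mac):
    """Respond to a (simulated) network boot request from one machine."""
    if mac not in ROLES:
        return {"boot": "local"}       # unknown machine: boot its own disk
    ip, storage, stack = ROLES[mac]
    return {"boot": "network", "ip": ip, "storage": storage, "stack": stack}

print(answer_boot_request("00:1a:4b:aa:00:01"))
# Re-pointing the same hardware at a different stack is just a table edit:
ROLES["00:1a:4b:aa:00:01"] = ("10.0.0.11", "san0:/vol/mail-sol", "solaris+sendmail")
print(answer_boot_request("00:1a:4b:aa:00:01"))
```

The point of the sketch is the last two lines: retargeting a server means editing a record, not recabling a rack.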

Pooling Resources

One of the reasons data centers are over-provisioned is that, traditionally, each business unit in an enterprise has had its own servers.

DataSynapse’s software is changing that: By decoupling applications and application services from the operating system, it lets enterprises pool their servers for maximum utility, Jackson said.

“For example, if I have a WebLogic application server that I run in, say, three instances to satisfy a cluster for my system and have an infrastructure of 15 servers, today I may run the instances on servers one, two and three, and tomorrow on servers seven, eight and 11,” Jackson explained.

When an application is virtualized, adding computing power to handle surges in demand is easy: Users can spin up another virtual instance of the application when necessary, instead of having to bring up a standby physical server that has been allocated to that application.

When the load goes down, the virtual server instance is dynamically released back into the enterprise-wide pool of resources to be allocated elsewhere as needed.

That will let enterprises buy servers to meet their needs throughout the business rather than for individual departments, thus stretching the corporate dollar further.

“Now IT can say, we’ll need another four or five servers to serve all the business units, and that’s different from saying they’re rolling out more applications next year and will need more servers for each business unit,” Jackson said.
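The pooling behavior described above can be sketched as a shared pool that grows and shrinks an application's instance count with demand. The `ServerPool` class and the 100-requests-per-second capacity figure are illustrative assumptions:

```python
# Toy model of an enterprise-wide server pool shared by all applications.
# Capacity figures and class names are illustrative assumptions.

class ServerPool:
    def __init__(self, size):
        self.free = size
        self.running = {}                  # app name -> instance count

    def scale(self, app, demand, per_instance=100):
        """Grow or shrink an app's instances to cover `demand` requests/sec."""
        needed = -(-demand // per_instance)          # ceiling division
        current = self.running.get(app, 0)
        if needed > current:
            delta = min(needed - current, self.free)  # capped by free servers
        else:
            delta = needed - current                  # shrink, freeing servers
        self.free -= delta
        self.running[app] = current + delta
        return self.running[app]

pool = ServerPool(size=15)
print(pool.scale("weblogic", 250))   # peak demand: grows to 3 instances
print(pool.scale("weblogic", 80))    # off-peak: shrinks back to 1
print(pool.free)                     # freed capacity returns to the pool
```

Because every application draws from and returns to the same pool, capacity planning becomes a question of total demand across the business rather than per-department peaks.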

The Virtualization Challenge, Part 1: Many From One

The Virtualization Challenge, Part 3: No Bed of Roses

The Virtualization Challenge, Part 4: Implementing the Environment

The Virtualization Challenge, Part 5: Virtualization and Security
