The Virtualization Challenge, Part 3: No Bed of Roses

In Part 1 of this five-part series, we defined virtualization; in Part 2, we looked at the business drivers for virtualization. Now we examine the challenges enterprises face when they virtualize their IT environments.

So, we know that virtualizing IT systems can save a lot of time, money and effort, and we know there are plenty of tools available from different vendors.

Yet, as any savvy IT professional will tell you, introducing any new technology will bring about a host of new challenges.

These range from discovering that you don’t know what you have in your data center (a problem first encountered during the ramp-up to Y2K), to lack of cooperation between IT and business units, to unexpected costs, to knowing which applications you are virtualizing and how, to a whole new set of operational problems, and to the crowning glory: increased end-user expectations.

Scoping Out Your System

Remember 1999, when organizations and enterprises preparing for the date change at the turn of the century panicked on discovering they did not know what applications they had, which boxes those applications were running on, and which ones were interlinked with which others?

That set off a wave of asset identification, discovery and analysis.

You’ll face the same set of problems when it comes to virtualization.

“The first thing is to understand your environment, your application portfolio,” Gordon Jackson, DataSynapse’s technology evangelist, told TechNewsWorld.

‘A Paradigm Shift’

As enterprises virtualize their IT assets, they have to ensure their business units and IT are pulling in the same direction. If they don’t, “you won’t get the economies of scale from virtualization technology,” Jackson said.

After IT systems are virtualized, business units will request servers based on the importance of their application to the overall business, and IT will allocate virtual servers from the corporate server pool. If a business unit needs less server space than allocated during a particular month, it will get a rebate.

“We’re looking at a paradigm shift,” Jackson added.
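
To make that model concrete, here is a minimal sketch of how such a chargeback-with-rebate calculation might work. The flat rate, the half-rate rebate rule and the function itself are illustrative assumptions, not DataSynapse’s actual model.

```python
# Hypothetical sketch of a chargeback-with-rebate scheme for a shared
# virtual server pool. The rate and rebate rule are assumptions for
# illustration, not any vendor's actual pricing model.

RATE_PER_SERVER_MONTH = 500.0  # assumed flat monthly rate per allocated server

def monthly_charge(allocated: int, peak_used: int) -> float:
    """Charge a business unit for its allocation, rebating unused servers.

    allocated: virtual servers reserved for the unit this month
    peak_used: the most servers the unit actually ran at once
    """
    unused = max(allocated - peak_used, 0)
    # Assumed policy: rebate half the rate for each reserved-but-unused server.
    rebate = unused * RATE_PER_SERVER_MONTH * 0.5
    return allocated * RATE_PER_SERVER_MONTH - rebate

# A unit that reserved 20 servers but peaked at 14 is rebated on the other 6:
print(monthly_charge(allocated=20, peak_used=14))  # 10000 - 1500 = 8500.0
```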

Unexpected Costs

When enterprises shrink their server farms, they see immediate cost reductions, but the savings aren’t quite as large as they look.

“You still have to manage the servers, so you need additional manpower and tooling, and to refresh your images every time you add to your environment,” Jackson explained.

Virtualization is not a panacea, Kevin Epstein, Scalent Systems’ vice president of marketing, told TechNewsWorld.

“Each piece of virtualization has its own benefits and risks,” he added.

For example, if you use hypervisors to virtualize your systems, you can save power by consolidating applications that used to run on 10 physical servers onto one hardware box running 10 virtual machines.

That has its own problems: “If that one hardware machine fails, I lose 10 virtual machines; and configuration becomes more complex because I have one physical machine that must access all the storage, all the networks accessed by all 10 virtual machines,” Epstein said. “Also, there could be more CPU (central processing unit) overhead because the hypervisor itself is an operating system and takes up CPU space.”
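
Epstein’s trade-off can be put in rough numbers. The sketch below tallies the power saved by consolidation against the “blast radius” of a host failure and the hypervisor’s CPU overhead; the wattage and overhead figures are assumptions for illustration, not measurements.

```python
# Illustrative arithmetic for the consolidation trade-off: fewer boxes
# save power, but each box failure takes out more workloads, and the
# hypervisor itself consumes CPU. All figures below are assumptions.

WATTS_PER_PHYSICAL_SERVER = 400   # assumed average draw per box
HYPERVISOR_CPU_OVERHEAD = 0.10    # assumed 10% of CPU lost to the hypervisor

def consolidation_summary(apps: int, vms_per_host: int) -> dict:
    hosts = -(-apps // vms_per_host)  # ceiling division: hosts needed
    return {
        "hosts": hosts,
        "power_saved_watts": (apps - hosts) * WATTS_PER_PHYSICAL_SERVER,
        # Blast radius: workloads lost if a single host fails.
        "vms_lost_per_host_failure": min(vms_per_host, apps),
        # Usable capacity after hypervisor overhead, in whole-host units.
        "effective_cpu_hosts": hosts * (1 - HYPERVISOR_CPU_OVERHEAD),
    }

# Epstein's example: 10 one-app servers collapsed onto one 10-VM host.
print(consolidation_summary(apps=10, vms_per_host=10))
# {'hosts': 1, 'power_saved_watts': 3600, 'vms_lost_per_host_failure': 10,
#  'effective_cpu_hosts': 0.9}
```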

Virtualizing the Right Apps

Not all applications are suitable for virtualization, and those that are need to be approached correctly.

That means you have to select the right tools from vendors. “You have different software architectures and very different deployment techniques, and that makes it difficult for software developers to apply techniques from one architecture to the other,” Daniel Ciruli, Digipede Technologies’ director of products, told TechNewsWorld.

Also, different types of applications may have to be deployed differently. “Light applications may be deployed on a virtual machine that is running alongside other virtual machines on a physical box; heavier applications may have to be deployed on their own servers, either running in a virtual machine or directly on the operating system,” Ciruli said.

Really heavy applications that are CPU-intensive and make lots of operating system calls, such as financial and scientific packages, “need to be deployed on many servers,” Ciruli added.

Shifting Bottlenecks

You can solve the single-point-of-failure problem of having only one physical machine by adding software like Scalent’s, which automates the dynamic allocation of servers.

However, the bottleneck may then shift to storage or some other part of the environment.

For example, you could have too many virtual instances of an application running. “The application may run 10 times faster on 10 machines, but only five times faster on 20 machines, and on 30 machines, it may actually run slower because too many calls are being made to the database,” Ciruli said.

Or you may have beefed up your server farm by purchasing and installing lots of cheap commodity hardware, which can lower database and network performance, Ciruli said.
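
Ciruli’s numbers trace a curve familiar from contention models: throughput climbs as instances are added, flattens, then falls. The minimal sketch below uses a Universal Scalability Law-style formula with made-up coefficients chosen only to reproduce that shape, not to match any real application’s measurements.

```python
# A minimal model of the "more instances, less speedup" effect: shared
# resources (here, a database) add contention and coherency costs as
# instances multiply. Coefficients are assumptions for illustration.

def speedup(n: int, contention: float = 0.03, coherency: float = 0.005) -> float:
    """Relative throughput of n application instances sharing one database."""
    return n / (1 + contention * (n - 1) + coherency * n * (n - 1))

for n in (1, 10, 20, 30, 40):
    print(f"{n:3d} instances -> {speedup(n):.1f}x throughput")
# Throughput peaks in the mid-teens here, then declines as the shared
# database becomes the bottleneck.
```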

Every time you solve one problem, another will pop up. “It’s a shifting bottleneck,” Epstein said. “There’s always going to be a bottleneck somewhere.”

Virtual Machine Sprawl

With virtualization, as with everything else, there can be too much of a good thing.

“One of the challenges I run into most frequently is virtual machine sprawl,” IDC analyst John Humphreys told TechNewsWorld.

For example, one of his clients had 1,000 servers running 1,000 applications, and reduced the number of servers to 200.

About a year later, that client found itself supporting 1,400 virtual machines.

“That was because, prior to virtualization, it took IT about three to six weeks to get a server up and running and supported,” Humphreys said. “They had to go out, buy servers, rack them and support them.”

After virtualization, it took three hours to set up a new virtual server, and the cost “was basically zero because they had the hardware capacity,” Humphreys added. “They just started handing out virtual machines like candy.”

The customer had failed to decommission and archive virtual machines that were no longer needed, he explained.

“They had no process in place to recognize that a virtual machine has a life of maybe a week but, at the end of that, you need to decommission it and archive it, and if you need it again, you can bring it back up again easily,” he said.
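
The missing process Humphreys describes could be as simple as a periodic sweep of the virtual machine inventory. The sketch below is hypothetical: the record layout, the lease policy and the archive step are assumptions, and a real shop would pull its inventory from its virtualization management tools rather than a hard-coded list.

```python
# Hypothetical decommission-and-archive sweep for virtual machine sprawl.
# Record layout, lease policy and the archive step are illustrative only.

from datetime import date

# Each VM gets a lease when it is handed out; expired leases get swept.
inventory = [
    {"name": "build-test-07", "lease_ends": date(2008, 3, 1), "archived": False},
    {"name": "payroll-prod",  "lease_ends": date(2009, 1, 1), "archived": False},
]

def sweep(vms: list, today: date) -> None:
    """Decommission and archive any VM whose lease has expired."""
    for vm in vms:
        if not vm["archived"] and vm["lease_ends"] < today:
            # In practice this would snapshot the image to cheap storage
            # so it can be brought back up easily if it is needed again.
            vm["archived"] = True
            print(f"decommissioned and archived {vm['name']}")

sweep(inventory, today=date(2008, 6, 15))  # archives build-test-07 only
```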

Users Expect More

Another problem Humphreys’ client faced after virtualization was increased user expectations.

“Now, instead of the gold standard in the company for bringing up a new server being three weeks, it was three hours,” Humphreys said. “That works well when you’re using 10 or 20 virtual machines but not when you’re using 300 virtual machines. IT was making promises they couldn’t keep later on.”

The golden rule when virtualizing your IT systems is to expect the unexpected. Otherwise, that old adage about change being the only constant will hit you hard.

The Virtualization Challenge, Part 1: Many From One

The Virtualization Challenge, Part 2: Making the Case

The Virtualization Challenge, Part 4: Implementing the Environment

The Virtualization Challenge, Part 5: Virtualization and Security
