When it comes to investing in virtualization solutions, the hurdles may have more to do with organizational policies than technology. Since these solutions frequently meld disparate entities within an enterprise — human resources, sales and marketing, manufacturing — each of which often has its own infrastructure, policies and personnel, a virtualization program could turn ugly.
However, corporations eager to take advantage of virtualization’s speedy hard-dollar return on investment and its ability to improve business agility have some tried-and-true tools available. Hewlett-Packard, for example, gives virtualization clients access to its IT Service Management unit to help them standardize operations and infrastructure. In addition, enterprises can — and often do — start small, implementing a virtualization solution in one department before expanding it companywide.
PC virtualization gives enterprises the opportunity to overcome the challenges of deployment and culture in one effort, according to Brian Gammage, vice president at Gartner. “PC virtualization will achieve broad appeal over the next five years,” he said, noting that heavyweights such as Microsoft and Intel now support this approach.
Virtualization steps alone, such as consolidation and standardization, save companies money and time even before they have completed a full virtualization solution, said Nick Vanderzweep, director of virtualization and utility computing at HP. Vanderzweep spoke with the E-Commerce Times:
E-Commerce Times: When it comes to describing any type of technology, it seems companies have their own interpretations of what a particular term means. Could you describe how HP views virtualization?
Vanderzweep: I have to set up some context in that HP’s enterprise strategy revolves around something we call the Adaptive Enterprise, synchronizing business and IT to capitalize on change. One of the fundamental pieces, in order for us to accomplish that, is we have to make the IT layers changeable, and changeable in real time, so when there’s demand for resources in one application or one piece of business, we can shift resources dynamically from other areas of the company when they need it. Virtualization is all about making IT resources changeable in real time. That oversimplifies virtualization.
When I sit down with [analysts] or customers, they say, “That’s cool, but how the heck do you do that?” Every time the CEO says, “We’re going to focus on X now as a big thrust in the marketplace,” the CIO just starts sweating bullets. It’s very, very manual, and whenever somebody makes a mistake, they could blow another system out of the water. So the way of approaching IT over the past few decades has been very static: You put things in and don’t change them.
Our more detailed definition — the next level down — revolves around: Here’s a tagline, an approach to IT that pools and shares resources so that supply automatically meets demand. I kind of read that one backwards when I talk to a CIO; I first talk about supply of resources automatically meeting demand. “Automatically” is one of the key words. That’s where you get that real-time movement of resources, be they computer systems, storage, networking, software, even services themselves, and being able to automatically reallocate those resources where the need is in the business.
We do that through pooling and sharing. In most cases, the typical computer system is bigger than the workload it needs to run. What we need to do is share it between multiple applications by slicing it up into smaller computer systems. You might divide a storage device into smaller bits or a server device into smaller bits or a networking device into smaller bits, and then share it between multiple applications.
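The slicing-and-pooling idea can be illustrated with a toy model — all names and numbers here are hypothetical, and this sketches the concept rather than HP's actual Virtual Server Environment: a physical server is carved into logical partitions drawn from one shared pool of CPUs, and capacity can be shifted between partitions without touching any wiring.

```python
# Toy model of "slicing" one physical server into logical partitions
# that share a common CPU pool. Hypothetical illustration only --
# not HP's Virtual Server Environment.

class PhysicalServer:
    def __init__(self, total_cpus):
        self.total_cpus = total_cpus
        self.partitions = {}          # partition name -> allocated CPUs

    def free_cpus(self):
        return self.total_cpus - sum(self.partitions.values())

    def create_partition(self, name, cpus):
        if cpus > self.free_cpus():
            raise ValueError("not enough free CPUs in the pool")
        self.partitions[name] = cpus

    def reallocate(self, src, dst, cpus):
        """Shift CPUs between partitions 'electronically' -- no rewiring."""
        if self.partitions[src] < cpus:
            raise ValueError("source partition too small")
        self.partitions[src] -= cpus
        self.partitions[dst] += cpus

server = PhysicalServer(total_cpus=16)
server.create_partition("web_retail", 8)
server.create_partition("erp", 6)
# Quarter-end: shift idle Web-retail capacity to the ERP system.
server.reallocate("web_retail", "erp", 4)
print(server.partitions)  # {'web_retail': 4, 'erp': 10}
```

The same movement in reverse — giving the CPUs back when the Christmas peak arrives — is just another `reallocate` call, which is the "real time" property Vanderzweep emphasizes.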
My background prior to HP was running data centers in the financial world — some computer systems are really, really big, and one computer system or one storage device isn’t enough, and then you’ve got to pool multiple devices together to make a logical, bigger set of IT resources.
It’s not just about technology. We can talk about one bit of technology here and one bit of technology there that helps you pool and share resources, but one of the big things people have to get themselves through is [that] this is actually a pretty significant cultural change for CIOs and IT departments.
ECT: Why is that — because it’s territorial?
Vanderzweep: Yeah, there’s a lot of that. Typically, if I looked at a manufacturing company that also sells on the Web, let’s say, the VP of sales would own all the IT infrastructure and the processes to do with the Web retail systems. The VP of manufacturing would own the ERP [enterprise resource planning] system and all the infrastructure and people that manage that environment. Various different VPs have budgets, have bought infrastructure and have hired people to manage that infrastructure. What we’re proposing here as part of virtualization is that these various different people in the organization shake hands and say, “Absolutely, we’re going to work together: Buy infrastructure, share it and, when manufacturing needs infrastructure, they’ll get it, and when sales needs infrastructure, they’ll get it.”
Now there are ways to actually move resources around electronically. You don’t have to have someone come in and move wires around. You can actually reallocate resources. What happened in the past as well was over-provisioning: I would put enough resources into the retail system to handle peak, Christmas, say. For the ERP system, I’d put in enough resources to handle quarter-end because, typically, your sales force closes a lot of orders at the end of a quarter because they get pressure from management, and then manufacturing has to go crazy.
There’s a huge amount of over-provisioning that happened for applications which was OK because you couldn’t move resources from HR to ERP or Web retailing. With virtualization, the idea is you don’t have to put in enough resources; you drive utilization rates up because when you’re not using it for HR, you can use it for Web retail, but when HR needs it, you can give it back almost instantaneously.
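The over-provisioning arithmetic can be made concrete with hypothetical numbers: sizing each application for its own peak means buying the sum of all the peaks, while a shared pool only needs to cover the largest combined load at any one moment — because the Christmas retail peak and the quarter-end ERP peak do not land at the same time.

```python
# Hypothetical peak loads (in CPUs) per application per season.
# Dedicated sizing must cover each app's own worst case; a shared
# pool only needs the largest *combined* load at any one time.
peaks = {
    "web_retail": {"christmas": 12, "quarter_end": 4,  "normal": 3},
    "erp":        {"christmas": 3,  "quarter_end": 10, "normal": 4},
    "hr":         {"christmas": 2,  "quarter_end": 3,  "normal": 2},
}

# Dedicated infrastructure: sum of each application's individual peak.
dedicated = sum(max(seasons.values()) for seasons in peaks.values())

# Shared pool: the worst combined load across all seasons.
seasons = {"christmas", "quarter_end", "normal"}
pooled = max(sum(peaks[app][s] for app in peaks) for s in seasons)

print(dedicated)  # 25 CPUs when every app is sized for its own peak
print(pooled)     # 17 CPUs when the peaks are served from one pool
```

With these invented numbers the pool needs roughly a third less hardware, and the gap widens as more applications with offset peaks share the same pool — which is where the doubled utilization rates Vanderzweep cites come from.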
ECT: So it sounds as though part of the ROI revolves around actual dollars and cents that companies won’t have to invest in hardware, infrastructure and personnel?
Vanderzweep: That’s part of it. One piece of the benefit here is the cost of the actual amount of infrastructure. Typically, we see with [our virtualization] customers that we can double their utilization rate, which means they get twice as much bang for the buck from the infrastructure they have or have bought. That’s a pretty good cost reduction.
We also increase their agility, which seems to be more and more important these days. It keeps coming up and down with the economy: Is it coming up? Is it coming down? Is it staying flat? Well, I’ve dealt with a few different companies where things are picking up, so there are a lot of companies out there that are saying, “OK, we’ve gone through three years of consolidation, cost reductions. Can we now put something in place so we can be agile, so that when the economy does come back, we can react fast to it, because I need to be there faster than my competitors?” So speed and agility are becoming more and more important.
Virtualization allows for speed and agility. It allows for cost reductions. It allows for simplification.
Typically, you see one administrator for every seven to 15 servers in a company. It actually gets worse if you have hundreds of servers in a company: The ratio of servers to administrators gets worse, rather than getting better, because things get more and more complex. By implementing virtualization with built-in automation, you can actually have one employee manage a lot more storage, a lot more servers, a lot more infrastructure and services because of all these automation tools that come into play.
ECT: Cultural changes can be a lot harder to deal with than technological changes.
Vanderzweep: We have an IT Service Management practice within HP’s consulting organization. The first thing [companies] have to do is standardize how they manage and operate their infrastructure across the datacenter or multiple datacenters. If you think about the VP of sales who hired a bunch of people and bought a bunch of infrastructure, the VP of engineering hired a bunch of people and bought a bunch of infrastructure — the job descriptions for the administrators of those two groups could be completely different.
The first step is to get them to shake hands and say, “We’ll work together.” The next step is that there are people who actually have to do it. We help companies standardize that kind of thing across their organization. There’s actually a consortium and a set of standards on how to operate datacenters, based on ITIL, and that’s part of our practice. We come in, assess where the company is, show them the ITIL standards, help rewrite job descriptions, train people and get them to the point where they can actually talk to each other. That’s the first step.
Consolidation is a prerequisite to virtualization because if you’ve got 400 servers under 400 people’s desks in 400 locations across the country, it’s pretty hard to share resources and manage those resources. Consolidation to a single or a few datacenters where they’re all co-located is important, then we can implement some of this technology.
There are cost savings by standardizing processes — you get 5 percent, 10 percent cost savings just by standardizing your processes. By consolidating infrastructure, you can get some pretty good bang for the buck. And then by virtualizing, you get even more bang for the buck. It’s not like you have to do all these things, then you get benefits; you get benefits all along the way.
ECT: So how long, then, is the typical sales process?
Vanderzweep: It can be fairly short or fairly long, depending on how big a company gets into it. I could go to the VP of sales, and say, “Let’s consolidate and virtualize all the retail systems you have,” and just work within that organization. Typically, we’ll start within one organization, show how well it works, then help them move and share across different organizations. A project may last 30, 60, 120 days. Ultimately, to take on an entire good-size company, it’s a multi-year project, but the first project might be very, very quick.
One client, Belkin, is a company that is one of the lucky ones. They have been growing at 35 percent over the past three years. They try different ideas, and those different ideas have turned into pretty good growth for them. They had a set of infrastructure for their human resource system, for their ERP system, for their Web retail system that was dedicated to each of those areas. Because of their growth, they were dealing with performance problems, because they were pushing the limits of those infrastructures. They had more employees, so their HR system was getting pushed; they were selling that much more, so their retail systems were getting pushed, and they were shipping more, so their ERP systems were getting pushed. They were also running into availability problems because their vendor at the time was probably not using best-in-breed products.
We came in and showed them our Virtual Server Environment, which has a whole set of capabilities, such as being able to share resources, being able to automatically move resources where they’re needed, when they’re needed.
Their vendor at the time couldn’t deliver the same capabilities that we have at HP with the Virtual Server Environment, so they elected to switch away from that vendor, implementing the Virtual Server Environment on HP Superdome systems and layering the Virtual Server Environment software on top. Now resources flow automatically from payroll to ERP to Web retail and back again as peaks and valleys happen in their business.
From a cost perspective, they actually pay a little bit less per month to HP than they did to their previous infrastructure vendor. They’re also seeing a 250 percent performance improvement, and their availability is much, much better than before because we incorporated high-availability clustering.
ECT: Where would you say, then, the adoption is today in corporate America?
Vanderzweep: This is a fairly typical adoption curve. We’ve sold several thousand Virtual Server Environments into the industry already, so those are taking off pretty well. Virtual Server Environment today revolves around our PA-RISC and our Itanium systems, so it’s higher-end [systems] running mission-critical applications within a company. This kind of thing is getting to be very, very popular. A couple of years ago, I’d have said we were in the early adoption stage. Now we’re in the early mainstream stage.
We sell bundles of servers into the industry. If I look at how many licenses I have for this, it’s about 50,000 CPU licenses out there. Obviously, we sell a heck of a lot more than 50,000 CPUs in a year, so it’s not everybody that’s getting it, but a good portion of our customer base gets it — at least 60 percent of our high-end customers get this kind of capability, for sure. With our low-end customers, the penetration rates are not quite as high.
ECT: So looking ahead — and tying it into your title, which includes both virtualization and utility computing — where is HP going? How about the industry?
Vanderzweep: If you look at our virtualization strategy and think about a graph with X-Y axes, in the bottom right-hand corner I’d put element virtualization; that is the first step in virtualization. The middle box would be integrated virtualization, and up in the top corner — nirvana — would be something we call the Complete IT Utility.
Element virtualization is absolutely mainstream. It’s hard to find a customer who hasn’t used VMware on an Intel server to partition that server into two machines. It’s hard to find a customer who hasn’t put in a storage array instead of dedicated storage on a server-by-server basis. What element virtualization is all about, though, is virtualizing only one thing — cutting a server in half into two logical servers, for instance.
The next step on that graph, integrated virtualization, is where the innovation in the industry is now and certainly where our focus is. That’s where Virtual Server Environment [fits]. It uses those virtualization pieces, but instead of saying, “I need to divide this server into two,” it adds a lot of the automation that’s required. You simply say, “I need sub-second response time for my Web retail system, I need two-second average response time for my ERP system, and I need 30-minute turnaround on batch jobs for my HR system’s payroll run.”
You tell the control software — the Virtual Server Environment software — the service levels that you need, and then it will keep moving resources around, changing the size of partitions on the fly to meet those service levels. You can see the difference from where we were in the past with element virtualization: There we cut a 10-CPU server into two CPUs for Oracle and four CPUs for PDA, for instance. With integrated virtualization, you don’t specify CPUs.
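The shift from element to integrated virtualization — declaring service levels instead of CPU counts — amounts to a feedback loop: measure each workload against its target and shift capacity from partitions with slack to partitions falling behind. A minimal sketch, with an invented rebalancing policy rather than HP's actual control algorithm:

```python
# Sketch of service-level-driven reallocation: each workload declares
# a response-time target, and a controller shifts CPUs from partitions
# beating their targets to partitions missing them.
# Hypothetical policy -- not HP's actual Virtual Server Environment.

def rebalance(partitions, step=1):
    """partitions: name -> {'cpus', 'target_ms', 'observed_ms'}.
    Move `step` CPUs from the partition with the most slack to the
    partition missing its target by the widest margin."""
    missing = {n: p["observed_ms"] / p["target_ms"]
               for n, p in partitions.items()
               if p["observed_ms"] > p["target_ms"]}
    slack = {n: p["target_ms"] / p["observed_ms"]
             for n, p in partitions.items()
             if p["observed_ms"] < p["target_ms"] and p["cpus"] > step}
    if not missing or not slack:
        return None              # every service level met, or no donor
    donor = max(slack, key=slack.get)
    needy = max(missing, key=missing.get)
    partitions[donor]["cpus"] -= step
    partitions[needy]["cpus"] += step
    return donor, needy

parts = {
    "web_retail": {"cpus": 4, "target_ms": 500,  "observed_ms": 900},
    "erp":        {"cpus": 6, "target_ms": 2000, "observed_ms": 1200},
}
print(rebalance(parts))  # ('erp', 'web_retail')
print(parts["web_retail"]["cpus"], parts["erp"]["cpus"])  # 5 5
```

In a real controller this loop would run continuously against live measurements; the point of the sketch is that the administrator's input is the targets, never the CPU counts.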
When you get to the Complete IT Utility, that’s where all your datacenters and all their resources automatically flow to the right application at the right time; all the server resources, network resources, storage resources and software are automatically reprovisioned and moved around in a heterogeneous environment — Windows, Linux, HP-UX, whatever kind of operating system. That’s a little bit more complex to do.
We start them with the basic elements of virtualization, move them towards integrated in some projects and get multiple projects together and then finally move them toward Complete IT Utility.
ECT: And this is something HP already is doing for some clients — moving them to Complete IT Utility?
Vanderzweep: Yes. Primarily, when a customer looks at our portfolio of element, integrated and complete, they usually say, “You know, HP, I want to go straight to the top right-hand corner — to the Complete IT Utility.” They will also say, “And, HP, since you’re already doing this in your own datacenter, why don’t you manage or outsource my datacenter and give me all those benefits?”
We’ve done that — and been public about it — for many customers. DreamWorks, for example: We manage their infrastructure, and when they produce a film like “Shrek” or “Shrek 2” and need to render it, we do that for them and charge them based on the number of frames rendered. We’ve really connected up to their business.
Amadeus — you’re probably familiar with Sabre in North America, the booking system — does the same kind of thing in Europe. They came to HP and fell in love with the Complete IT Utility. They said, “OK, we’re in the airline booking industry. We write software to do that. You, HP, are good at infrastructure. We get paid by the likes of Lufthansa — say, 25 cents — every time we book a seat and a customer actually sits in it. HP, you provide us with infrastructure that grows and shrinks based on supply and demand, and we’ll pay you 5 cents every time a customer sits in an airline seat.”
The more business they get, the more we have to scale that infrastructure up. The less business they get, the more we have to scale it down. Predominantly, when people want to go straight to the upper right corner, we do that through our managed services offering. We have a huge number of customers doing element virtualization — I’d be surprised if I could find an enterprise HP customer that isn’t using some kind of virtualization. It’s the integrated stuff that probably 10 percent of our customer base is kicking the tires on. The Complete IT Utility is a smaller group, but we do a tremendous amount of business with our managed services group — companies like DreamWorks, Amadeus, Procter & Gamble, Ericsson — where we implement these capabilities for customers across the 400 datacenters we operate.
ECT: Who do you generally encounter in competitive situations?
Vanderzweep: We definitely see IBM in there. Especially when you’re looking at heading off into integrated and the Complete IT Utility, it really requires you to coordinate and automate the provisioning of these resources — server, networking, storage, software — so the likes of HP and IBM are very well diversified in the IT industry, selling servers, storage, networking, etc. At HP, we have the ability to build things like the Virtual Server Environment, coordinate resources or go all the way up to the Complete IT Utility and manage a company’s environment.
We were out there talking about our vision for utility computing some years ago, and we’ve brought a whole set of these products to market in the last two or three years, so we’ve got reference after reference after reference. Execution is our biggest differentiator.
ECT: How about some smaller companies? Or companies like Sun?
Vanderzweep: Because this is a highly innovative space, you’ll see lots of start-up companies out there making some big inroads. One start-up, VMware, was acquired by EMC a while ago, and they’re a good partner of ours rather than a competitor. They provide the ability to virtualize an x86 system — Intel or Opteron — and slice it up into smaller systems. They’re an interesting company.
We see a lot of other start-ups out there. I went to a venture capitalist conference a little while back — this is a popular area for venture capitalists to invest in and for start-ups to design software and hardware around. If you look out there, 50 to 100 start-ups have a unique piece of the puzzle here. Some of them have been bought up over the past few years. We, ourselves, have acquired Talking Blocks, Consera, Novadigm and a few others to round out our portfolio of capabilities.
ECT: And I guess that underscores the growing mainstream nature of the market?
Vanderzweep: Oh yes, definitely. You’ve got a few other major players in the marketplace that are not as strong as HP or IBM because they’re not as diversified. You hear Sun talking a little bit, but they have a small portfolio of capabilities compared with the likes of HP. I don’t run into those guys very much. I go more head-to-head with IBM.
ECT: You mentioned heterogeneous environments and standardizing procedures and administration, but are there any technology standards issues that CIOs should be aware of when considering moving into or expanding their use of virtualization?
Vanderzweep: There are standards that working groups like the W3C and OASIS are developing, and we’re heavily invested in those standards organizations. Web services play a big role in this because they make it much easier for applications to be compatible in this world, to move resources around. So we’ve been key to developing some of the Web services standards.
Grid services are now being built on top of Web services, and we’re very active in the standardization of grid services as well. In fact, HP now holds the chair position in the Global Grid Forum. Standards are expensive initiatives, but they’re very fruitful as well, because HP likes to build on top of standards and then add value to differentiate in the marketplace.
It’s the 80/20 rule: 80 percent of what the customer gets is standards-based infrastructure, and 20 percent is value-add on top of that, which really differentiates them in the industry so they’re better than the company down the street. The more we standardize — the more we put into the 80 percent — the more we can innovate on top of it. Once something is standardized, it reduces our cost, and we can put our engineering and innovation efforts on top of that standard. It accelerates the industry. It differentiates us in the marketplace. It’s good for customers, and it’s good for us.
ECT: I think every IT executive has a horror story about lack of standardization.
Vanderzweep: That’s always the case. Take the Virtual Server Environment: Nobody else in the industry has that kind of capability, but it’s built on top of standards. Where we’ve built it, we’re working with other companies, standards organizations and so on to take a chunk of our innovation and push it into standards bodies as well, so we can say, “OK, we can now exit that area and move on to the next level of capability on top of the Virtual Server Environment.”
For us, the key areas in this space are storage — the storage grid innovations we’ve been talking about; servers — our Virtual Server Environment, plus announcements just last month around virtualization and automation for our blade servers; networking — we’ve worked with our own networking organization, with Cisco and others, on the management of virtual networks; and, of course, driving standards with Web services and grid services, especially through our OpenView management software.