We’ve all heard about client virtualization or virtual desktop infrastructure (VDI) over the past few years, and there are some really great technologies for delivering a PC client experience as a service.
But today’s business and economic drivers demand more than just good technology. First, there needs to be a clear rationale for change, both business and economic. Second, there need to be proven methods for moving to client virtualization at low risk, in ways that lead to both high productivity and lower total costs over time.
Cloud computing, mobile device proliferation, and highly efficient data centers are all aligning to make it clear that deeper, more flexible client platform support from back-end servers will become the norm rather than the exception over time.
Client devices and application types will also shift dynamically in both number and type, crossing the chasm between the consumer and business spaces. The new requirements for business mobile use point to the need to plan for and properly support infrastructures that can accommodate these edge, wireless clients.
To help guide businesses on client virtualization infrastructure requirements, learn more about client virtualization strategies and best practices that support multiple future client directions, and see why such virtualization makes sense economically, we went to Dan Nordhues, marketing and business manager for client virtualization solutions in HP’s Industry Standard Servers Organization. The interview is conducted by BriefingsDirect’s Dana Gardner, principal analyst at Interarbor Solutions.
Listen to the podcast (33:00 minutes).
Here are some excerpts:
Dan Nordhues: In desktop virtualization, what really comes out to the user device is just pixel information. These protocols just give you the screen information, collect your user inputs from the keyboard and mouse, and take those back to the application or the desktop in the data center.
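The exchange Nordhues describes can be sketched in a few lines. This is purely an illustrative toy, not any real protocol such as RDP, ICA, or PCoIP (which add compression, caching, and multimedia redirection); all class and field names here are invented for the sketch. The point is the division of labor: input events travel up, and only the changed screen region travels back.

```python
# Toy sketch of a remote-display exchange: the client sends only input
# events; the server renders in the data center and returns only the
# screen region that changed. All names are illustrative.

class VirtualDesktop:
    """Server side: holds the real framebuffer in the data center."""
    def __init__(self, width, height):
        self.framebuffer = [[0] * width for _ in range(height)]

    def handle_input(self, event):
        # Apply a mouse event and return only the dirty region, not the
        # whole screen -- this is what keeps the wire traffic light.
        if event["type"] == "click":
            x, y = event["x"], event["y"]
            self.framebuffer[y][x] = 1  # desktop redraws one pixel
            return {"x": x, "y": y, "pixels": [[self.framebuffer[y][x]]]}
        return None

class ThinClient:
    """Client side: no applications or data, just a copy of the pixels."""
    def __init__(self, width, height):
        self.screen = [[0] * width for _ in range(height)]

    def apply_update(self, update):
        for dy, row in enumerate(update["pixels"]):
            for dx, pixel in enumerate(row):
                self.screen[update["y"] + dy][update["x"] + dx] = pixel

# One round trip: input event up, pixel update back.
desktop = VirtualDesktop(4, 4)
client = ThinClient(4, 4)
update = desktop.handle_input({"type": "click", "x": 2, "y": 1})
client.apply_update(update)
print(client.screen[1][2])  # the clicked pixel is now lit on the client
```

Note that the client never sees application code or files, only pixels, which is also why the security argument later in the interview holds.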
When you look at desktop virtualization, whether it’s a server-based computing environment, where you are delivering applications, or if you are delivering the whole desktop, as in VDI, to get started you really have to take a look at your whole environment — and make sure that you’re doing a proper analysis and are actually ready.
On the data center side, as we start talking about cloud, the solution is really progressing. HP is moving very strongly toward what we call converged infrastructure, which is wire it once and then have it provisioned and be ready to provide the services that you need. We’re on a path where the hardware pieces are there to deliver on that.
But you have to look at the data center and its capacity to house the additional servers, storage, and networking that must go there to support those users.
So now you get the storage folks in IT, the networking folks, and the server support folks all involved in the support of the desk-side environment. It definitely brings a new dynamic.
This is not a prescription for getting rid of those IT people. In fact, there is a lot of benefit to the business in freeing those folks up to do more innovation, instead of spending all their cycles managing a desktop environment that may be fairly difficult to manage.
Where we’re headed with this, even more broadly than VDI, is back to the converged infrastructure, where we talked about wire it once and have it be a solution. Say you’re an office worker and you’re just getting applications virtualized out to you. You’re going to use Microsoft Office-type applications. You don’t need a whole desktop. Maybe you just need some applications streamed to you.
Maybe, you’re more of a power user, and you need that whole desktop environment provided by VDI. We’ll provide reference architectures with just wire it once type of infrastructure with storage. Depending on what type of user you are, it can deliver both the services and the experience without having to go back and re-provision or start over, which can take weeks and months, instead of minutes.
In the future, a hybrid solution could also deliver VDI plus server-based computing together and cover your whole gamut of users, from the lowest-end task-oriented user all the way up to the highest-end power users that you have.
And we’re going to see services wrapped around all of this, just to make it that much simpler for the customers to take this, deploy it, and know that it’s going to be successful.
Why VDI Now?
There’s a digital generation of millions of new workers entering the workforce, and they’ve grown up expecting to be mobile and increasingly global. So we need computing environments that don’t require us to report to a particular desk in an office building in order to get work done.
We have an increasingly global and mobile workforce out there. Roughly 60 percent of employees don’t work at their company’s headquarters, and they work differently.
When you go mobile, you give up some things. However, the major selling point is that you can get access. You can check in on a running process, if you need to see how things are progressing. You can do some simple things like go in and monitor processes, call logs, or things like that. Having that access is increasingly important.
And, of course, there’s the impact of security, which is always at the top of customers’ lists. We have customers out there, large enterprise accounts, who are spending north of US$100 million a year just to protect themselves from internal fraud.
With client virtualization, the security is built in. You have everything in the data center. You can’t have users on the user endpoint side, which may be a thin client access device, taking files away on USB keys or sticks.
It’s all something that can be protected by IT, and they can give access only to users as they see fit. In most cases, they want to strictly control that. Also, you don’t have users putting applications that you don’t want … on top of your IT infrastructure.
And there is really a catalyst here as well in Windows 7, which has been available since its launch late last year. Many organizations are looking at their transition plans, and it’s a natural time to look at doing the desktop differently than it has been done in the past.
Reference Architectures Support All Clients
We’ve launched several reference architectures and we are going to continue to head down this path. A reference architecture is a prescribed solution for a given set of problems.
For example, in June, we just launched a reference architecture for VDI that uses some iSCSI SAN storage technology, and storage has traditionally been one of the cost factors in deploying client virtualization. It has been very costly to deploy Fibre Channel SAN, for example. So, moving to this iSCSI SAN technology is helping to reduce the cost and provide fantastic performance.
In this reference architecture, we’ve done the system integration for the customer. A lot of the deployment issue, and what makes this difficult, is that there are so many choices. You have to choose which server to use and from which vendor: HP, Dell, IBM or Cisco? Which storage to choose: HP, EMC or NetApp? Then you have got the software piece of it. Which hypervisor to use: Microsoft, VMware or Citrix? Once you chase all these down and do your testing and your proof of concept, it can take quite a substantial length of time.
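The proof-of-concept burden Nordhues describes is multiplicative: each independent choice multiplies the number of stacks a customer would otherwise have to evaluate. A quick sketch, using only the vendor options named in the paragraph above, shows the size of that test matrix:

```python
# Enumerate the integration choices named above. Each axis is
# independent, so the number of candidate stacks is the product
# of the option counts -- the matrix a reference architecture
# collapses to a single pre-tested combination.
from itertools import product

servers = ["HP", "Dell", "IBM", "Cisco"]
storage = ["HP", "EMC", "NetApp"]
hypervisors = ["Microsoft", "VMware", "Citrix"]

combinations = list(product(servers, storage, hypervisors))
print(len(combinations))  # 4 * 3 * 3 = 36 candidate stacks
```

Even before adding thin-client models, protocol options, or software versions, that is 36 stacks to shortlist, test, and pilot, which is why the proof-of-concept phase can stretch to months.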
We targeted the enterprise first. Some of our reference architectures that are out there today exist for 1,000-plus users in a VDI environment. If you go to some of the lower-end offerings we have, they are still in the 400- to 500-user range.
We’re looking at bringing that down even further with some new storage technologies, which will get us down to a couple of hundred users, the small and medium business (SMB) market, certainly the mid-market, and making it just very easy for those folks to deploy. They’ll have it come completely packaged.
Today, we have reference architectures based on VDI or based on server-based computing and delivering just the applications. As I mentioned before, we’re looking at marrying those, so you truly have a wire-it-once infrastructure that can deliver whatever the needs are for your broad user community.
What HP has done with these reference architectures is say, “Look, Mr. Customer, we’ve done all this for you. Here is the server and storage and all the way out to the thin client solution. We’ve tested it. We’ve engineered it with our partners and with the software stack, and we can tell you that this VDI solution will support exactly this many knowledge workers or that many productivity users in your PC environment.” So, you take that system integration task away from the customer, because HP has done it for them.
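The per-user-count claims in such a reference architecture come from capacity modeling. As a hedged illustration of the simplest bound, memory per desktop, here is a back-of-the-envelope sizing sketch; the function name, the formula, and every figure in it are hypothetical examples, not HP’s actual reference-architecture numbers:

```python
# Hypothetical sizing sketch: how many virtual desktops fit on one
# host when memory is the binding constraint. All numbers below are
# illustrative, not vendor figures.

def users_per_server(server_ram_gb, reserved_ram_gb, ram_per_desktop_gb):
    """Desktops per host, bounded by usable memory."""
    usable = server_ram_gb - reserved_ram_gb  # leave room for the hypervisor
    return usable // ram_per_desktop_gb

# Example: a 96 GB host, 8 GB reserved, 2 GB per knowledge-worker desktop.
print(users_per_server(96, 8, 2))  # -> 44 desktops on this host
```

A real sizing exercise would take the minimum across several such bounds (vCPU ratios, storage IOPS, network bandwidth), which is precisely the engineering work the reference architecture packages up for the customer.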
Dana Gardner is president and principal analyst at Interarbor Solutions, which tracks trends, delivers forecasts and interprets the competitive landscape of enterprise applications and software infrastructure markets for clients. He also produces BriefingsDirect sponsored podcasts. Follow Dana Gardner on Twitter. Disclosure: HP sponsored this podcast.