How should IT leaders scale virtualized environments so that they can be managed for elasticity payoffs? What should be taking place in virtualized environments now to get them ready for cloud efficiencies and capabilities later?
And how do service-oriented architecture (SOA), governance, and adaptive infrastructure approaches relate to this progression, or road map, from tactical virtualization to powerful and strategic cloud computing outcomes?
Here to help hammer out a typical road map for how to move from virtualization-enabled server, storage, and network utilization benefits to the larger class of cloud computing agility and efficiency values, we are joined by two thought leaders from HP: Rebecca Lawson, director of worldwide cloud marketing, and Bob Meyer, the worldwide virtualization lead in HP’s technology solutions group.
Listen to the podcast (37:00 minutes).
Here are some excerpts:
Rebecca Lawson: We’re seeing our customers accelerate their efforts to get their infrastructure in order — to get it virtualized, standardized, and automated — because they want to make the leap from being a technology provider to a service provider.
Many of our customers who are running an IT shop, whether it’s enterprise or small and mid-size, are starting to realize — thanks to the cloud — that they have to be service-centric in their orientation. That means they ultimately have to get to a place where not only is their infrastructure available as a service, but all of their applications and their offerings are going in that direction as well.
Bob Meyer: A couple of years ago, people were talking about virtualization. The focus was all on the server and hypervisor. The real positive trend now is to focus on the service.
How do I take this infrastructure, my servers, my storage, and my network and make sure that the plumbing is right and the connectivity is right between them to be agile enough to support the business? How do I manage this in a holistic manner, so that I don’t have multiple management tools or disconnected pools of data?
What’s really positive is the top-down service perspective that says virtualization is great, but the end point is the service. On top of that virtualization, what do I need to do to take it to the next level? And, for many people now, that next level they are looking at is the cloud, because that is the services perspective.
Lawson: A lot of people are trying to make a link between virtualization and cloud computing. We think there is a link, but it’s not just a straight-line progression. In cloud computing, everything is delivered as a service.
What’s really useful about these cloud services is that they’re not necessarily used inside the enterprise, but they are causing IT to focus on the end game. Very specifically, what are the business services that we need to have, and that business owners need to use, in order to move our company forward?
… We’re learning lessons from the big cloud service providers on how to standardize, where to standardize, how to automate, and how to virtualize, and we’re applying those lessons back into the enterprise IT shop.
Meyer: The cloud discussion is important, because it looks at the way that you consume and deliver services. It really does have broader implications to say that now as a service provider to the business, you have options.
Your option is not just that you buy all the infrastructure components. You plumb them together, monitor them, manage them, make sure they’re compliant, and deliver them. It really opens up the conversation to ask, “What’s the most efficient way to deliver the mix of services I have?”
The end result really is that there will be some services that you build, manage, and keep compliant on your own in the traditional way. Some of them might be outsourced to managed service providers. For others, you might source the infrastructure or the applications from a third-party provider.
… Then you start to understand the implications of shifting workloads, of not losing specialty tools, and of what it really takes to standardize. Once you standardize, you can start to manage a single infrastructure, understand the costs better, and be more effective at servicing and provisioning it. Standardization has to happen in order to get there.
I’m not just talking about the server and hypervisor itself. You have to really look across your infrastructure, at the network, server and storage, and get to that level of convergence. How do I get those things to work together when I have to provision a new service or provide a service?
… You’re looking to source something for a service or you’re looking to pull assets together. Everybody will have some combination of physical and virtual infrastructure. So how do I take action when I need a compute resource, be it physical or virtual?
How do I know what’s available? How do I know how to provision it? How do I know when to de-provision it? How do I see whether it’s in compliance? All those things really only come through automation. From a bottom-up perspective, we look at the converged infrastructure, the automation capabilities, and the ability to standardize across that.
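The bottom-up questions above — what’s available, how to provision and de-provision, what’s in compliance — can be sketched as a unified pool over physical and virtual resources. This is a minimal illustrative sketch in Python; the class and field names are hypothetical, not an HP product API:

```python
from dataclasses import dataclass

@dataclass
class ComputeResource:
    """One compute asset, physical or virtual (hypothetical model)."""
    name: str
    kind: str              # "physical" or "virtual"
    provisioned: bool = False
    compliant: bool = True

class ResourcePool:
    """A single view over mixed physical and virtual infrastructure."""
    def __init__(self, resources):
        self.resources = list(resources)

    def available(self, kind=None):
        # "How do I know what's available?" -- free resources, optionally by kind.
        return [r for r in self.resources
                if not r.provisioned and (kind is None or r.kind == kind)]

    def provision(self, kind=None):
        # "How do I know how to provision it?" -- claim the first free match.
        free = self.available(kind)
        if not free:
            return None
        free[0].provisioned = True
        return free[0]

    def deprovision(self, resource):
        # "How do I know when to de-provision it?" -- release it back to the pool.
        resource.provisioned = False

    def compliance_report(self):
        # "How do I see whether it's in compliance?" -- list the exceptions.
        return [r.name for r in self.resources if not r.compliant]
```

The point of the sketch is the shape, not the code: automation only works once every resource, physical or virtual, answers the same four questions through one interface.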
… When it’s gone beyond a server and hypervisor approach, and they’ve looked at the bigger picture, where the costs are actually being saved and pushed — then the light goes on, and they say, “Okay, there is more to it than just virtualization and the server.” You really do have to look, from an infrastructure perspective, at how you manage it, using holistic management, and how you connect them together.
Hopefully, at HP we can help make that progression faster, because we’ve worked with so many companies through this progression. But really it takes moving beyond the hypervisor approach, understanding what it needs to do in the context of the service, and then looking at the bigger picture.
Lawson: … Most IT organizations want to be aware and help govern what actually gets consumed. That’s hard to do, because it’s easy to have rogue activity going on. It’s easy to have app developers, testers, or even business people go out and just start using cloud services.
… [But] if IT is willing and able to step back and provide a catalog of all services that the business can access, that catalog might include some cloud services. We try to encourage our customers to use the tools, techniques, and the approach that says, “Let’s embrace all these different kinds of services, understand what they are, and help our lines of business and our constituents make the right choice, so that they’re using services that are secure and governed, that perform to their expectations, and that don’t get them into trouble.”
We encourage our customers to start immediately working on a service catalog. Because when you have a service catalog, you’re forced into the right cultural and political behaviors that allow IT and lines of business to kind of sync up, because you sync up around what’s in the catalog.
There’s no excuse not to do that these days, because the tools and technologies exist to allow you to do that. At HP, we’ve been doing that for many years. It’s not really brand-new stuff. It’s new to a lot of organizations that haven’t used it.
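A service catalog of the kind Lawson describes is, at its core, a governed registry: each entry records how a service is sourced and whether IT has vetted it. Here is a minimal sketch, assuming hypothetical names (this is not any particular catalog product’s API):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class CatalogEntry:
    """One service the business can consume (hypothetical schema)."""
    name: str
    sourcing: str      # "in-house", "managed-provider", or "cloud"
    approved: bool     # vetted by IT for security, governance, performance
    owner: str         # line of business that consumes it

class ServiceCatalog:
    """Catalog IT publishes so lines of business pick governed services."""
    def __init__(self):
        self._entries = {}

    def publish(self, entry):
        # IT and the business "sync up around what's in the catalog."
        self._entries[entry.name] = entry

    def approved_services(self):
        # Only services IT has vetted -- the antidote to rogue cloud usage.
        return sorted(e.name for e in self._entries.values() if e.approved)

    def lookup(self, name):
        return self._entries.get(name)
```

Even this toy version forces the cultural behavior the transcript describes: a service either has a catalog entry with an owner and an approval status, or it isn’t offered.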
You can start to control, manage, and measure across that hybrid ecosystem with standard IT management tools. … The organizing principle is the technology-enabled service. Then you can be consistent. You can say, “This external email service that we’re using is really performing well. Maybe we should look at some other productivity services from that same vendor.” You can start to make good decisions based on quantitative information about performance, availability, and security.
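The “quantitative information” step above amounts to aggregating per-service measurements and comparing them against a target before expanding use of a vendor. A small sketch, with an assumed availability threshold and sample data (neither comes from the transcript):

```python
def summarize(measurements):
    """Average availability per service from (service, availability) samples."""
    totals = {}
    for service, availability in measurements:
        totals.setdefault(service, []).append(availability)
    return {s: sum(v) / len(v) for s, v in totals.items()}

def meets_sla(summary, threshold=0.995):
    # Which services perform well enough to justify buying more from that vendor?
    # The 99.5% threshold is an illustrative assumption, not a standard.
    return sorted(s for s, avail in summary.items() if avail >= threshold)
```

The design point is that the decision (“look at other services from that same vendor”) is driven by measured numbers per service, not by where the service happens to run.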
Dana Gardner is president and principal analyst at Interarbor Solutions, which tracks trends, delivers forecasts and interprets the competitive landscape of enterprise applications and software infrastructure markets for clients. He also produces BriefingsDirect sponsored podcasts. Disclosure: HP sponsored this podcast.