In advance of the VMworld conference in San Francisco, Dana Gardner sat down with Steve Herrod, Chief Technology Officer and Senior Vice President of Research & Development at VMware.
Their discussion centers on the intriguing concept of the software-defined datacenter. We look at how some of the most important attributes of datacenter capability and performance are now squarely under the domain of software enablement.
A top technology leader at VMware, Herrod has championed this vision of the software-defined datacenter and how the next generation of foundational IT innovation is largely being implemented above the hardware.
For example, those who are now building and managing datacenters are gaining heightened productivity, delivering far better performance, and enjoying greater ease in operations and management — all thanks to innovations at the software-infrastructure level.
Join the discussion here and further explore how advances in datacenter technologies and architecture are — to an unprecedented extent — being driven primarily through software. [Disclosure: VMware is a sponsor of BriefingsDirect podcasts.]
Download the podcast (runtime: 30:12) or use the player:
Here are some excerpts:
Dana Gardner: We’ve heard a lot over the decades about improving IT capabilities and infrastructure management, but it seems that many times we peel back a layer of complexity and we get some benefits, and we find ourselves like the proverbial onion, back at yet another layer of complexity.
Complexity seems to be a recurring inhibitor. I wonder if this time we’re actually at a point where something is significantly different. Are we really gaining ground against complexity at this point?
Steve Herrod: It’s a great question, because complexity has long been associated with IT, and it’s fair to ask why we’ll do it differently this time. I see two things happening right now that give us a great shot at this.
One is purely on expectations. All of the opportunities we have as consumers to work with cloud computing models have opened up our imagination as to what we should expect out of IT and computing datacenters, where we can sign up for things immediately, get things when we want them, and pay for what we use. All those great concepts have set our expectations differently.
A Good Shot
Simultaneously, a lot of changes on the technology side give us a good shot at implementing it. When you combine technology that we’ll talk about with the loosened-up imagination on what can be, we’re in a great spot to deliver the software-defined datacenter.
Gardner: You mentioned cloud and this notion that it’s a liberating influence. Is this coming from the technologists or from the business side? Is there a commingling on that concept quite yet?
Herrod: It’s funny. I see it coming from the business side, which is the expectation of an individual business unit launching a product. They now have alternatives to their own IT department. They could go sign up for some sort of compute service or Software as a Service (SaaS) application. They have choices and alternatives to circumvent IT. That’s an option they didn’t have in the past.
Fundamentally, it comes down to each of us as individuals and our expectations. People are listening to this podcast when they want to, quickly downloading it. This also applies to signing up for email, watching movies, and buying an app on an app store. It’s just expected now that you can do things far more agilely, far more quickly than you could in the past, and that’s really the big difference.
Gardner: Tech users are developing higher expectations based on what they encounter as consumers of technology. We see what the datacenters of the likes of Google and Facebook are capable of. Is it possible for enterprises to project that sort of productivity and performance onto what they’re doing, and, now that we’ve gone through an iteration of these vast datacenters, perhaps do it even better?
Herrod: I have a lot of friends at Facebook, Zynga, and Google, running the datacenters there, and what’s exciting for me is that they have built a fully software-defined datacenter. They’re doing a lot of the things we are talking about here. But there are two unique things about their datacenters.
One is that they have hundreds or even thousands of PhDs who are running this infrastructure. Second, they’re running it for a very specific type of application. To run on the Google datacenter, you write your applications a very specific way, which is great for them. But when you go into the business world, they don’t have legions of people to run the infrastructure, and they also have a broad set of applications that they can’t possibly consider rewriting.
So in many ways, I see what we’re doing as taking the lessons learned in those software-defined datacenters and bringing them to the masses, to companies that want to run all of their applications without all of the people cost they might otherwise need.
Gardner: Let’s step back for some context. How did we get here? It seems that hardware has been sort of the cutting edge of productivity, when we think of Moore’s Law and we look at the way that storage, networks, and server architecture have come together to give us the speeds and feeds that have led to a lot of what we take for granted now. Let’s go through that a little bit and think about why we’re at a point where that might not be the case anymore.
Herrod: I like to look at how we got to where we are. I think that’s the key to understanding where we’re likely to go from here.
History of IT Decisions
We started VMware out of a university, where we could take the time to study history and look at what had happened. I liked looking at existing datacenters. You can look through the datacenter and see the history of IT decisions of the past.
It’s traditionally been the case that a particular new need led the IT department to go out and buy the right infrastructure for that new need, whether it’s batch processing, client/server applications, or big Web farms. But these individually made decisions ended up creating the silos that we all know about that exist all over datacenters.
Datacenters now have the group that manages the mainframe, the UNIX administration group, and the client PC group, and none of them shares common people or common tools as much as they certainly would like to. How we got to where we are was a series of isolated decisions, each the right thing at the right time, made without recognizing the opportunity to optimize across a broader set of the datacenter.
The whole concept of software-defined datacenters is looking holistically at all of the different resources you have and making them equally accessible to a lot of different application types.