Producing meaningful, long-term energy savings in IT operations depends on a strategic planning and execution process.
The goal is to seek long-term gains from prudent, short-term investments whenever possible. It makes little sense to invest piecemeal in areas that offer poor returns when a careful cost-benefit analysis for each specific enterprise can identify the true wellsprings of IT energy conservation.
The latest BriefingsDirect podcast discussion therefore targets significantly reducing energy consumption across data centers strategically. In it we examine four major areas that result in the most energy policy bang for the buck — virtualization, application modernization, data-center infrastructure best practices, and properly planning and building out new data-center facilities.
By focusing on these major areas, but with a strict appreciation of the current and preceding IT patterns and specific requirements for each data center, real energy savings — and productivity gains — are in the offing.
To help learn more about significantly reducing energy consumption across data centers, we welcome two experts from HP: John Bennett, worldwide director, Data Center Transformation Solutions; and Ian Jagger, worldwide marketing manager for Data Center Services. The discussion is moderated by me, BriefingsDirect’s Dana Gardner, principal analyst at Interarbor Solutions.
Listen to the podcast (46:21 minutes).
Here are some excerpts:
John Bennett: We, as an industry, are full of advice around best practices for what people should be taking a look at. We provide these wonderful lists of things that they should pay attention to — things like hot and cold aisles, running your data center hotter, and modernizing your infrastructure, consolidating it, virtualizing it, and things of that ilk.
The mistake customers make is that they take this laundry list and, without any further insight into what will matter most to them, start implementing these things.
The real opportunity is to take a step back and assess the return from any one of these individual best practices. Which one should I do first and why? What’s the technology case and what’s the business case for them? That’s an area that people seem to really struggle with. …
We know very well that modern infrastructure, modern servers, modern storage, and modern networking items are much more energy efficient than their predecessors from even two or three years ago. …
If we look at the total energy picture and the infrastructure itself — in particular, the server and storage environment — one of the fundamental objectives for virtualization is to dramatically increase the utilization of the assets you have.
With x86 servers, we see utilization rates typically in the 10 percent range. So, while there are a lot of interesting benefits that come from virtualization from an energy efficiency point of view, we’re basically eliminating the need for a lot of server units by making much better use of a smaller number of units.
So, consolidation and modernization, which reduces the number of units you have, and then multiplying that with virtualization, can result in significant decreases in server and storage-unit counts, which goes a long way toward affecting energy consumption from an infrastructure point of view.
That can be augmented, by the way, by doing application modernization, so you can eliminate legacy systems and infrastructure and move some of those services to a shared infrastructure as well.
We’re talking about collapsing infrastructure requirements by factors of 5, 6, or 10. You’re going from 10 or 20 old servers to perhaps a couple of servers running much more efficiently. And, with modernization at play, you can actually increase that multiplication.
These savings are very significant from a server point of view. On the storage side, you’re eliminating the need for sparsely used dedicated storage and moving to a shared, or virtualized, storage environment, with the same kind of cost-saving ratios at play. So, it’s a profound impact on the infrastructure environment.
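The arithmetic behind those consolidation ratios can be sketched with a simple back-of-the-envelope model. All the figures below are illustrative assumptions (not numbers from the discussion, apart from the roughly 10 percent legacy utilization cited above): a legacy server drawing about 400 W at 10 percent utilization, replaced by modern virtualization hosts drawing about 500 W at a target 60 percent utilization.

```python
import math

# Illustrative assumptions, not measured figures:
LEGACY_WATTS = 400   # assumed power draw per legacy server
MODERN_WATTS = 500   # assumed power draw per virtualization host
LEGACY_UTIL = 0.10   # ~10% utilization, as cited in the discussion
MODERN_UTIL = 0.60   # assumed target utilization after virtualization

def consolidated_hosts(legacy_servers: int) -> int:
    """Hosts needed to carry the same useful work at higher utilization."""
    useful_work = legacy_servers * LEGACY_UTIL
    return math.ceil(useful_work / MODERN_UTIL)

def annual_kwh(servers: int, watts: float) -> float:
    """IT-load energy per year, ignoring cooling overhead (PUE)."""
    return servers * watts * 24 * 365 / 1000

before = annual_kwh(20, LEGACY_WATTS)
after = annual_kwh(consolidated_hosts(20), MODERN_WATTS)
print(f"20 legacy servers -> {consolidated_hosts(20)} hosts")
print(f"annual IT energy: {before:,.0f} kWh -> {after:,.0f} kWh")
```

Under these assumptions, 20 legacy servers collapse to 4 hosts, roughly a 4x reduction in IT-load energy before counting the matching reduction in cooling load.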
Ian Jagger: Going back to the original point that John made, we have had the tendency in the past to look at cooling or energy efficiency coming from the technology side of the business and the industry. More recently, thankfully, we are tending to look at that in a more converged view between IT technology, the facility itself, and the interplay between the two. …
Each customer has a different situation from the next, depending on how the infrastructure is laid out, the age of the data center, and even the climatic location of the data center. All of these have enormous impact on the customer’s individual situation. …
If we’re looking, for example, at the situation where a customer needs a new data center, then it makes sense for that customer to look at all the cases put together — application modernization, virtualization, and also data center design itself.
Here is where it all stands to converge from an energy perspective. Data centers are expensive things to build, without doubt. Everyone recognizes that, and everybody looks at ways not to build a new data center. But the point is that a data center is there to run applications that drive business value for the company itself.
What we don’t do a good job of is understanding the applications in the application catalog and the relative importance of each in terms of priority and availability. What we tend to do is treat them all with the same level of availability. That is just inherent in how the industry has grown up over the last 20 to 30 years. Availability is king. Well, energy has challenged that kingship, if you like, and so it is open to question.
Now, you could design a facility with specific PODs (groups of compute resources) laid out according to the application catalog’s availability and priority requirements, tone down the cooling infrastructure serving those particular areas, and retain fully provisioned PODs only for the applications that do require the highest levels of availability.
Just doing that, converging the facility design with application modernization, takes millions of dollars out of data-center construction costs, and out of the ongoing operating costs of burning energy to cool it at the end of the day. …
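The POD idea above can be sketched as a simple partitioning of an application catalog by required availability tier, so that only the top tier gets fully redundant (and energy-hungry) power and cooling. The tier names, redundancy levels, and application names below are invented for illustration.

```python
from collections import defaultdict

# Hypothetical tiers mapped to illustrative redundancy designs:
TIERS = {"gold": "2N power/cooling", "silver": "N+1", "bronze": "N"}

# Hypothetical application catalog: (application, required tier)
catalog = [
    ("order-processing", "gold"),
    ("payroll", "silver"),
    ("dev-test", "bronze"),
    ("reporting", "bronze"),
]

def group_into_pods(apps):
    """Group applications into PODs keyed by availability tier."""
    pods = defaultdict(list)
    for name, tier in apps:
        pods[tier].append(name)
    return dict(pods)

for tier, apps in group_into_pods(catalog).items():
    print(f"{tier} POD ({TIERS[tier]}): {', '.join(apps)}")
```

The design choice this illustrates: instead of provisioning the whole floor to the gold tier, only the gold POD carries the cost of full redundancy, while bronze PODs can run with leaner cooling.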
One of the smartest things you can actually do as a business, as an IT manager, is to actually go and talk to your utility company and ask them what rebates are available for energy savings. They typically will offer you ways of addressing how you can improve your energy efficiency within the data center.
That is a great starting point, because your energy becomes measurable. Taking action to reduce your energy not only lowers your operating cost, but also allows you to get rebates from your energy company at the same time. It’s a no-brainer.
John Bennett: What we are advising customers to do is take a more complete view of the resources and assets that go into delivering business services to the company.
It’s not just the applications and the portfolio. … It’s the data center facilities themselves and how they are optimized for this purpose — both from a data center perspective and from the facility-as-a-building perspective.
In considering them comprehensively in working with the facilities team, as well as the IT teams, you can actually deliver a lot of incremental value — and a lot of significant savings to the organization. …
For customers who are very explicitly concerned about energy and how to reduce their energy cost and energy consumption, we have an Energy Analysis Assessment service. It’s a great way to get started to determine which of the best practices will have the highest impact on you personally, and to allow you to do the cherry-picking.
For customers who are looking at things a little more comprehensively, energy analysis and energy efficiency are two aspects of a data-center transformation process. We have a data center transformation workshop.
Jagger: The premise here is to understand possible savings or the possible efficiency available to you through forensic analysis and modeling. That has got to be the starting point, and then understanding the costs of building that efficiency.
Then you need a plan that lays out those costs and savings, sets the priorities in terms of structure and infrastructure, converges that work with IT, and of course accounts for the payback on the investment required to build it in the first place.
Dana Gardner is president and principal analyst at Interarbor Solutions, which tracks trends, delivers forecasts and interprets the competitive landscape of enterprise applications and software infrastructure markets for clients. He also produces BriefingsDirect sponsored podcasts. Follow Dana Gardner on Twitter. Disclosure: HP sponsored this podcast.