Data Centers and the Push to Power Down

The next five years or so will spell big trouble for data centers.

About 46 percent of more than 150 IT professionals and executives surveyed earlier this year by the Business Performance Management Forum said they’re running out of space, power and cooling infrastructure for their data centers.

In the years leading up to 2011, power failures and limits on power availability will halt data center operations at more than 90 percent of all companies, according to a study by AFCOM, an association for data center professionals with more than 3,600 members worldwide.

A three-pronged attack has been launched to tackle these problems: Enterprises are redesigning their data centers to consume electricity more efficiently; vendors are unveiling more power-efficient servers; and power utilities are offering rebates for reduced power consumption or better data center design.

Why Can’t Data Centers Catch Up?

Demand for data center services is outstripping capacity for two main reasons: the explosion of unstructured data and a steep increase in data retention times in recent years.

“Pretty much everything in applications people view nowadays is in the form of a file of unstructured data,” Jon Affeld, senior director of product marketing at BlueArc, told TechNewsWorld. “To compound that, you have to keep that data longer for compliance reasons — the demands of Sarbanes-Oxley, for example, or the Patriot Act — or in particular industries like healthcare and financial services.”

Power Shortages

There are two causes of power shortages — external and internal. Externally, the issue is the power grid and power supplies from utilities; internally, it’s a question of how efficiently power is used in the data centers and how power-efficient servers are.

More than 60 percent of data centers surveyed by AFCOM had at least two outages last year, and data center managers fear that the overall power available will not meet their needs in a few years.

However, that’s a problem of power distribution, not of availability: “We have a lot of generating capability as a country in particular regions but have not done as good a job as we could have done in distributing the power,” Rick Sawyer, a board member of the AFCOM Data Center Institute and executive vice president at Mission Critical Facilities, an HP company dedicated to designing, commissioning and operating data centers, told TechNewsWorld.

Another problem is deregulation of the industry, which lets electricity producers charge a premium for power during peak hours and produce none at other times, because "you can get 10 to 20 times the returns when you sell during peak hours as compared to when you sell off-peak," Sawyer said.

The View Inside

Within the data center, power availability depends on how power is used and distributed.

Existing data center operations are horribly inefficient: "85 percent of power coming in is used by heating and cooling systems or goes to waste through transformers," Sawyer noted.

The solution: Treat the data center as an electron factory. "You bring in the raw material, which is electricity, process it, and put it out as ones and zeros, a binary data product," he said. That's when you can locate and eliminate causes of power wastage. Currently, data center architects want at least half the power coming in to supply the servers.
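As a rough illustration of that accounting, here is a minimal sketch; the 1 MW feed is a hypothetical figure, while the 85 percent overhead fraction comes from Sawyer's quote and the 50 percent case reflects the stated goal of at least half the power reaching the servers.

    # A minimal sketch of the power accounting described above. The 1 MW
    # feed is a hypothetical figure; the overhead fractions are the 85
    # percent quoted above and the 50 percent target.

    def power_breakdown(total_kw: float, overhead_fraction: float) -> dict:
        """Split an incoming feed into IT load and overhead (cooling plus
        transformer and distribution losses)."""
        overhead_kw = total_kw * overhead_fraction
        it_kw = total_kw - overhead_kw
        return {
            "it_load_kw": it_kw,
            "overhead_kw": overhead_kw,
            # PUE-style ratio: total facility power per watt of IT power.
            "total_per_it_watt": round(total_kw / it_kw, 2),
        }

    print(power_breakdown(total_kw=1000, overhead_fraction=0.85))
    print(power_breakdown(total_kw=1000, overhead_fraction=0.50))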

It’s Not All Bad

Here’s the tricky part: While data centers are consuming more power, they’re using the power much more efficiently. “The power consumption is going down per amount of work being done,” John Musilli, an AFCOM board member, told TechNewsWorld.

"Five or six years ago, I saw 1U servers introduced to data centers, and one rack of 1U servers burns 9 kW of power. There were 42 servers in that rack doing the work that 15 or 20 servers did earlier, so the density of the rack went up but the power requirements didn't go up as much," Musilli explained.

“I’ve seen it several times where we’ve had a power budget returned to us because the technology has increased so much on the storage and the server sides that the space comes back to us again and we have more capacity,” he added.

A 1U server occupies one rack unit: 19 inches wide and 1.75 inches tall.
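The arithmetic behind Musilli's example is straightforward; the sketch below uses only the 9 kW rack draw and the server counts quoted above, and everything else is derived from them.

    # Back-of-the-envelope numbers for the 1U rack example above. The 9 kW
    # draw and the server counts come from the quote; the rest is derived.

    rack_power_kw = 9.0        # quoted draw for a full rack of 1U servers
    servers_per_rack = 42      # one 1U server per rack unit

    watts_per_server = rack_power_kw * 1000 / servers_per_rack
    print(f"Per-server draw: {watts_per_server:.0f} W")      # ~214 W each

    # The rack that once held 15 to 20 servers now holds 42, so density
    # rose roughly two- to threefold while, per the quote, the power
    # requirements "didn't go up as much."
    for old_count in (15, 20):
        print(f"Density increase vs. {old_count} servers: "
              f"{servers_per_rack / old_count:.1f}x")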

Vendors Throw Down

The move toward better power utilization has drawn vendors in. In March, HP introduced its Data Center Transformation portfolio, which includes consulting and design services; data center consolidation services; data center virtualization services; the energy-efficient HP ProLiant DL785 G5 quad-core x86 server; and HP Insight Dynamics software that analyzes and optimizes physical and virtual resources.

Starting from the infrastructure level, HP tools let users “go to the assembly of devices into racks, then to layout best practices — you lay out cold rows and hot rows, using computational fluid dynamics to make sure that cool air remains where it should,” John Bennett, HP’s worldwide director, data center transformation solutions, told TechNewsWorld.

Cold rows are the aisles where the air conditioning is concentrated; hot rows hold equipment that can be maintained at higher temperatures.

HP also helps customers plan and design new, energy-efficient data centers.

Meanwhile, BlueArc, which makes high-end network-attached storage servers, has just announced its 3000 series, which is “twice as fast and scalable as the previous generation but consumes the same amount of power — about 25 percent of what an equivalent product from standard server vendors would,” Affeld said.

BlueArc also offers a total cost of ownership service in which it audits a customer’s existing infrastructure and models different scenarios to show them how much labor, cooling and rack space they can save.

PG&E’s Efforts

Pacific Gas and Electric (PG&E), a major utility providing electricity and gas to customers in northern and central California, is among the leaders in offering incentives to enterprises to go green.

"We have many programs that handle construction of new data centers and retrofits, and we also offer retro-commissioning, which means that you go out and look at the control schemes, mainly of air conditioning systems," Randall Cole, senior program manager in PG&E's high-tech energy efficiency department, told TechNewsWorld.

The utility also pays customers incentives based on energy savings against its baseline.

While its programs are “very good at air conditioning, which seems to be our bread and butter,” PG&E is finding that many of its customers are running out of space or power or cooling, so two years ago, it began helping customers who virtualize their applications, he said.

"A typical application running on a server uses only about 5 to 10 percent of that server's capacity, and servers have become so cheap that an IT director will just buy one and port just one application to it," Cole added. With virtualization, you get "anywhere from 10:1 consolidation or higher, and the highest I've ever seen is 28:1."
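For a rough sense of what those ratios mean, here is a sketch; the starting fleet of 100 one-application-per-box servers and the 300 W per-server draw are assumptions for illustration, while the 10:1 and 28:1 ratios are the ones Cole cites.

    import math

    # Rough consolidation estimate using the ratios cited above. The
    # 100-server fleet and 300 W per-server draw are illustrative
    # assumptions, not PG&E figures.

    def consolidation_savings(servers: int, ratio: int,
                              watts_per_server: float = 300.0) -> dict:
        """Estimate remaining hosts and power saved when lightly loaded
        servers are virtualized onto shared hosts."""
        hosts_after = math.ceil(servers / ratio)
        saved_kw = (servers - hosts_after) * watts_per_server / 1000
        return {"hosts_after": hosts_after, "power_saved_kw": saved_kw}

    print(consolidation_savings(100, 10))   # 10 hosts left, ~27 kW saved
    print(consolidation_savings(100, 28))   # 4 hosts left, ~28.8 kW saved

In practice the surviving hosts run at higher utilization and draw more power per box, so the real savings are somewhat smaller than this simple subtraction suggests.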

PG&E is also exploring the possibility of moving virtualized applications off lesser-used servers during off-peak hours. In the future, that could "get to the point that you can move the applications to another country," Cole said.

Money Still Talks

With gas and oil prices rising, and brownouts and blackouts likely to become worse and more frequent, what should a data center manager do?

Well, we could always pay more. “Data centers will always have as much power as they want, it’ll just cost more,” Musilli said. “The only reason you have a brownout is you’re not paying enough for your power. If you need the power, you can probably get it by paying more.”
