Facebook Opens Door to New Data Center, Invites the World In

Facebook launched the Open Compute Project Thursday in a move that might reshape the IT hardware industry.

The project publicly releases the technical specifications and design data for the custom-engineered technology developed for Facebook’s first dedicated data center in Prineville, Ore.

That technology has made the data center 38 percent more energy-efficient while cutting costs by 24 percent, Facebook claims.

The tech specs and mechanical CAD files for the data center’s servers, power supplies, server racks, battery backup systems and building design are available at the Open Compute website.

“Overall, what we’ve done is figured out just one way to do large-scale computing affordably and efficiently, with a particular focus on developing solutions that accommodate our limited resources — both money and people — and our unwillingness to sacrifice computing power and efficiency,” Facebook spokesperson Jonny Thaw told TechNewsWorld.

That sharing of information will be good for companies with mega-data centers such as Facebook, Google and Yahoo, said Jim McGregor, chief technology strategist at In-Stat.

“In a lot of cases, people with very large data centers, like Yahoo, Facebook and Google, have to get custom-built solutions that meet their needs at a reasonable price point,” McGregor told TechNewsWorld.

“Facebook’s move is a push to say these large data centers need stuff more suited to their requirements in a more standardized way,” he added.

“Given Facebook’s needs are likely similar to what most companies will want when it comes to client-focused cloud computing, the result is likely better solutions sooner,” noted Rob Enderle, principal analyst at the Enderle Group.

“Facebook becomes an excellent lab where you can go and see what is, and what is not, working today,” Enderle told TechNewsWorld.

Open Compute Server Specs

The Open Compute Project servers have a chassis designed to accommodate a custom motherboard and power supply. It doesn’t use screws; it uses quick-release components. The motherboard snaps into place in mounting holes, and the chassis has snap-in rails for the hard drives to slide into the drive bay.

The chassis is designed for easy servicing; it has no sharp corners, no logos or stickers, no front panel and no paint. This saves about six pounds of material per server, according to Facebook’s estimates.

Facebook uses motherboards from AMD and Intel.

The AMD motherboard is a dual-socket AMD Opteron 6100 Series board with 24 DIMM (dual inline memory module) slots. The Intel motherboard is a dual-socket Intel Xeon 5500 or 5600 board with 18 DIMM slots.

Both are power-optimized, barebones motherboards that lack many of the features found in traditional boards.

Open Compute uses a 450W, single-voltage, 12.5V DC self-cooled power supply with a closed frame. The AC/DC power converter has independent connectors for AC input and DC output, as well as a DC input connector for backup voltage. The design enables very high electrical efficiency; current-sharing and parallel-operation capabilities have been excluded.
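For a sense of scale, that spec implies roughly 36 amps of output current at full load. The article does not quote this figure; the short Python sketch below simply rearranges P = V × I using the numbers above and is purely illustrative.

```python
# Implied full-load output current for the 450W, 12.5V DC supply described
# above. Not a figure from the Open Compute spec -- just P = V * I rearranged
# with the numbers quoted in the article.
rated_power_w = 450.0
output_voltage_v = 12.5

full_load_current_a = rated_power_w / output_voltage_v
print(f"Implied full-load output current: {full_load_current_a:.0f} A")  # 36 A
```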

Specs for the Open Compute Data Center

Facebook’s data center uses an electrical system with a 48V DC UPS system integrated with a 277V AC server power supply.

It has a high-efficiency cooling system that uses outside air and evaporative cooling, allowing Facebook to do away with centralized chillers.

The battery cabinet is a stand-alone unit that provides backup power at a nominal 48V DC to a pair of triplet racks, replacing the inline UPS system traditionally used in data centers.

Open Compute servers are racked into three adjoining 42U columns. Each column contains 30 servers, for a total of 90 servers. One battery cabinet sits between a pair of triplet racks.
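Putting those two paragraphs together, each battery cabinet backs roughly 180 servers. The sketch below just restates that arithmetic; the variable names are illustrative, not from the spec.

```python
# Servers backed by one battery cabinet, per the layout described above:
# a triplet rack is three 42U columns of 30 servers each, and one cabinet
# serves a pair of triplets.
columns_per_triplet = 3
servers_per_column = 30
triplets_per_cabinet = 2

servers_per_triplet = columns_per_triplet * servers_per_column    # 90
servers_per_cabinet = servers_per_triplet * triplets_per_cabinet  # 180
print(f"{servers_per_triplet} servers per triplet, "
      f"{servers_per_cabinet} per battery cabinet")
```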

The batteries are sealed, 12.5V DC nominal, high-rate discharge batteries with a 10-year lifespan, the type commonly used in UPS systems. Four batteries are connected in series to form each group, or string, giving a nominal string voltage of 48V DC, and five strings are wired in parallel in the cabinet.
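A quick check of that series/parallel arrangement: four 12.5V batteries in series work out to 50V, which the design treats as a nominally 48V DC bus. The sketch below only restates the figures quoted above.

```python
# Series/parallel arithmetic for the battery cabinet described above.
# Series connections add voltage; parallel strings share the load.
battery_nominal_v = 12.5   # per-battery nominal voltage quoted in the article
batteries_per_string = 4   # connected in series
strings_in_parallel = 5    # wired in parallel in the cabinet

string_voltage_v = battery_nominal_v * batteries_per_string   # 50.0 V, used as a 48V DC nominal bus
total_batteries = batteries_per_string * strings_in_parallel  # 20 batteries per cabinet
print(f"{string_voltage_v} V per string, {total_batteries} batteries per cabinet")
```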

When You Care Enough to Share the Very Best

Facebook says its Prineville data center has a PUE (Power Usage Effectiveness) of 1.073, meaning about 93 percent of the energy drawn from the grid reaches its servers. The EPA-defined industry average is 1.51, Facebook says.

PUE is the ratio of the total amount of power used by a data center to the power delivered to computing equipment. It was created by The Green Grid, a global consortium of IT companies whose members include AMD, Dell, EMC, HP, Intel, Microsoft and Oracle.
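To illustrate how those numbers relate, here is a minimal sketch of the PUE ratio using the figures quoted above; the function is a hypothetical helper, not part of any Open Compute tooling.

```python
def pue(total_facility_power_kw: float, it_equipment_power_kw: float) -> float:
    """Power Usage Effectiveness: total facility power / power delivered to IT equipment."""
    return total_facility_power_kw / it_equipment_power_kw

# Working backwards from the quoted ratios: the share of grid power that
# reaches the IT equipment is simply 1 / PUE.
prineville_pue = 1.073
industry_avg_pue = 1.51

print(f"Prineville: {1 / prineville_pue:.1%} of grid power reaches the servers")  # ~93.2%
print(f"Industry average: {1 / industry_avg_pue:.1%}")                            # ~66.2%
```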

Facebook codeveloped the technology used in its data center with Alfa Tech, AMD, Delta, Intel, Power-One and Quanta. It’s working with Dell, HP, Rackspace, Skype, Zynga and other companies on the next generation of technologies.

Facebook is publishing the specs and CAD files for the Open Compute Project under the Open Web Foundation Agreement Version 1.0, which grants anyone a worldwide, non-exclusive, no-charge, royalty-free license to use the information.

“Infrastructure isn’t what differentiates us and isn’t what makes Facebook’s business. Opening the technology means there will be advances that we wouldn’t have discovered if we had kept this technology secret,” said Thaw.

“If we truly look under the covers of this thing, this is Facebook getting vendors to pay for much of their ecosystem in exchange for advocacy,” commented Enderle. “Not really a bad idea — but this isn’t Facebook going into the IT business except as an advocate.”

While this approach conceals the real cost of the effort because of vendor involvement, he noted, it “can lead to a vastly better solution, although one tightly focused on Facebook.”
