IBM to Build Super-Storage Phenom

IBM is working on a 120-petabyte storage array that will consist of 200,000 disk drives, according to the MIT Technology Review.

The array is expected to store about 1 trillion files.

It’s being developed for a client that needs a new supercomputer for detailed simulations of real-world phenomena.

Storing the metadata about the information in the array — the names, types and other attributes of the files in the system — will require about 2 PB.
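Those round figures imply a couple of useful numbers. The back-of-the-envelope arithmetic below (a sketch based only on the capacities and counts reported above, using decimal petabytes) works out the average capacity per drive and the metadata budget per file:

    # Rough arithmetic based on the figures reported above.
    PB = 10**15                    # petabyte, in bytes (decimal convention)

    total_capacity = 120 * PB      # 120-petabyte array
    drive_count = 200_000          # number of disk drives
    file_count = 10**12            # roughly 1 trillion files
    metadata_budget = 2 * PB       # reported metadata requirement

    print(total_capacity / drive_count)   # ~6.0e11 bytes, about 600 GB per drive
    print(metadata_budget / file_count)   # ~2,000 bytes, about 2 KB of metadata per file

By that arithmetic, each drive averages roughly 600 GB, and the index works out to about 2 KB of metadata per file.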

The largest arrays available today are reported to be about 15 PB.

IBM spokesperson Ari Entin confirmed the existence of the superstorage project, but researchers were not available to provide comment in time for the publication of this article.

Issues With Big Storage

Building an array of this size requires solving several technical issues, and it could pose unique problems for users.

“Much of what has been learned in the data center over the past 30 years is only partially relevant to answering the question of whether it makes sense to put 100 petabytes into a single basket,” Jay Heiser, a research vice president at the Gartner Group, told TechNewsWorld.

“The bigger the drive, the harder your data falls,” Heiser said.

For example, a problem with an EMC DMX-3 storage array in a data center outsourced to Northrop Grumman brought down the systems of 27 agencies in the State of Virginia in August 2010.

“While there are advantages to putting a lot of eggs into bigger baskets, eventually a point is reached at which further increases in basket size are counterproductive,” Heiser warned.

Making It Work

IBM’s engineers reportedly developed a series of new hardware and software techniques for the hypermassive storage array, including wider storage racks and a water-cooling system.

Using water instead of air to cool the storage array is a reasonable approach because IBM “has a lot of experience” with water-cooling systems for mainframes, David Hill, principal at the Mesabi Group, told TechNewsWorld.

To keep working when disks fail, the system reportedly stores multiple copies of data on different disks. When a drive dies, it pulls the lost data from the surviving copies and writes it to a replacement drive slowly, so the supercomputer the array serves still has enough capacity to keep running.

If more disks nearby fail, the rebuilding process speeds up.
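That description amounts to an adaptive rebuild policy: copy lost data onto spares slowly while only one drive is down, and ramp the rebuild rate up as more drives in the same group fail. A minimal sketch of that idea (the function name, rates and doubling rule here are hypothetical illustrations, not IBM's actual implementation):

    # Hypothetical sketch of an adaptive rebuild throttle; not IBM's actual code.
    def rebuild_rate_mb_per_s(failed_drives_in_group: int,
                              baseline: float = 50.0,
                              maximum: float = 800.0) -> float:
        """Return how fast to copy lost data onto a spare drive.

        With a single failure, rebuild slowly so the supercomputer served by
        the array keeps most of the bandwidth. Each additional failure in the
        same redundancy group doubles the rebuild rate, up to a ceiling,
        because the risk of losing the last surviving copy is rising.
        """
        if failed_drives_in_group <= 0:
            return 0.0
        rate = baseline * (2 ** (failed_drives_in_group - 1))
        return min(rate, maximum)

    # Example: one failure rebuilds gently; three nearby failures rebuild faster.
    print(rebuild_rate_mb_per_s(1))  # 50.0
    print(rebuild_rate_mb_per_s(3))  # 200.0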

The super-array uses IBM’s General Parallel File System (GPFS), a highly scalable clustered parallel file system that spreads individual files across multiple disks so the computer can read or write multiple parts of a file simultaneously.
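In other words, a file is broken into blocks that are spread across many disks, so separate parts of it can be read or written at the same time. A simplified illustration of that layout (plain round-robin striping, not GPFS's actual allocation logic):

    # Simplified round-robin striping, illustrating the idea behind a parallel
    # file system; GPFS's real block allocation is more sophisticated.
    def stripe_blocks(file_size: int, block_size: int, num_disks: int):
        """Map each block of a file to (block number, disk index, slot on that disk)."""
        num_blocks = (file_size + block_size - 1) // block_size
        layout = []
        for block in range(num_blocks):
            disk = block % num_disks     # spread consecutive blocks across disks
            slot = block // num_disks    # position within each disk
            layout.append((block, disk, slot))
        return layout

    # A 40 MB file with 8 MB blocks lands on 5 different disks,
    # so all five pieces can be read simultaneously.
    for block, disk, slot in stripe_blocks(40 * 2**20, 8 * 2**20, num_disks=64):
        print(f"block {block} -> disk {disk}, slot {slot}")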

In July, IBM researchers used the GPFS system running on a cluster of 10 eight-core systems and using solid state storage to scan 10 billion files on one system in 43 minutes. The previous record, set by IBM researchers in 2007, saw 1 billion files scanned in three hours.
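Taken at face value, those two results imply roughly a forty-fold jump in scan throughput. The quick comparison below is just the arithmetic on the reported figures:

    # Comparing the reported GPFS scan benchmarks.
    files_2011, seconds_2011 = 10 * 10**9, 43 * 60    # 10 billion files in 43 minutes
    files_2007, seconds_2007 = 1 * 10**9, 3 * 3600    # 1 billion files in 3 hours

    rate_2011 = files_2011 / seconds_2011   # ~3.9 million files per second
    rate_2007 = files_2007 / seconds_2007   # ~93,000 files per second
    print(rate_2011, rate_2007, rate_2011 / rate_2007)   # speedup of roughly 42x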

Million-Year Guarantee

“IBM’s built some of the highest-end supercomputers in the industry, so it knows how to do this stuff,” Joe Clabby, president of Clabby Analytics, told TechNewsWorld.

What about disk failure, the perennial bugbear of storage arrays?

“I don’t see how there can be any sort of storage system that doesn’t sometimes become corrupted, and require not just restoration, but sometimes reconstruction,” Gartner’s Heiser remarked.

“IBM has developed techniques, which it has employed on at least one existing storage system, to compensate for the fact that individual drives will be failing on a more or less continuous basis,” Hill said.

“That is why it can claim that a million years can go by without loss of data and no compromise in performance,” Hill added.

Users likely won't be allowed to replace drives that die outright; instead, rebuilds of failed drives will be done on spare drives, Hill said. IBM will also probably use disk grooming techniques, deleting old and unnecessary files from the disks.

“As long as not too large a percentage of disks fail, there is really no need to physically replace them,” Hill pointed out. “Few disk systems are fully utilized anyway, so a percentage or two of failed disks should not be a problem.”
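To put Hill's "percentage or two" in perspective on an array this size, the sketch below (a hypothetical illustration using the figures reported earlier, including the implied 600 GB average drive capacity) shows how many drives and how much raw capacity that would represent:

    # Hypothetical illustration of Hill's point about tolerable failure rates.
    drive_count = 200_000
    drive_capacity_gb = 600          # implied average capacity per drive (see above)

    for failed_fraction in (0.01, 0.02):
        failed_drives = int(drive_count * failed_fraction)
        lost_raw_capacity_pb = failed_drives * drive_capacity_gb / 10**6
        print(f"{failed_fraction:.0%} failed -> {failed_drives} drives, "
              f"~{lost_raw_capacity_pb:.1f} PB of raw capacity")
    # 1% -> 2,000 drives (~1.2 PB); 2% -> 4,000 drives (~2.4 PB): a small slice
    # of 120 PB, and the data on those drives is replicated elsewhere anyway.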
