Computing

IBM Plans Massive Computer System to Digest Big Telescope Data

IBM is teaming up with ASTRON, the Netherlands Institute for Radio Astronomy, on a five-year project to research very fast, low-power exascale computer systems for the world’s largest and most sensitive radio telescope.

The project, to be called “DOME,” will cost about US$44 million.

It will investigate emerging technologies for efficient exascale computing, data transport and storage, and streaming analytics: everything required to read, store and analyze the raw data that the Square Kilometre Array, as the radio telescope is called, will collect every day.

The SKA will gather several exabytes of raw data daily. An exabyte is 1 million terabytes.
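For a rough sense of what that volume means, here is a minimal back-of-the-envelope sketch in Python; the two-exabyte daily figure and the 10-TB drive size are illustrative assumptions, not SKA specifications:

```python
# Back-of-the-envelope scale of the SKA's daily raw data volume.
# "Several exabytes" comes from the article; the drive size and the
# low-end value of "several" are assumptions for illustration only.
EXABYTE_TB = 1_000_000      # 1 exabyte = 1 million terabytes
daily_exabytes = 2          # assumed low end of "several"
drive_tb = 10               # assumed commodity hard-drive capacity, in TB

drives_per_day = daily_exabytes * EXABYTE_TB / drive_tb
print(f"{drives_per_day:,.0f} drives of {drive_tb} TB filled per day")
# -> 200,000 drives of 10 TB filled per day
```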

“ASTRON will be focused on the antenna; IBM will look at exascale computing, in which we need to address energy consumption, cost and space,” Christopher Sciacca, spokesperson for IBM Research Zurich, told TechNewsWorld.

“IBM and the SKA are investing where it matters — driving down costs of closely coupled processing by driving up the efficiency of moving and storing data,” Joshua Bloom, an associate professor at UC Berkeley’s astronomy department, told TechNewsWorld.

Let’s Talk Techie

The DOME project is a preliminary phase of the SKA effort, and the next five years will see IBM and ASTRON “building a technological roadmap based on technologies that we already have in development, such as 3D chip stacking and phase-change memory,” Sciacca said.

The SKA will consist of millions of antennae spanning more than 3,000 km (approximately the width of the continental United States) and forming a total collecting area of one square kilometer. It will be 50 times more sensitive than any existing radio telescope and will survey the sky more than 10,000 times faster than today’s instruments.

IBM is considering nanophotonics to transport the data, IBM Zurich’s Sciacca said. It also needs to determine whether the processing will be done at the antennas or in a central data center.

Nanophotonics is the study of the behavior of light at the nanometer scale. It can yield highly power-efficient devices for engineering applications.

“What’s so inspiring about the project is that IBM is looking at technology that doesn’t yet exist,” Darren Hayes, CIS program chair at Pace University, told TechNewsWorld. “The project will also enable IBM to showcase their development of 3D stacked chips that will be used to handle the massive processing requirements. If successful, the project could propel IBM to the forefront of 3D microchips.”

What’s a 3D Chip?

3D chip stacking simply means that chip components are mounted vertically to achieve greater density and higher performance. Stacking chips “will reduce energy because the data no longer travels 10 centimeters, but less than a millimeter,” IBM Zurich’s Sciacca said. “Ninety-eight percent of the energy in a data center is used for moving data, and 2 percent makes up the computations.”
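To see why distance matters, consider a first-order model in which the energy to move one bit scales with wire capacitance, and hence with wire length. The sketch below uses assumed, order-of-magnitude constants rather than IBM’s figures:

```python
# First-order model: energy per bit moved over a wire is roughly
# E = C_per_mm * length_mm * V^2. The constants below are assumed,
# textbook-order-of-magnitude values, not IBM measurements.
C_PER_MM = 0.2e-12   # assumed wire capacitance: 0.2 pF per millimeter
V = 1.0              # assumed signaling voltage, in volts

def energy_per_bit(length_mm: float) -> float:
    """Joules to move one bit across a wire of the given length."""
    return C_PER_MM * length_mm * V ** 2

across_board = energy_per_bit(100.0)  # ~10 cm between separate chips
through_stack = energy_per_bit(0.1)   # <1 mm through a 3D stack
print(f"board: {across_board:.1e} J/bit, stack: {through_stack:.1e} J/bit")
print(f"reduction: {across_board / through_stack:.0f}x")
# -> board: 2.0e-11 J/bit, stack: 2.0e-14 J/bit; reduction: 1000x
```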

Stacking integrated circuits is something companies like Irvine Sensors have been doing for about 20 years. Irvine’s next step is putting whole systems (computers, data recorders and signal processors) into cubes. That project is sponsored by the Defense Advanced Research Projects Agency (DARPA), the U.S. Army and the U.S. Missile Defense Agency.

Back in 2007, IBM built chip stacks connected by vertical metal links, known as through-silicon vias, formed by drilling through each die’s silicon and filling the resulting holes with metal in place of the wires normally used. These reduced the distance signals needed to travel between dies by a factor of 1,000 and enabled a hundredfold increase in the number of links that could be established between dies, the company claimed.

Stacked, or 3D, chips can generate quite a bit of heat. IBM has stated that they have an aggregate heat dissipation of nearly 1 kW, which is 10 times greater than the heat generated by a hotplate, in an area measuring only 4 square centimeters and 1 millimeter thick. So in 2008, researchers at IBM Zurich and the Fraunhofer Institute in Berlin, Germany, came up with the concept of running water through the stacks to cool them.
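Working out those figures gives a sense of the heat flux involved. A quick check, assuming the dissipation is spread evenly over the stated footprint:

```python
# Heat flux implied by the article's figures: ~1 kW over 4 cm^2,
# assuming the heat is spread evenly across the footprint.
power_watts = 1_000   # ~1 kW, per IBM
area_cm2 = 4          # 4 square centimeters
print(f"{power_watts / area_cm2} W/cm^2")   # -> 250.0 W/cm^2
```

A flux of roughly 250 watts per square centimeter is well beyond the comfortable reach of conventional air cooling, which is what motivated the water-cooling work.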

Data Headaches

The data produced by the SKA will “be particularly challenging because it needs to be cross-correlated with itself, so the very nature of the initial workflow does not lend itself to embarrassing parallelism,” UC Berkeley’s Bloom pointed out.

Hadoop and “geographically sharded databases work well because some forms of computation can be done locally, then aggregated and summarized centrally,” Bloom added. “SKA data needs to be collocated and analyzed, at some level, as a single entity.”
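A toy correlator makes that constraint concrete: every antenna’s stream has to be combined with every other antenna’s, so the data cannot simply be processed in isolated shards. The sizes below are illustrative, not SKA parameters:

```python
import numpy as np

# Toy cross-correlator: each antenna pair (baseline) needs both streams,
# so the streams must be brought together rather than handled per-shard.
# Array sizes are illustrative only, not SKA parameters.
n_antennas, n_samples = 8, 1024
rng = np.random.default_rng(0)
streams = rng.standard_normal((n_antennas, n_samples)) \
    + 1j * rng.standard_normal((n_antennas, n_samples))

# One visibility per antenna pair: N * (N - 1) / 2 cross terms.
visibilities = {}
for i in range(n_antennas):
    for j in range(i + 1, n_antennas):
        visibilities[(i, j)] = np.vdot(streams[i], streams[j]) / n_samples

print(f"{len(visibilities)} baselines from {n_antennas} antennas")
# -> 28 baselines from 8 antennas; the pair count grows as O(N^2),
#    so the all-pairs coupling, not per-antenna volume, drives the design.
```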
