
The Secret Lives of Supercomputers, Part 1

Since the first supercomputers came online in the 1960s and ’70s, they have earned a reputation as high-powered workhorses helping researchers conduct complex calculations.

Typically found at major universities and research facilities, the massive machines, which at one time could occupy more than an acre of space in a data center, were often used in science: quantum physics, molecular modeling or mapping the human genome. Some jobs were less esoteric: IBM’s Deep Blue earned fame in the chess world as an opponent of grandmaster Garry Kasparov.

However, supercomputers that once were within the sole purview of governments, major universities and large enterprises are now being put to work solving practical problems for businesses large and small.

Not Just for the Patched Elbow Types Anymore

The TOP500 project has tracked and ranked the world’s fastest supercomputers twice a year for the past 15 years. Of the 500 leading high-performance computing systems in the world today, 257 are located in the U.S., according to the list.

The world’s most powerful supercomputer, IBM’s Roadrunner, was clocked at a sustained processing rate of a whopping 1.026 petaflops (a petaflop is one quadrillion floating point operations per second). Located at the Los Alamos National Laboratory in New Mexico, Roadrunner is based on IBM QS22 blades and powered by AMD Opteron processors and an enhanced version of the Cell chip, the microprocessor found in Sony’s PlayStation 3 video game console.

Advances in microprocessors, as well as in computer and server designs, have made that shift possible.

“It is probably the biggest trend in supercomputers — the movement away from ivory-tower research and government-sponsored research to commerce and business,” Michael Corrado, an IBM spokesperson, told TechNewsWorld. In 1997, 161 of the systems on the list were deployed in business and industry; by June 2008, that figure had grown to 287, he noted. “More than half the list reside in commercial enterprises. That’s a huge shift, and it’s been under way for years.”

People use supercomputers because they gain a competitive advantage, Jack Dongarra, a professor of electrical engineering and computer science at the University of Tennessee, told TechNewsWorld.

“Industry can no longer compete in the traditional cost and quality terms. They have to look for innovation, and they do that by using supercomputers, so it really becomes one of the key ingredients to their innovative capacity, production, growth — and really affects our standard of living today,” he said.

The Real Life

Supercomputers have played a major role in industries including oil, energy, defense, pharmaceuticals and aerospace.

“Almost every area of industry today is using supercomputers. They use them to predict traffic patterns and many other things. They are used in many ways that affect our lives in real ways,” said Dongarra. Something that touches virtually everyone is the weather, and supercomputers at the National Weather Service are used to create the forecasts.

“When you turn on the TV set at night and listen to the news, and the weather guy tells you what the weather is going to be, that’s based on a supercomputer forecast using physics and mathematics. High-performance computers predict what’s going to happen tomorrow, in the next three days and the week to come,” Dongarra said.

Other supercomputer applications, particularly climate modeling and genome mapping, have also taken on daily relevance. Climate modeling drives international policy and opinion with respect to climate change, and genome mapping leads directly to new medical therapies, Tom Halfhill, an In-Stat analyst, told TechNewsWorld.

Field Work

Further relevant applications for supercomputers include research into new drugs, oil exploration, financial modeling, and nuclear-weapons testing.

“In drug discovery, supercomputers simulate protein folding to determine if new drugs will couple with the proteins related to diseases and disorders,” Halfhill noted.

Oil companies rely on supercomputers for exploration, using them to analyze sonar data to detect underground oil and gas. This analysis is becoming increasingly important as deposits become harder to find.

In finance, supercomputers analyze data for stock markets, commodity markets, and currency exchanges to determine the best investment strategies. Often the calculations are run overnight to inform trading on the next business day.

Supercomputers can simulate aging nuclear warheads to assess their viability without detonating them, which is important because live tests are forbidden by international treaties, Halfhill pointed out.

As the cost of supercomputers has dropped, small and medium-sized enterprises have begun putting them to work on less lofty projects.

“This mainstreaming of supercomputing will only continue,” Gordon Haff, principal IT analyst at Illuminata, told TechNewsWorld. “The very largest machines will still tend to be used for scientific research or other government-funded applications, but there’s plenty of computing being applied to corporate research and even direct business applications as well.”

Supercomputers Ready for Their Close-Up

Even Hollywood has gotten into the act. The advent of computer-generated imagery and the processing power needed to create ever more realistic digital effects have led movie directors to roll out the red carpet for supercomputers.

In the “Lord of the Rings” movies, for example, much of the image-rendering was done on a supercomputer in New Zealand. “Supercomputers are used more and more for image processing,” Dongarra said. “They have to do a lot of it, and they have to be very fast about it because they want to finish the movie in a timely fashion.”

Once-Pricey Calculations

In 1997, the year IBM’s Deep Blue defeated Garry Kasparov in chess, the cost of processing 1 million floating point operations per second (one megaflop) was US$50. Deep Blue performed about 11.38 gigaflops, more than 11 billion calculations per second, at a cost of $550,000, according to Corrado.

“Today one mega[flop] costs 10 cents, or $1,100 for the same capacity that Deep Blue needed $550,000 for. The technology is becoming more prevalent, more ubiquitous, the processors are becoming multi-core and more able to do it. And you’ve got people that have been doing this for a while and are riding this price performance curve. As the price of supercomputing power comes down, it’s interesting what even experienced users are doing with their systems,” he said.
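The arithmetic behind those figures is straightforward. As a rough sketch (using the per-megaflop prices and the Deep Blue throughput Corrado cites, with some rounding in the quoted totals):

```python
# Rough check of the price-performance figures cited above.
# Inputs ($50 per megaflop in 1997, 10 cents per megaflop today,
# and Deep Blue's ~11.38 gigaflops) come from the article; the
# quoted dollar totals are rounded.

deep_blue_megaflops = 11.38 * 1000          # 11.38 gigaflops = 11,380 megaflops

cost_1997 = deep_blue_megaflops * 50.00     # about $569,000, quoted as roughly $550,000
cost_today = deep_blue_megaflops * 0.10     # about $1,138, quoted as roughly $1,100

print(f"1997: ${cost_1997:,.0f}   today: ${cost_today:,.0f}")
```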

Time sharing amortizes the cost of the largest supercomputers, Halfhill said. There are also options for organizations for which 10 cents a megaflop is still steep.

“Smaller supercomputers are becoming more affordable and often are owned by individual companies for their exclusive use. As the cost of supercomputing declines, more of these machines become available for broad use, and more applications become practical for them,” he explained.

Practically Computing

One interesting supercomputer trend is data mining, in which old data is reanalyzed to uncover new findings.

As the cost of supercomputing declines, oil companies are retrieving sonar records from five or 10 years ago and reexamining them to find deposits of oil and natural gas that may have been missed the first time around, when heavy-duty computer resources were either unavailable or too expensive, Halfhill said.

Even though the very largest supercomputers are few and far between, high-performance computer clusters — themselves supercomputers by any reasonable historical standard — are actually quite common, said Haff. “They’re used for all manner of simulations, business analytics, and product design applications. Even a consumer goods company like P&G [Procter & Gamble] uses high performance computing extensively.”

Procter & Gamble used its supercomputer to solve a pesky manufacturing problem with Pringles chips. The snack maker had an issue with the shape of the popular potato chip, which caused it to fly off the line during manufacturing, Corrado said.

“They weren’t aerodynamic enough,” he explained. “P&G used a supercomputer to do an aerodynamic simulation and now produces Pringles that are aerodynamically sound and do not fly off of the assembly line.”

Supercomputers are also used to examine diaper material so diapers can be made more absorbent and environmentally friendly.

“These are applications that would not have been done four or five years ago because they were cost-prohibitive. They are becoming more widespread as the cost has come down,” Corrado pointed out.

