
Dell Takes the Long View With Hyper Scale Computing

By Charles King E-Commerce Times ECT News Network
Sep 25, 2012 5:00 AM PT

Technologically inclined businesses and other organizations have long enjoyed what I call IT "trickle down": Continuing, rapid development results in the mainstreaming of hardware, software and services that were originally unthinkably expensive and specialized. This doesn't mean that higher-end products ever disappear.


In fact, the baseline performance of enterprise solutions typically ratchets ever upward. However, the effective result everywhere else is to put what were once radically powerful and unaffordable tools into the hands of most any business.

Supercomputing -- along with associated high-performance and technical computing -- shows how this works. These technologies were once relegated almost entirely to well-financed government and university labs and major enterprise data centers. The continual evolution of Intel's x86 microprocessor architecture, along with complementary clustering, grid and virtualization technologies, made x86 the dominant force in modern supercomputing that it is today.

The latest edition of the TOP500 list of the world's fastest supercomputers, published in June, provides clear evidence of this. The top-rated supercomputer -- the "Sequoia" installation at the DOE's Lawrence Livermore National Laboratory -- is an IBM BlueGene/Q system based on the company's Power architecture, as are three of the list's other top 10 systems. However, five of the top 10 utilize Intel Xeon or AMD Opteron processors. More importantly, of the 500 systems on the latest list, nearly 87 percent (434) are x86-based.

Moreover, the users of these systems have changed significantly. In 1993, when the TOP500 project began collecting supercomputer statistics, fewer than a third of the systems were being used in industrial settings. Today, more than half of the world's fastest computers are being used by enterprises. Notable changes have also occurred elsewhere, as highly scalable and affordable x86-based technologies have taken supercomputing and HPC deep into the commercial market.

There's Something About Dell

What does any of this have to do with Dell's new C8000 Series? Quite simply, these new solutions are designed to extend the company's already substantial hyper scale computing portfolio into new areas.

Dell launched its Data Center Solutions (DCS) group in 2007 to focus on the emerging commercial hyper scale market, and the company has done very well overall. IDC's analysis of FY2011 worldwide server sales revenues placed Dell firmly in first place in Density Optimized (IDC's term for hyper scale) system revenues with a 45.2 percent share; HP was a distant second with 15.5 percent. While the segment's FY2011 revenues totaled less than US$2 billion (compared to the worldwide x86 server market's $34.4 billion), IDC said that demand for Density Optimized systems grew by a robust 33.8 percent in FY2011, compared to just 7.7 percent for x86 solutions overall.

Dell means for the new C8000 Series to expand its leadership position by using highly configurable, flexibly deployable solutions to widen the pool of hyper scale use cases and potential customers. Along with typical HPC and Web 2.0 and hosting applications, the C8000 Series can also support both parallel processing-intensive scientific visualization workloads and the high-volume storage demands of Big Data applications.

Plus, the new systems take full advantage of Dell's innovative work in fresh air cooling, which allows servers to be deployed without costly air conditioning systems or cooling upgrades. They can also be placed in nontraditional settings, including Dell's Modular Data Center infrastructures. That means the C8000 Series is likely to find fans among a variety of organizations, including new and even smaller companies investigating the hyper scale market.

The new Dell systems should also pique the interest of longtime HPC and technical computing players. In fact, the Texas Advanced Computing Center (TACC) is an early advocate of the C8000 Series and is basing its upcoming petascale Stampede installation on "several thousand PowerEdge C8000 servers with GPUs to help speed scientific discovery." When it opens for business in 2013, Stampede will qualify as the most powerful system in the National Science Foundation's eXtreme Digital (XD) program with a peak performance of 10 petaflops, 272 terabytes of total memory and 14 petabytes of disk storage.

Far-Sighted Strategy

So how big a deal is Dell's C8000 Series? Some will suggest that the small size of the hyper scale market (at least compared to general purpose server opportunities) makes any effort small potatoes. That may be true in today's dollars but makes less sense looking ahead. Several of the use cases for the C8000 Series -- hosting, Web 2.0 and Big Data, in particular -- are growing rapidly, and interest in commercial HPC and scientific computing applications is also robust.

Given the development of these markets over the past half-decade and the promise of their continuing growth, Dell's 2007 entry into hyper scale solutions looks extremely far-sighted. In light of the company's longstanding investments in that effort, its resulting leadership position is hardly a surprise. The new C8000 Series shows that Dell is continuing to look forward, developing solutions its customers will need tomorrow but can also use quite handily today.

E-Commerce Times columnist Charles King is principal analyst for Pund-IT, an IT industry consultancy that emphasizes understanding technology and product evolution, and interpreting the effects these changes will have on business customers and the greater IT marketplace. Though Pund-IT provides consulting and other services to technology vendors, the opinions expressed in this commentary are King's alone.
