Intel's Larrabee Line: Many Cores in Store
Aug 4, 2008 2:12 PM PT
Intel on Monday released technical details about its upcoming line of microprocessors, codenamed "Larrabee," ahead of the company's presentation at the SIGGRAPH (Special Interest Group on Graphics and Interactive Techniques) conference, to be held in Los Angeles next week.
The family of chips will be the springboard for a many-core -- as few as eight and up to 48 initially -- x86 chip architecture. Intel expects the processors to hit the market in 2009 or 2010.
Larrabee will launch an industry-wide effort to design and optimize software for the dozens, hundreds and even thousands of cores expected to power future computers, according to the chipmaker.
"It's revolutionary in that it is different from conventional GPU [graphics processing unit] architectures, but evolutionary in that it uses existing (i.e. x86) technology," Jon Peddie, president of Jon Peddie Research, told TechNewsWorld.
The technology will offer a "new approach to the software rendering 3-D pipeline, a many-core programming model, and performance analysis for several applications," Intel said.
The architecture behind Larrabee was derived from Intel's Pentium processor. The new chip, however, adds updates such as a wide vector processing unit (VPU), multi-threading, 64-bit extensions and pre-fetching. These enhancements facilitate a considerable increase in available computational power.
The processor's architecture supports four execution threads per core, with separate register sets per thread. This enables the use of a simple, efficient in-order pipeline while retaining many of the latency-hiding benefits of more complex out-of-order pipelines when running highly parallel applications, Intel said.
"We have a lot of people trying to position different processing architectures. You've got Nvidia trying to position the GPU with CUDA (Compute Unified Device Architecture), which is basically a programming structure to use it as an accelerator, as a general purpose CPU (central processing unit) or whatever," Jim McGregor, an analyst at In-Stat, told TechNewsWorld. "Basically what Intel is doing is trying to leverage the x86 architecture in a way that it has not been leveraged before, as really kind of a head-end core to a high-end processing element that can be used as a server accelerator, as a graphics accelerator."
Going After the Competition
While some in the industry may designate "Larrabee" as a graphics processing unit (GPU), according to Peddie, the more accurate term for the chip is graphics capable processing unit (GCPU).
"Larrabee is not a GPU in the sense an ATI, Nvidia or S3 chip is a GPU; it is a gang of x86 cores that can do processing, so it is a GCPU -- graphics capable processing unit, as are ATI, Nvidia and S3's chips," Peddie posted on his firm's blog.
Intel said Larrabee will initially target the personal computer graphics market -- in other words, gaming machines. These early implementations will focus on discrete graphics applications, support DirectX and OpenGL and run existing games and programs. Intel sees Larrabee as more than just a high-end gaming chip, however, and predicts the chip, with its native C/C++ programming model, will also have a place in a broad range of highly parallel applications, including scientific and engineering software.
The chip will be optimal for "any application that can use a SIMD (Single Instruction, Multiple Data) processor -- 3D graphics, scientific computing, etc.," Peddie noted.
The standalone chip will go head-to-head against offerings from Nvidia and AMD's ATI.
"It will be easier for people to program applications for Larrabee. That's the value of Larrabee. They're going after the ultra high-end; stuff that's going to be doing scientific simulations and stuff like that. They're really cranking the power up to see what they can do, and then they'll scale it back to see how it can fit into PCs," In-Stat's McGregor said.
Larrabee is not aimed at the "heart of Nvidia's and ATI's market at this point in time. But obviously if they're successful and they create a new computing/programming model around this type of architecture, it does go after that," he explained.