Nvidia Shows Dazzling Detail in Next-Gen Project Logan Demo

Nvidia demoed the first processor from Project Logan, its next-generation CUDA-capable mobile processor, at the SIGGRAPH conference and exhibition held in Anaheim, Calif., this week.

The processor uses the efficient processing cores from Nvidia’s Kepler high-performance computing architecture.

CUDA, or Compute Unified Device Architecture, is Nvidia’s parallel computing platform and programming model; combined with Kepler, it effectively lets mobile devices perform tasks that would previously have required PCs, while reducing the amount of battery power needed.
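CUDA's programming model treats a computation as a kernel function that runs once per thread index, with the GPU executing those invocations in parallel across its cores. A minimal pure-Python sketch of that model (the function names and the sequential `launch()` loop are illustrative stand-ins, not the actual CUDA API):

```python
# Sketch of CUDA's kernel model: a "kernel" is a function invoked once
# per thread index. A real GPU runs these invocations in parallel; this
# pure-Python stand-in simply loops over the indices.

def vector_add_kernel(thread_idx, a, b, out):
    # Each "thread" handles exactly one element, as a CUDA kernel body would.
    out[thread_idx] = a[thread_idx] + b[thread_idx]

def launch(kernel, n_threads, *args):
    # Stand-in for a CUDA kernel launch (kernel<<<blocks, threads>>>(...)).
    for i in range(n_threads):
        kernel(i, *args)

a = [1.0, 2.0, 3.0, 4.0]
b = [10.0, 20.0, 30.0, 40.0]
out = [0.0] * 4
launch(vector_add_kernel, 4, a, b, out)
print(out)  # [11.0, 22.0, 33.0, 44.0]
```

Because each element is independent, the same kernel scales from a handful of cores in a phone to thousands in a supercomputer, which is the scalability Wuebbling describes.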

“This is the first time that the same processing core in supercomputers and PCs is also available in a mobile device,” Matt Wuebbling, Nvidia’s mobile business director of product marketing, told TechNewsWorld. “Developers will be able to create content that scales up and down the product stack.”

The processor will “significantly advance” such fields as mobile computer vision, augmented reality and speech recognition, Wuebbling added.

Project Logan has 192 cores, said Jim McGregor, principal analyst at Tirias Research.

“The first thing that comes to mind is, isn’t that like cramming a V8 engine into a Corolla?” McGregor said. “No, because they designed Kepler to be scalable from the ground up with really small cores.”

Project Logan Speculation

Nvidia took Kepler’s efficient processing cores and added a new low-power inter-unit interconnect and extensive new optimizations — both designed specifically for mobile — for the Project Logan processor, Wuebbling said.

This lets the processor use less than one-third the power of the graphics processing units in tablets such as the iPad with Retina display for the same rendering, and allows for “enormous” performance and clocking headroom to scale up, he explained.

However, Wuebbling declined to disclose details of the inter-unit interconnect and the optimizations added.

The Project Logan processor supports OpenGL 4.4, the OpenGL ES 3.0 subset, and Microsoft’s DirectX 11 graphics application programming interface.

Support for this multiplicity of standards lets developers use a variety of advanced rendering and simulation techniques that previously could only be used on PCs, such as tessellation, compute-based deferred rendering, advanced anti-aliasing and post-processing algorithms, and physics simulations, Nvidia said.

CUDA’s Muscles

CUDA 5, which was released last fall, supports dynamic parallelism and GPU-callable libraries and lets developers take full advantage of Nvidia’s GPUs, including accelerators based on the Kepler architecture. It has been well received by defense and aerospace companies.
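Dynamic parallelism, one of CUDA 5's headline features, lets a kernel already running on the GPU launch further kernels itself, rather than returning control to the CPU for each launch. A pure-Python sketch of the idea (the helper names and the sequential `launch()` loop are illustrative, not CUDA APIs):

```python
# Sketch of dynamic parallelism: a "parent" kernel launches a "child"
# kernel directly, instead of handing control back to the CPU. Names
# and the sequential launch() loop are stand-ins, not CUDA APIs.

def launch(kernel, n_threads, *args):
    for i in range(n_threads):  # a real GPU would run these in parallel
        kernel(i, *args)

def child_kernel(tid, row, out):
    out[row][tid] = row * 10 + tid  # fill one cell of the given row

def parent_kernel(tid, out):
    # With dynamic parallelism the parent launches the child on the GPU;
    # before CUDA 5, the CPU would have had to issue this second launch.
    launch(child_kernel, len(out[tid]), tid, out)

out = [[0] * 3 for _ in range(2)]
launch(parent_kernel, 2, out)
print(out)  # [[0, 1, 2], [10, 11, 12]]
```

Keeping the nested launch on the GPU avoids a round trip to the CPU, which matters for irregular workloads where the amount of follow-on work is only known at run time.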

The CUDA architecture’s “Fermi” generation debuted back in 2009. The Oak Ridge National Laboratory, the largest science and energy lab in the United States Department of Energy system, began building a new supercomputer based on Fermi.

Project Logan Observations

“The way they designed Project Logan, with a focus on power, efficiency and performance — it’s a great solution,” said Tirias Research’s McGregor, who saw the processor demoed at SIGGRAPH.

Nvidia demoed the chip running on Epic Games’ Unreal Engine 4 graphics software.

Nvidia ran an application on a Project Logan processor side by side with an iPad with Retina display, and “it was drawing about a third of the power the iPad was,” McGregor told TechNewsWorld.

“That means, at that level, they could easily put it into a smartphone, or they could crank the level back up to use the equivalent power tablets use today and get two to three times the performance,” McGregor suggested. “Or they could leave consumption at that level and get two to three times the battery life.”
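McGregor's arithmetic is easy to make concrete. A back-of-the-envelope sketch with normalized, illustrative numbers, under the simplifying assumption that performance scales linearly with power:

```python
# Back-of-the-envelope numbers from the demo: normalize the iPad-class
# GPU to 1.0 units of power for a fixed rendering workload; Logan drew
# roughly a third of that. Linear power/performance scaling is assumed
# here purely for illustration.
tablet_power = 1.0
logan_power = tablet_power / 3.0

# Option 1: run at the same performance and pocket the savings.
battery_life_gain = tablet_power / logan_power     # roughly 3x battery life

# Option 2: spend the full tablet power budget on more work instead.
performance_headroom = tablet_power / logan_power  # roughly 3x performance

print(f"battery life gain ~{battery_life_gain:.1f}x, "
      f"performance headroom ~{performance_headroom:.1f}x")
```

The two options are the same ratio spent two different ways, which is exactly the smartphone-versus-tablet trade-off McGregor outlines.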

The Project Logan processor will let mobile devices run “pretty much any application that’s using OpenGL today — image processing, video processing, sensor processing — any type of information you’re trying to take in and process in a massively parallel fashion,” McGregor commented. “Everyone in mobile today is trying to figure out how best to use this GPU compute type of solution.”


More by Richard Adhikari