Intel, Microsoft: The Future of Computing Is Parallel

Laying the groundwork for their continued business health, Intel and Microsoft have coughed up US$20 million to fund research into desktop and mobile parallel computing for consumers and businesses.

The money, to be paid out over the next five years, will be split evenly between new research centers at the University of California, Berkeley, and the University of Illinois at Urbana-Champaign.

The software developed by the centers will be given to technology vendors for additional development.

Sowing the Seeds

Microsoft and Intel are looking to exploit the paradigm shift that will come about as more and more cores are put into processors. “Intel has already shown an 80-core research processor, and we’re quickly moving the computing industry to a many-core world,” said Andrew Chien, vice president, Corporate Technology Group, and director, Intel Research.

The research aims to produce the long-term breakthroughs “needed to enable dramatic new applications for the mainstream user,” Chien added. Those applications will offer capabilities such as “rich digital media and visual interfaces, powerful statistical analyses and search, and mobile applications” and will ultimately “bridge the physical world with the virtual.”

Parallel computing carries out several instructions simultaneously; it rests on splitting a large problem into smaller pieces and solving them concurrently.

Parallel computing has been used for many years, mainly in high-performance computing.
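
As a rough sketch of that idea (an illustrative example, not code from the article), the C++ fragment below splits a summation into two halves, hands each half to its own thread, and then combines the partial results; the data and the two-way split are arbitrary assumptions.

    #include <iostream>
    #include <numeric>
    #include <thread>
    #include <vector>

    int main() {
        std::vector<int> data(1000000, 1);   // the "large problem": sum one million values
        long long left = 0, right = 0;
        auto mid = data.begin() + data.size() / 2;

        // Each thread solves one smaller sub-problem independently.
        std::thread t1([&] { left  = std::accumulate(data.begin(), mid, 0LL); });
        std::thread t2([&] { right = std::accumulate(mid, data.end(), 0LL); });
        t1.join();
        t2.join();

        std::cout << left + right << "\n";   // combine the partial results
    }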

Why Go Parallel?

Parallel processing is essential because “every processor architecture is now available in multi-core configurations,” Jim McGregor, research director and principal analyst at In-Stat, told TechNewsWorld. “Programming, however, remains a challenge, which is why there is interest in research and development at the university level.”

New tools, programming models and even a new or retrained generation of engineers will be needed to program in a parallel environment, McGregor said. Starting at the universities “provides both innovation and the training needed for the next generation of engineers.”

The research funded by Intel and Microsoft is being conducted with an eye to the future because “desktop operating systems aren’t ready for multi-core processing, so we’re seeing money being invested in stuff that we don’t have yet but that will now be coming down,” Gartner Fellow Martin Reynolds told TechNewsWorld.

Oh, the Waste

At least half the processing power available in desktops and notebooks goes unused now, Reynolds said. “They just keep adding cores, and the question is, how do you use those cores?” he added.

Office applications like Word and PowerPoint “really don’t benefit from multiple cores,” although spreadsheets and databases could, and games “are still not stressing the multi-cores very well,” so Intel and Microsoft are “looking for new applications, new ideas, new things,” Reynolds added.

These could include speech recognition, photograph and movie indexing, cataloging of audio and video files, and video imaging, and “you may see things like facial recognition security on your computer where it won’t work unless you’re sitting in front of it,” he explained.

New Approaches, New Problems

The advances will come at a cost: Software vendors will have to rewrite their applications, Forrester Principal Analyst John Rymer told TechNewsWorld. Some will require major rewrites while others won’t, he added.

Writing parallel programs is harder than writing standard, sequential ones because concurrency introduces entirely new classes of software bugs. The most common are race conditions, or race hazards, in which the output or result of a system or process depends unexpectedly and critically on the sequence or timing of other events.

For example, two or more programs may try to modify or access a file simultaneously, corrupting its data. Another example is a program that crashes because an unrelated application is consuming all available hardware resources.
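
A minimal sketch of such a race, assuming nothing beyond standard C++ threads: two threads increment a shared counter, and because the unprotected read-modify-write steps can interleave, the final total varies from run to run; guarding the same increment with a mutex makes the result deterministic.

    #include <iostream>
    #include <mutex>
    #include <thread>

    int racy_counter = 0;      // shared, unprotected
    int safe_counter = 0;      // shared, guarded by a mutex
    std::mutex m;

    void work() {
        for (int i = 0; i < 100000; ++i) {
            ++racy_counter;    // race: concurrent read-modify-write can lose updates

            std::lock_guard<std::mutex> lock(m);
            ++safe_counter;    // serialized: always reaches the expected total
        }
    }

    int main() {
        std::thread t1(work), t2(work);
        t1.join();
        t2.join();
        // racy_counter is often less than 200000; safe_counter is always 200000.
        std::cout << racy_counter << " " << safe_counter << "\n";
    }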

It’s difficult to cope with race conditions because they are “sometimes really difficult to recreate for testing purposes,” Rymer said. “How are you going to guarantee the integrity of your code when you can’t reproduce this fleeting bug?”

One of the main difficulties parallel software developers face is that applications are judged both on their ability to parallelize tasks and on their single-thread performance, Reynolds said.

“Single-thread is more important in most of the applications we have today, and I don’t think parallel processing is going to have a pass, it’s still going to have to deliver good single-thread performance,” he added.
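
That trade-off is often framed in terms of Amdahl’s law, which caps the speedup parallel hardware can deliver by the fraction of a program that remains sequential. The figures below are an illustrative calculation, not from the article: with an assumed 25 percent serial portion, no number of cores pushes the speedup past 4x.

    #include <cstdio>

    int main() {
        const double serial = 0.25;                    // assumed serial fraction
        const int core_counts[] = {1, 2, 4, 8, 80};
        for (int cores : core_counts) {
            // Amdahl's law: speedup = 1 / (serial + (1 - serial) / cores)
            double speedup = 1.0 / (serial + (1.0 - serial) / cores);
            std::printf("%3d cores -> %.2fx speedup\n", cores, speedup);
        }
        // With a 25% serial portion the speedup never exceeds 4x,
        // which is why single-thread performance still matters.
    }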

A Better Solution

Ultimately, there will be a combination of parallel processing and concurrent programming, Rymer said.

“Vendors are really focusing on teaching the world how to do parallel processing, but for a lot of developers in IT shops and doing development for software vendors that’s not practical,” he explained. “There are other techniques generally called concurrent programming, associated with service-oriented architecture (SOA) and modular code, that are much more practical for a general audience and may provide a better path than to parallelize everything.”
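
One way to read that suggestion (a hedged sketch, with made-up task names rather than anything described in the article) is coarse-grained concurrency: independent, modular tasks run side by side as whole units, instead of parallelizing the inner loops of any single task.

    #include <future>
    #include <iostream>
    #include <string>

    // Hypothetical stand-ins for independent, modular services.
    std::string fetch_orders()    { return "orders";    }
    std::string fetch_inventory() { return "inventory"; }

    int main() {
        // Each task runs concurrently on its own thread; neither is internally parallel.
        auto orders    = std::async(std::launch::async, fetch_orders);
        auto inventory = std::async(std::launch::async, fetch_inventory);

        // Combine results once both complete.
        std::cout << orders.get() << " + " << inventory.get() << "\n";
    }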

Tough Competition

In addition to the corporate funding, the University of Illinois will kick in another $8 million, and UC Berkeley has applied for an additional $7 million in state funds.

The two universities hosting the new Universal Parallel Computing Research Centers were selected from among 25 top-tier institutions involved in parallel computing research.

David Patterson, professor of computer science and a pioneering expert in computer architecture, will head the UC Berkeley facility, which will be staffed by 14 members of the university’s faculty and 50 doctoral and postdoctoral researchers.

Marc Snir, professor of computer science, and Wen-Mei Hwu, professor of electrical and computer engineering, will jointly head the Illinois facility, which will have 20 other faculty members and 26 graduate students and researchers.

1 Comment

  • Nice article. Yes, the future is parallel. So why are we still using a computing model that was never intended to be a parallel programming model? Is it any wonder that we are in a crisis? One day soon, the computer industry will realize that, 150 years after Charles Babbage came up with his idea of a general purpose sequential computer, it is time to move on and change to a new model. The industry will be dragged kicking and screaming into the 21st century. Multithreading is not part of the future of computing, wishful thinking on the part of Intel and AMD notwithstanding. To find out why, check out the link below or Google "Nightmare on Core Street":
    http://rebelscience.blogspot.com/2008/03/nightmare-on-core-street.html
