Last week, I talked about the cell processor expected from Sony and IBM [Paul Murphy, “Fast, Faster and IBM’s PlayStation 3 Processor,” LinuxInsider, June 17, 2004]. This week I want to think out loud about what happens in the industry if Toshiba launches a PC based on this processor into the Asian market and IBM promptly follows suit with a series aimed at the American and European markets. Such a machine would run Linux, be compatible with most Linux software, and come with a subscription license to a suite of IBM software built around Lotus Workspace.
The base model would come with one processor assembly consisting of a single PowerPC-derived core with eight attached processors, delivering, at 4 GHz, roughly 10 times the potential floating-point performance of a Pentium 4 Extreme at 3.4 GHz along with equivalent or better general-purpose processing. More of that potential floating-point performance would be usable than you might expect, despite the change in software, because graphics and network-packet management would be handled by the primary processor and its attached processor array rather than by separate add-in hardware.
Obviously that approach simplifies graphics and packet handling while lowering manufacturing costs relative to more traditional designs in which these tasks are handled on add-in boards with their own processors and memory. More subtly, however, it also transfers more of the development burden to Sony- or IBM-developed interface management libraries that can be highly optimized to make effective use of the array architecture.
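To make the dispatch idea behind that array architecture concrete, here is a toy sketch. It is an illustration only: no Sony or IBM libraries for the cell processor had been published, so the function names and the split-and-dispatch structure below are assumptions, with ordinary threads standing in for the eight attached processors.

```python
# Toy model of the cell dispatch idea: a control "core" partitions a
# workload and farms the pieces out to eight "attached processor" workers.
# Illustrative only -- not the real Sony/IBM toolchain or its API.
from concurrent.futures import ThreadPoolExecutor

ATTACHED_PROCESSORS = 8  # the base model's attached-processor count

def spe_kernel(chunk):
    """Stand-in for a compute kernel running on one attached processor."""
    return sum(x * x for x in chunk)

def dispatch(data):
    """Control core: split the data into chunks and dispatch them
    to the worker array, then combine the partial results."""
    size = max(1, len(data) // ATTACHED_PROCESSORS)
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    with ThreadPoolExecutor(max_workers=ATTACHED_PROCESSORS) as pool:
        return sum(pool.map(spe_kernel, chunks))

if __name__ == "__main__":
    print(dispatch(list(range(1000))))
```

The point of the sketch is that only the dispatch step knows how many workers exist; the kernel itself does not. That is the property the column leans on: the same code could, in principle, scale from one processor assembly to a stack of them without change.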
Later models should come with two-, four- or eight-way on-chip CPU assemblies that can be stacked to form extremely powerful supercomputers on the grid model. Assuming that history does not repeat itself via an IBM decision to shoot itself in both feet to protect the status quo, as it did with the Future System in 1972, we can assume that machines like this are coming. The real questions aren't whether this will happen, but when, and with what impact on the market.
Predicting the Transformation
I’d like to make two outrageous predictions on this: first, that it will happen early next year, and second, that the Linux developer community will, virtually en masse, abandon the x86 in favor of the new machine.
The key reason for the new machine’s rapid acceptance is that it will be a far better PC than its Wintel competitor. Not only is this product likely to be faster, less expensive and more scalable than Wintel, but the critical factor is that the processor incorporates key RISC design ideas and thus inherits none of the security and bandwidth limitations of the x86 architecture.
From the technical perspective, furthermore, the cellular dispatch programming model offers Linux developers real benefits that go beyond simply being different or more secure. PC hacks, for example, should run unchanged on massively parallel supercomputers, while desktop PCs will function as at least equal members in network games aimed at the PlayStation market.
Taken together, these factors should make it a must-have machine both for developers and users. If that happens, it should be obvious that IBM’s success will drive tremendous gains in Linux desktop market share, but what’s less obvious is that it will also kill further development of Linux on the x86.
That will happen mainly because compatibility will be a one-way street with software from the x86 world moving easily to the cell environment — but the x86 unable to run software written either to the new graphics model or to the grid approach itself.
Cell is a technical winner but, more importantly, it can also be a strategic winner for IBM. IBM has been a bit player in the x86 world essentially since its introduction, losing money on nearly every Intel product it has sold since the late eighties and being essentially forced out of the desktop OS and applications markets by Microsoft.
As a result, today’s Wintel PC ecosystem bypasses IBM. The PC is fundamentally designed by Intel, made by contractors in places like Taiwan and South Korea, brought to functionality through Microsoft software, and sold into the English-speaking markets predominantly by Dell and HP.
The Wintel oligopoly looks strong, but is extremely vulnerable because of Intel’s failure to deliver on Itanium. Without a replacement for the x86, Wintel has no place to go but down — and neither IBM nor Sun, the two companies with technically successful CPU strategies in the works, have anything to lose by accelerating that process.
From IBM’s perspective, remember, just breaking the Wintel oligopoly wouldn’t be a good idea if the new device cannibalized its own existing markets or created too great an imbalance with existing products. That’s what’s happening to Apple, for example, where IBM promised Steve Jobs a 3-GHz G5 by mid 2004 but won’t deliver because doing so would create a problem for its own rollout of much slower PowerPC-based gear to its customers.
Programming Paradigm Shift
In the case of the cell-processor-based PC, however, IBM isn’t going after its own markets. It’s going after Wintel’s. Equally importantly, the requirement that developers adopt a new programming paradigm for it acts as a firewall that will protect IBM’s traditional markets for years after the introduction of the new machine.
As a result, IBM has much to gain and very little to lose by challenging for desktop PC supremacy.
The Linux community is in a similar position with respect to x86-based products. The x86 was adopted mainly because it was inexpensive and widely used, not because it has ever been anything better than third-rate.
The new IBM desktop, in contrast, will also be inexpensive and widely used, but at the top of its class in terms of design and performance with none of the security issues that plague the x86. It would be astonishing, therefore, if the majority of noncommercial developers don’t massively move to it — leaving Lintel, or Linux on Intel, the technology equivalent of a dead man walking.
See “Fast, Faster and IBM’s PlayStation 3 Processor” and “Grid vs. SMP: The Empire Tries Again” for additional coverage on this topic by Paul Murphy…
Paul Murphy, a LinuxInsider columnist, wrote and published The Unix Guide to Defenestration. Murphy is a 20-year veteran of the IT consulting industry, specializing in Unix and Unix-related management issues.
Your article has good points, but the obsolete-x86-architecture argument was also used in 1995, when AIM went with a much better RISC design (PowerPC) but failed to deliver the much-promised performance advantage.
The performance numbers you show are very impressive, but will they be real? I hope so (after all, I’m writing this on a PowerMac running Linux 🙂 ), but time will tell.