Last week, I was at the first GPU developers' conference, put on by Nvidia, along with around 1,500 people trying to change the future of computing. What was both troubling and amazing was how often people told me about things they had once been told were impossible that they now do as a matter of course, all because of this change.
Given that the technology industry came to be as a result of people trying and doing the impossible, you would think we would see more of this. Unfortunately, with the possible exception of Apple, few companies have attempted those long shots for much of this decade. I wondered whether this event was heralding a new golden age of computing.
This week, my product of the week is OnLive, a service that may define doing the impossible in a few short months.
The Aging Technology Market
I’ve been thinking a lot about the market over the last few weeks and remembering when things were much more exciting. It often seemed that we were constantly being surprised by how amazing new hardware could be. In a few short years, computers went from monsters only the largest companies could afford, to machines on our desks at work, to something we could actually buy for ourselves.
Granted, much of it was pretty much butt-ugly, but it did amazing things — and games went from being lots of text and even more imagination to first-person shooters and a lot more fun. Then things seemed to slow down a lot. After Windows 95, Microsoft and the PC companies got more and more focused on corporate customers, and the excitement seemed to drain out of the market.
For much of this decade, we’ve stopped looking forward to getting a new PC from our companies. There just didn’t seem to be that much point in getting a new one; we weren’t really using the performance we had, and the pain of migrating to a new computer simply didn’t justify the marginal benefit we got from it.
We were running out of excitement, and the market desperately needed an infusion because it was getting really dull.
What GPU computing does is shift applications from the CPU to the GPU and from a largely serial process to a massively parallel one. It isn’t easy, but the result of this transition is amazing. I spent most of last week hearing story after story about firms that were using this new development process to do medical, geological and other scientific research that used to take weeks and months — and required booking time on supercomputers — in hours and on their desktops.
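The serial-to-parallel shift described above can be sketched in a few lines. This is a toy illustration, not Nvidia's API: it uses NumPy's array model as a stand-in for the GPU's thousands of threads, applying one operation to every element at once instead of looping over them one at a time.

```python
# A hedged sketch of the serial-vs-parallel programming models.
# NumPy vectorization here stands in for true GPU data parallelism.
import numpy as np

def brightness_serial(pixels, gain):
    """CPU-style: visit one pixel at a time, in order."""
    out = []
    for p in pixels:
        out.append(min(255, p * gain))
    return out

def brightness_parallel(pixels, gain):
    """GPU-style: express one operation over all pixels at once."""
    return np.minimum(255, np.asarray(pixels) * gain)

pixels = [10, 100, 200]
# Both formulations compute the same result: [20, 200, 255]
assert brightness_serial(pixels, 2) == [20, 200, 255]
assert brightness_parallel(pixels, 2).tolist() == [20, 200, 255]
```

The hard part the conference speakers described is exactly this rewrite: restructuring an algorithm so every element can be processed independently.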
This is the kind of advancement that can move an industry or a culture in large steps, because the rate of change is accelerated in a number of critical areas at once. Then, often seemingly suddenly, the combined impact of all the changes results in a gestalt that causes the affected group to perceive the world differently.
It is this kind of thing that can overcome what we often think of as “impossible,” because people who are too ignorant to know they can’t do something end up doing it, and we actually could use more of that kind of ignorance today.
Doing the Impossible
I ran a panel at the event, and we had a number of speakers. Three stood out with personal stories of how their companies were using GPU computing to change the world. The third is my product of the week, but the first two were Adobe and MotionDSP.
Simon Hayhurst from Adobe spoke about the massive improvement being able to use the GPU meant for him and his team. You see, for video editing, you typically have to batch the work — and it can take hours or days. People become afraid of taking risks and making mistakes, because mistakes can add weeks to a project and result in missed deadlines.
However, now they can edit in real-time, and the technical staff has to relearn how to be creative and take risks — because now they can. If you can correct mistakes in real-time, there is no reason to be overly cautious. In addition, the cost of the technology is dropping down to the point where, once again, the technology that only large companies could afford can be enjoyed by individuals. This could, and should, result in some great films from storytellers who wouldn’t have been able to bring their ideas to market otherwise.
Sean Varah of MotionDSP told us how his technology was being used to protect lives and property. MotionDSP is used by the military and law enforcement to enhance low-quality videos from a variety of sources, helping to solve crimes and protect soldiers. You’ve likely seen fictional technology on “CSI” that — using movie magic — takes low-quality ATM and traffic-signal videos and pulls license plate numbers to identify criminals. Well, this is what MotionDSP does for real, and the same technology is used to enhance military surveillance videos in time to protect armed forces at risk. This couldn’t be done without GPU computing.
These were just two examples. The entire event was peppered with example after example of programs and applications doing in real time what previously could not be done at all, either because it took too long or because the hardware was unavailable.
Wrapping Up: Massive Change
The last time we had this much excitement in the technology industry was in its infancy, and companies like Apple and Microsoft were trying to get developers excited about the products they were bringing to market.
This, the first GPU developers’ conference, appears to be heralding a resurgence of excitement and innovation for an industry that desperately needs it.
Product of the Week: OnLive – Making The Impossible Possible
Perhaps the most revolutionary product at the GPU conference was OnLive, which potentially could change the face of computing as we know it. The most amazing part of the OnLive story is the number of people I know who believe the platform is impossible, even though it has been vetted by companies like AT&T and Time Warner, which have invested millions in it.
OnLive represents perhaps the most forward-looking of the cloud applications that are attacking — at its very foundation — the belief that high-performance visual computing can’t be done on the Internet.
If you haven’t read about OnLive, it is a service, currently in beta, that provides high-end games on demand via the Web. The games are accessible through a very small and inexpensive gaming console or a light application that will run on a low-performance PC.
The promise of the platform is that it gives game developers a more aggressive, consistent technology target than either PCs or game consoles, so they can push the performance envelope on their games without worrying about whether people can afford the hardware.
This isn’t the only promise, though, because once you provide nearly unlimited performance for one thing, you can theoretically provide it for all things. That could revolutionize high-performance computing as we know it, making it available to all of us for a low monthly charge and only when we need it.
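The thin-client model behind this promise can be sketched as a toy round trip. This is an assumption-laden model, not OnLive's actual protocol: the point is only that the client ships input events upstream and displays the frames that come back, so all game state and rendering live on the server.

```python
# Toy model of a cloud-gaming round trip (not OnLive's real protocol).
# The "frame" is a string standing in for an encoded video frame.

def server_frame(game_state, user_input):
    """Server side: advance the game and 'render' a frame."""
    game_state["x"] += user_input.get("move", 0)
    return f"frame: player at x={game_state['x']}"

def client_session(inputs):
    """Client side: send inputs, display whatever frames come back."""
    state = {"x": 0}   # game state lives on the server, never the client
    return [server_frame(state, i) for i in inputs]

frames = client_session([{"move": 1}, {"move": 2}])
# The client never computed anything; it only displayed:
# ["frame: player at x=1", "frame: player at x=3"]
```

Because the client does no rendering, the same loop serves a set-top box, a weak PC, or a phone, which is what makes the hardware nearly irrelevant.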
This would not only impact what we do and how much we pay for it, it would also impact electronic waste and energy use, and result in systems even smaller and more appliance-like than those we have now. For instance, if you think the games on the iPhone are great now, imagine hooking that phone up to the power of a next-generation high-end gaming computer. The result would be mind-boggling. Because OnLive is doing the impossible, and this column is about doing the impossible, it is a natural for product of the week — even though it is only in beta.
Rob Enderle is a TechNewsWorld columnist and the principal analyst for the Enderle Group, a consultancy that focuses on personal technology products and trends.
There are reasons that people don’t think things like OnLive will live up to their promise. Two of them specifically:
The first one uses massive blade servers to provide real-time operation of a game world (and I do mean **one world**, unlike most MMORPGs) in which there can be as many as 45,000 simultaneous users. All of the power needed to do this is already being swallowed up handling user data and processes, which is why you *must* have a GPU on the client end to handle the video. How exactly do you generate all of that on the server end for 45,000+ people?
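The scale problem the question above raises can be made concrete with back-of-envelope arithmetic. Every number here except the commenter's 45,000 figure is a hypothetical assumption, not a fact about any real service:

```python
# Back-of-envelope only: how many server GPUs would cloud rendering need?
# 45,000 is the commenter's concurrency figure; streams_per_gpu is an
# assumed number of simultaneous game streams one server GPU can render.
concurrent_users = 45_000
streams_per_gpu = 4                        # hypothetical

# Ceiling division: every user needs a rendered stream.
gpus_needed = -(-concurrent_users // streams_per_gpu)
# gpus_needed == 11250
```

Even under generous assumptions the server fleet is enormous, which is the heart of the commenter's skepticism.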
Second Life is even scarier. In it you have custom script execution, possibly for every object; physics calculations, possibly for every object; and millions of users, ***but*** you can’t have more than about 10,000 objects in a sim, and maybe 30 people. In theory, you can have as many as 40 people in one, but the cost of running this is so high that it drags the performance of everything needed to make it work at all down to almost nothing. This might be closer to what cloud stuff can do, but how do you generate all the calculations for a 100 percent dynamic system, with user-made content, for 40 people (50-60, if they ever got it running like it did before some of the newer updates), using this sort of thing?
Maybe it’s not impossible, but I would hate to be the guy trying to figure out how to manage it, or looking at the price tag for the number of separate servers needed to provide a smooth system. A single server with a client-end GPU for the video currently runs fairly badly for things like Second Life, though pretty well for Eve, which has about 50 times less data processing to do, since all it cares about is "where" you are in the "region" you are in, which region that is, and some general things like your inventory, skills and cash. The difference between these two types of games is **huge**, and neither of them is World of Warcraft, which doesn’t even try to allow you more than 40-50 people in a single "region" at one time.
I will admit that maybe 60 percent of games could do this. But they are not the cutting-edge games at all, and anyone trying to make the next great RPG or user-content universe, or anything like these, is likely to find themselves relying on the hardware on the user end, not the server end, since the server end will already be using every scrap of computing power it has just handling the load of everything happening on it, a load that simply doesn’t exist in those other games.
But, heh... if you don’t mind playing the 60 percent that do work via things like OnLive, while being clueless about why better stuff can’t, or waiting 10 years for someone to build GPU-style processors for handling this (which are not actually GPUs like a home computer would have in it), then by all means, go with it. I just don’t see how the innovation of the most bleeding-edge games is actually met by the requirements of a cloud system that handles everything, including the video. If this were feasible for them, they would be doing it, especially SL, which, frankly, **badly** needs something to improve the situation (though there the problem is 90 percent server-side, with data handling, not on the client end, with the graphics).