There’s a new bit of conventional wisdom regarding the future of computing, and it’s gathering momentum in some corners of the business world. Advocates of this particular piece of wisdom argue that the best way to reduce the cost and complexity of corporate computing for the data centers of the future is to adopt a single microprocessor design, Intel’s x86, and two computer operating systems, Microsoft’s Windows and the Linux open source program.
A corollary to this idea is that corporations should farm out all of their computing tasks to cloud service providers who operate vast data centers equipped with tens of thousands of those commodity-type PC servers.
This “wisdom” is wrong.
One of the core strengths of the computing industry over the years has been its ability to accommodate a tremendous diversity of new ideas and technologies. For better or for worse, the computing industry is driven by dreams of invention and entrepreneurship — of coming up with better ideas and better solutions for the world’s problems.
These impulses have spawned generations of technological advances that range from the early mainframes and mini-computers to the PC, the Internet and social networking. And we can be confident that entrepreneurship and innovation will continue to drive many more advances in the decades to come.
With this in mind, it makes no sense to try to freeze progress and declare that the world is going to need just one kind of computer from now on.
Public and Private
At a time when some companies are touting the virtues of commodity-style servers, it may be surprising to learn that embracing diversity can actually be less expensive. Counterintuitive as it seems, companies can save money, both on acquiring servers and on powering them, by building a data center that pairs mainframes with blade servers under centralized management, provided they take the right approach.
Similarly, and much like the argument in favor of commodity computing, the prevailing idea that all corporate computing will shift entirely to the public cloud ignores some fundamental facts. While the public cloud often benefits companies and end-users alike, there remain several common scenarios in which it simply isn’t a reasonable option.
For starters, many companies will want to retain control over their most critical computing tasks. That may strike some as an antiquated notion, but the desire to own critical workloads is not likely to fade anytime soon.
In other common scenarios, regulations will prevent the commingling in public clouds of data that may contain sensitive personal information.
As corporate cloud computing continues to grow and take shape, we can all expect more regulations governing how we store and access certain types or categories of data. In fact, it’s likely that a decade from now, a majority of the larger enterprise operations will continue to handle more than half of their computing tasks internally, rather than farming them out to cloud service providers.
That’s not to say that cloud computing will wane in popularity; it clearly won’t. Rather, companies will continue to be judicious about the sensitivity of their computing operations and the data they handle.
In conclusion, PC servers are, and will remain for the foreseeable future, an important ingredient in corporate and enterprise computing. Public clouds, likewise, will be the right solution for many applications at many companies. But while there’s merit in both, neither one, nor both together, is sufficient.