When we last left off in this series, we were talking about some of the issues facing modern corporations as they look to overcome their business challenges and capitalize on opportunities through the use of computing.
Keeping those issues of cost, interoperability, and energy usage in mind, we can now lay out a vision for the future of computing.
The primary goals of today’s sophisticated computing leaders are to manage complexity, reduce costs, and better enable organizations to reap insights from all of the new sources of information. To achieve these goals, both the producers of technology and its consumers need to follow a set of principles.
First, because the work of business is rapidly evolving, computing systems must be deployed that fit the new tasks at hand.
Second, because new technologies are constantly being invented and new sources of data are emerging, it’s important to manage all of that diversity in a holistic manner.
Lastly, because businesses demand speed and convenience, computing services should be delivered in ways that make them easily accessible and consumable.
So it’s necessary to use technology that fits the job, is more manageable, and is easier to use. These are the three core principles for building a more successful enterprise.
This vision holds true whether the data center where the work is done is owned by a corporation or government agency, operated by an outsourcer, or run by a provider of cloud computing services shared by dozens or even hundreds of companies.
Balancing Centralization and Individuation
One theme that all three principles have in common is sharing. In the early days of computing, everything was done centrally in the original data centers, those so-called “glass houses” that were tightly controlled by a small fraternity of highly technical experts. Since then, computing has been on a long march toward democratization, putting more power in the hands of individuals and making technology easier to use.
But along the way, the effort to empower individuals and individual business units led to a situation where each has its own dedicated computing resources. That’s the computing equivalent of suburban sprawl in a city with no zoning laws. It’s too much to manage and too costly.
Organizations need a new approach to computing that makes it possible to employ shared resources while at the same time giving people the information and tools they need to do their jobs wherever they may be and whatever device they choose.
Think of the data center in the future not as a vast array of different types of machines and chunks of software, but instead as a single computing system encompassing processing, memory, storage, networking and all of the software and services that go with it. Conceptually and operationally, it’s one large machine.
In these data centers, it will be possible to fluidly mix and match IT components to fit them to particular types of tasks. We’ll be able to manage the entire system of resources centrally so the components can be shared and used highly efficiently. Ultimately, the system will become more self-aware, almost like a human brain: understanding its own capabilities, figuring out the best way to complete the specific tasks it has been assigned, and dynamically shifting to respond as demands change or problems occur.
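The idea of one centrally managed pool that fits tasks to shared resources, and shifts work around when problems occur, can be illustrated with a toy sketch. All of the names and capacity numbers below are invented for illustration; this is a minimal model of the behavior described above, not any specific product:

```python
class ResourcePool:
    """A toy 'one large machine': a shared pool that places tasks on
    whichever resource fits best, and re-places them after a failure."""

    def __init__(self, capacities):
        self.free = dict(capacities)  # resource name -> free units
        self.placement = {}           # task name -> (resource, demand)

    def assign(self, task, demand):
        # Greedy best-fit: choose the resource whose free capacity most
        # tightly fits the demand, to reduce fragmentation of the pool.
        fits = {r: c for r, c in self.free.items() if c >= demand}
        if not fits:
            raise RuntimeError(f"no capacity left for {task}")
        target = min(fits, key=fits.get)
        self.free[target] -= demand
        self.placement[task] = (target, demand)
        return target

    def fail(self, resource):
        # Simulate a resource failure: evict its tasks and re-place them
        # elsewhere, the way a self-managing system would respond.
        evicted = [(t, d) for t, (r, d) in self.placement.items() if r == resource]
        for t, _ in evicted:
            del self.placement[t]
        del self.free[resource]
        return [(t, self.assign(t, d)) for t, d in evicted]


pool = ResourcePool({"serverA": 16, "serverB": 6})
pool.assign("web", 4)          # best fit: serverB
pool.assign("batch", 8)        # only serverA has room
moved = pool.fail("serverB")   # "web" is transparently re-placed on serverA
```

The point of the sketch is that placement decisions and failure recovery happen in one place, against one shared view of capacity, rather than inside each dedicated silo.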
The Flow of Data
This flexibility is especially important because today, no company is an island — and neither are its data centers. Corporations increasingly reach beyond their borders to share data and collaborate with other companies, governments, and universities. They gather information from myriad sources, including sensors of all kinds, mobile devices, and customers. They increasingly blend internal computing activities with services provided by independent outfits. And they participate ever more actively in the world of social networking.
The data center is no longer just managing things the corporation controls. While the computing facility of the future will be more powerful, smarter, and more energy efficient than last year’s model, it will also be a node in a vast and powerful network of information and computing capabilities that extends far beyond the individual enterprise.
In fact, it’s helpful to think of the data center of the future not as a physical place but as a virtual function. The “glass house” concept has been shattered. The data center will be a coordination system that integrates, monitors and manages all of an organization’s digitized equipment and operations — everything from overseeing servers, telephones, and PCs to managing fleets of vehicles and security systems in buildings. For specific industries, that means managing such things as cell towers, water mains, railway cars and manufacturing gear. It’s a vast universe of instrumented and interconnected equipment, a new, widely distributed business infrastructure.
As companies set out to change the way they use computing, they first have to get their technology houses in order. Adopting industry standards is essential, because interoperability is what allows an organization to continually incorporate new forms of data from new sources, new analysis methods, and new devices. Companies also need to aggressively consolidate data center locations and use virtualization software to get better capacity utilization out of servers, networking, and storage devices.
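To make the consolidation point concrete, here is a hedged sketch: given made-up per-workload peak loads, a classic first-fit-decreasing bin-packing pass shows how virtualization lets workloads that once each occupied a dedicated server share far fewer physical machines. The workload names and numbers are purely illustrative:

```python
def consolidate(loads, host_capacity):
    """Pack workloads onto as few hosts as practical using
    first-fit decreasing, a standard bin-packing heuristic."""
    hosts = []  # each host: [remaining_capacity, [workload names]]
    for name, load in sorted(loads.items(), key=lambda kv: -kv[1]):
        for host in hosts:
            if host[0] >= load:          # first host with room wins
                host[0] -= load
                host[1].append(name)
                break
        else:                            # no host fits: start a new one
            hosts.append([host_capacity - load, [name]])
    return hosts


# Ten workloads, each formerly on its own lightly used server,
# consolidate onto just two shared hosts of capacity 100.
loads = {"crm": 30, "mail": 25, "web": 20, "erp": 20, "hr": 15,
         "dev": 15, "test": 10, "wiki": 10, "dns": 5, "logs": 5}
hosts = consolidate(loads, host_capacity=100)
```

Real virtualization managers must also account for memory, I/O, and failover headroom, but the underlying economics are the same: shared capacity beats dedicated sprawl.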
Organizations also have to begin to think differently about security: To deal with the new sources of real-time information, the interconnectedness of institutions, and all the ways people connect with networks, security has to be designed into computers, software, and management systems from the start — not added on later.