There is a not-so-subtle shift taking place in the world of IT, and the culprit is cloud computing. For the past two years, media and vendors alike have proclaimed the cloud to be the next wave of IT — promising to change computing as we know it. The reactions of many IT veterans who have lived through similar marketing barrages in the past have ranged from a cautious wait-and-see attitude to out-and-out skepticism.
We’ve now reached a point where cloud success stories have steadily begun to appear, the initial generation of cloud-enablement tools has been released, and an understanding of appropriate cloud use cases has emerged. The result is that cloud is now becoming not just a feasible option but, in the minds of many, an inevitable one.
Companies are now working to define the role that an infrastructure cloud will play in their IT strategy and actively taking steps to make it a reality. In short, if you were a cloud skeptic, it’s time to dip your toe back in the water.
One of the challenges associated with formulating a cloud strategy is that the term “cloud” applies to a broad range of technologies and deployment models. If you’ve investigated cloud at all, you know about SaaS (Software), PaaS (Platform), IaaS (Infrastructure), and possibly several other “as a Service” options. It’s entirely plausible that some combination of all of these options will co-exist in an organization’s IT infrastructure going forward.
An even more critical consideration for many organizations is the question about location and control — where the cloud should reside and who should control its underlying elements. In other words, the question of public versus private cloud.
While some have already formed an opinion based on perceptions of security and performance, there have been new developments based on real-world usage that make this question worth reconsidering.
Further, as with the various deployment models, there is growing evidence that it makes sense for organizations to consider a dual public-private approach. Following is an examination of some of the factors that might influence such an approach.
The Case for the Public Cloud
To cloud purists, the public cloud IS the cloud, and all other approaches are pretenders. A major driver for attraction to the cloud concept is its inherent elasticity — the ability to expand quickly based on surges in demand, and likewise to easily contract if demand should fall. This means avoiding the traditional angst and up-front capital expense of “if we build it, will they come?” and instead offering a “just in time” approach to IT resource capacity.
The public cloud, for the first time, makes this a reality. With services like Amazon EC2 and others, IT infrastructure capacity can truly become a “pay by the drink” proposition, avoiding the financial straitjacket of capital expenditure. It addresses a fundamental shortcoming of traditional IT: the inability to forecast demand accurately, and the overprovisioning that results.
The private cloud, critics argue, lacks this fundamental attribute. The fact that capital investment in infrastructure equipment must still occur means that much of this promised elasticity of the cloud is lost. They suggest that the private cloud hardly differs from traditional IT and that beyond the consolidation gains of server virtualization, it offers little in the way of significant benefit.
The Case for the Private Cloud
In addition to elasticity, the cloud model promises greater agility and responsiveness, a catalog of defined services, and a clearly stated set of costs based on a resource usage model. Not coincidentally, the cloud offers a list of capabilities that business units have been requesting of IT for years.
A private cloud can provide these features. The reality is that there is nothing that inherently precludes IT from at least beginning to deliver on these capabilities today. There are two primary inhibitors limiting cloud adoption:
- An organizational and political mindset largely derived from siloed technology limitations of the past;
- Legacy applications that are not designed or optimized for the cloud.
Regarding the first point, the same technologies that enable the public cloud — virtualized servers, storage and networking — offer improved efficiencies and service levels in a private environment. This is possible while maintaining the degree of management and data control not available in the public cloud — a capability that organizations may legitimately require.
The necessary technology is already becoming ubiquitous in data centers, and the missing ingredient is the adoption of the organizational and management approach of a cloud service provider. While such a transformation is non-trivial, it is becoming possible thanks to evolving suites of cloud management tools from VMware, Nimbula, NewScale and a slew of others that provide many of the necessary operational and customer management components required to function as a service provider.
The second hurdle relating to applications is somewhat more challenging. To this point, there have been several success stories of organizations that have moved to the public cloud. Most notable among these is Netflix, which is leveraging the Amazon EC2 cloud to deliver its rapidly growing video streaming service.
The move to the public cloud has enhanced both availability and scalability while reducing complexity, according to Netflix. However, to realize these benefits, Netflix essentially rewrote its application to adapt to a public cloud environment — a substantial undertaking.
While the Netflix experience offers a glimpse into the possibilities offered by the cloud, it underscores what others have echoed — the public cloud is primarily for green field opportunities where an application is designed to handle an environment with variable latency, disconnects and other traits that may cause traditional applications to fail.
The reality is that many organizations lack both the resources and business justification for rewriting a large number of applications. So, while green field candidates may exist — and it would certainly be wise to plan new application development with the public cloud in mind — a large percentage of IT will not be able to migrate to the public cloud.
However, a significant number of these applications may be suitable for a private cloud. Server virtualization has already shown that running an application in a virtual environment not only improves efficiency, but also enhances resiliency and recoverability. Raising this to a larger scale via a private cloud offers the opportunity to compound these benefits, increase flexibility, and provide greater cost visibility while incorporating a versatile service management capability that can handle larger-scale and more varied deployments.
The Hybrid Option
There is another option for organizations that may not be ready for the public cloud but would like to realize some of its benefits. Recent studies have indicated that it can be cost-effective to leverage the public cloud as an extension of the private cloud in a Virtual Private Cloud (VPC), or hybrid, approach.
Using this model, the public cloud can serve as an addition to a private cloud — within the same management sphere — to provide resource capacity for special projects, seasonal peak loads, and other unplanned rises in demand.
Essentially, the hybrid cloud can be thought of as a data center capacity optimization option. Rather than being grounded in the common practice of building out data center and IT infrastructure capacity to meet anticipated peak demand levels, a hybrid model would be based on building to support a “typical” or “average” level of demand and then “cloudbursting” excess demand via a VPC to a cloud service provider.
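The trade-off described above can be sketched numerically. The following is a minimal illustration, not a real cost model: the server counts, monthly costs, and demand profile are all hypothetical assumptions chosen only to show why building to average demand and bursting the excess can beat building to peak.

```python
# Illustrative cloudbursting cost comparison. All figures below are
# hypothetical assumptions, not drawn from any provider's price list.

OWNED_SERVER_MONTHLY_COST = 300.0   # amortized capex + operations, per owned server
BURST_SERVER_MONTHLY_COST = 450.0   # equivalent on-demand public-cloud capacity

# Hypothetical monthly demand profile, in servers required.
monthly_demand = [80, 85, 90, 95, 100, 140, 180, 150, 100, 90, 85, 80]

def build_to_peak_cost(demand):
    """Traditional approach: provision owned capacity for the peak month."""
    peak = max(demand)
    return peak * OWNED_SERVER_MONTHLY_COST * len(demand)

def hybrid_cost(demand):
    """Hybrid approach: own enough for average demand, burst the excess."""
    baseline = sum(demand) / len(demand)              # owned capacity level
    owned = baseline * OWNED_SERVER_MONTHLY_COST * len(demand)
    burst = sum(max(0, d - baseline) for d in demand) * BURST_SERVER_MONTHLY_COST
    return owned + burst

print(f"Build-to-peak: ${build_to_peak_cost(monthly_demand):,.0f}")
print(f"Hybrid:        ${hybrid_cost(monthly_demand):,.0f}")
```

Even with burst capacity priced well above owned capacity, the hybrid total comes out lower here because the peak-sized owned fleet sits idle most of the year; the actual break-even depends entirely on the demand profile and the real price gap.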
This approach would result in a 24 percent reduction in IT spend compared to a legacy IT approach for a typical financial institution, according to a recent McKinsey study. Other environments could see even greater savings. For example, eBay is leveraging a cloudbursting strategy to reduce its data center server count from 2,000 to 800.
Decisions and Start Points
The transition to the cloud is a significant strategic undertaking. While buying and setting up a few virtual servers in a public cloud can be done in minutes with a credit card, designing and implementing an integrated strategy that provides the desired benefits while maintaining manageability and corporate governance is a major undertaking.
This requires an up-front plan that considers the existing and foreseeable application portfolio, including an understanding of application complexity, interdependency and workload variability. For the private cloud, this also requires an infrastructure designed for agility and scalability.
In addition, for any cloud — public, private or hybrid — a service-based mindset must be adopted. Specifically, this means harnessing some essential resources and capabilities: a catalog of standard service offerings; the ability to measure and report — both to customers regarding SLAs and operationally, as needed; the ability to process and deliver service requests in an automated manner; and a usage-based billing capability.
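Two of these capabilities, the service catalog and usage-based billing, can be sketched together. This is a minimal illustration under assumed data: the service names, units, and prices are invented for the example and do not reflect any real offering.

```python
# A minimal sketch of a service catalog with usage-based billing.
# Offerings, units, and unit prices are hypothetical illustrations.

catalog = {
    "small-vm":      {"unit": "instance-hour", "price": 0.05},
    "large-vm":      {"unit": "instance-hour", "price": 0.20},
    "block-storage": {"unit": "GB-month",      "price": 0.10},
}

def bill(usage):
    """Turn metered usage {service: quantity} into per-service charges and a total."""
    line_items = {}
    for service, quantity in usage.items():
        offering = catalog[service]   # only cataloged services are billable
        line_items[service] = quantity * offering["price"]
    return line_items, sum(line_items.values())

# One business unit's month: a small VM running continuously plus 500 GB of storage.
items, total = bill({"small-vm": 720, "block-storage": 500})
print(items, round(total, 2))
```

The point of the sketch is the discipline it implies: every chargeable service must appear in the catalog with a defined unit and price before anything can be metered or billed against it, which is exactly the service-provider mindset the text describes.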
Ultimately, IT becomes a manager, and in many cases a deliverer, of a portfolio of services, evolving beyond its traditional technology-centric role. This isn’t an overnight change. It begins with a solid understanding of the cloud landscape, then builds on work likely already under way in server virtualization, taking it to a much broader level.
It also requires planning with application and business teams to coordinate future directions. Together, functional teams can identify the most appropriate cloud options and direction, then create a plan to realize them.
The IT world is changing and the cloud genie is out of the bottle. Isn’t it time to take control?