As data center planners seek to improve performance and future-proof their investments, the networking leg on the infrastructure stool can no longer stand apart. Advances such as widespread virtualization, increased modularity, converged infrastructure and cloud computing are all forcing a rethinking of data center design.
And so the old rules of networking need to change: specialized, labor-intensive and homogeneous networking systems need to be brought into the total modern data center architecture. Networking's increasingly essential role in data center transformation (DCT) means it must stop being a speed bump and instead cut complexity while spurring adaptability and flexibility.
Networking must be better architected within — and not bolted onto — the DCT future. The networking-inclusive total architecture needs to accommodate the total usage patterns and requirements of both today and tomorrow — and with an emphasis on openness, security, flexibility and sustainability.
To learn more about how networking is changing, and how organizations can better architect networking into their data centers' future, BriefingsDirect assembled two executives from HP: Helen Tang, worldwide data center transformation solutions lead, and Jay Mellman, senior director of product marketing in the HP Networking Unit. The discussion is moderated by BriefingsDirect's Dana Gardner, Principal Analyst at Interarbor Solutions.
Listen to the podcast (34:30 minutes).
Here are some excerpts:
Helen Tang: As we all know, in 2010 most IT organizations are wrestling with the three Cs — reducing cost, reducing complexity, and tackling the problem of hitting the wall on capacity from a power, space and energy perspective.
The reason it’s happening is because IT is really stuck between two different forces. One is the decades of aging architecture, infrastructure and facilities they have inherited. The other side is that the business is demanding ever faster services and better improvements in their ability to meet requirements.
The confluence of that has really driven IT to … a series of integrated data center projects and technology initiatives that can take them from this aging, inherited architecture to an architecture that’s suited for tomorrow’s growth.
DCT … includes four things: consolidation, whether it’s infrastructure, facilities or application; virtualization and automation; continuity and sustainability, which address the energy efficiency aspect, as well as business continuity and disaster recovery; and last, but not least, converged infrastructure.
Networking actually plays in all these areas, because it is the connective tissue that enables IT to deliver services to the business. It’s very critical. In the past this market has been largely dominated by perhaps one vendor. That’s led to a challenge for customers, as they address the cost and complexity of this piece.
[With DCT] we’ve seen just tremendous cost reduction across the board. At HP, when we did our own DCT, we were able to save over a billion dollars a year. For some of our other customers, France Telecom for example, it was 22 million euros [US$28 million] in savings over three years — and it just goes on and on, both from an energy cost reduction, as well as the overall IT operational cost reductions.
Jay Mellman: Today’s architecture is very rigid in the networking space. It’s very complex with lots of specialized people and specialized knowledge. It’s very costly and, most importantly, it really doesn’t adapt to change.
The kind of change we see, as customers are able to move virtual machines around, is exactly the kind of thing we need in networking and don’t have. So there has been a dramatic change in what’s demanded of networking in a data center context.
Within the last couple of years … customers were telling us that there were so many changes happening in their environments, both at the edge of the network, but also in the data center, that they felt like they needed a new approach.
Look at the changes that have happened in the data center just in the last couple of years — the rise of virtualization and being able to actually take advantage of that effectively, the pressures on time to market in alignment with the business, and the increasing risk from security and the increasing need for compliance.
For example, there’s the sheer number of connections, as we went from single large servers to multiple racks of servers, and to multiple virtual machines for services — all of which need connectivity. We have different management constructs between servers, storage, and networking … that have been very difficult to deal with.
Tie all these together, and HP felt this is the right time [for a change]. The other thing is that these are problems that are being raised in the networking space, but they have direct linkage to how you would best solve the problem.
We’ve been in the business for 25 to 30 years, and we are the number two vendor in the industry, selling primarily at the edge. … We can now do a better job because we can actually bring the right engineering talent together and solve [networking bottlenecks] in an appropriate way. That balances the networking needs with what we can do with servers, storage, software, security, and power and cooling, because oftentimes the solution may be 90 percent networking, but it involves other pieces as well.
There are opportunities where we go from more than 210 different networking components required to serve a certain problem down to two modules. You can kind of see that’s a combination of consolidation, convergence, cost reduction and simplicity, all coming together.
We saw a real requirement from customers to come in and help them create more flexibility, drive risk down, improve time to service and take cost out of the system, so that we are not spending so much on maintenance and operation, and we can put that to more innovation and driving the business forward.
A couple of key rules drive this. The first is simplicity: the job of a network admin needs to be made as simple, and given as much automation and orchestration, as the jobs of sysadmins or SAN admins today.
The second is that we want to align networking more fully with the rest of the infrastructure, so that we can help customers deliver the service they need when they need it, to users in the way that they need it. That alignment is just a new model in the networking space.
Finally, we want to drive open systems, first of all because customers really appreciate that. They want standards and they want to have the ability to negotiate appropriately, and have the vendors compete on features, not on lock-in.
Open standards also allow customers to pick and choose different pieces of the architecture that work for them at different points in time. That allows them, even if they are going to work completely with HP, the flexibility and the feeling that we are not locking them in. What happens when we focus on open systems is that we increase innovation and we drive cost out of the system.
What we see are pressures in the data center, because of virtualization, business pressures, and rigidity, giving us an opportunity to come in with a value proposition that really mirrors what we’ve done for 25 years, which is to think about agility, to think about alignment with the rest of IT, and to think about openness and really bringing that to the networking arena for the first time.
For example, we have a product called “Virtual Connect,” which has a management concept called “Virtual Connect Enterprise Manager.” It allows the networking team and the server teams to work off the same pool of data. Once the networking team allocates connectivity, the server team can work within that pool, without having to always go back to the networking team and ask for the latest new IP address and new configurations.
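The shared-pool idea Mellman describes works roughly like this: the networking team pre-allocates a block of addresses and connection settings once, and the server team then draws from that pool on its own as machines are provisioned. Here is a minimal illustrative sketch of the concept in Python — the class and method names are hypothetical, not Virtual Connect Enterprise Manager's actual API:

```python
# Illustrative sketch of a shared connectivity pool: the networking team
# allocates a contiguous range once; the server team then claims profiles
# from it without another round-trip to networking. All names here are
# hypothetical, not Virtual Connect Enterprise Manager's real API.
from ipaddress import ip_address


class ConnectivityPool:
    def __init__(self, first_ip: str, size: int, vlan: int):
        # Networking team defines the pool once: an IP range plus a VLAN.
        start = ip_address(first_ip)
        self.free = [str(start + i) for i in range(size)]
        self.vlan = vlan
        self.assigned = {}  # server name -> IP

    def claim(self, server: str) -> dict:
        # Server team self-serves a profile from the pre-approved pool.
        if server in self.assigned:
            return {"server": server, "ip": self.assigned[server], "vlan": self.vlan}
        if not self.free:
            raise RuntimeError("pool exhausted: go back to the networking team")
        ip = self.free.pop(0)
        self.assigned[server] = ip
        return {"server": server, "ip": ip, "vlan": self.vlan}

    def release(self, server: str) -> None:
        # Returning a profile makes it reusable, e.g. when a workload moves.
        self.free.append(self.assigned.pop(server))


# Networking allocates once; servers claim as they are provisioned.
pool = ConnectivityPool("10.0.0.10", size=4, vlan=120)
profile = pool.claim("esx-host-01")
```

The point of the pattern is the division of labor: networking approves the range up front, so each individual server provisioning step needs no cross-team request.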
HP is really focused on how we bring the power of that orchestration, and the power of what we know about management, to allow these teams to work together without requiring them, in a sense, to speak the same language, when that’s often the most difficult thing that they have to do.
When we look at agility and the ability to improve time-to-service, we often see an order of magnitude or even two orders of magnitude [improvement] by taking a rollout process that might take months and turning it into hours or days.
With that kind of flexibility, you avoid the silos, not just in technology but also in the departments, such as requests flowing from the server and storage teams to the networking team. So there are huge improvements there, if we look at automation and risk. I also include security here.
It’s very critical, as part of these, that security be embedded in what we’re doing, and the network is a great agent for that. In terms of the kinds of automation, we can offer single panes of glass to understand the service delivery and very quickly be able to look at not only what’s going on in a silo, but look at actual flows that are happening, so that we can actually reduce the risk associated with delivering the services.
Finally, in terms of cost, we’re seeing — at the networking level specifically — reductions on the order of 30 percent to as high as 65 percent by moving to these new types of architectures and new types of approaches, specifically at the server edge, where we deal with virtualization.
HP has been recognizing that customers are increasingly not being judged on the quality of an individual silo. They’re being judged on their ability to deliver service, do that at a healthy cost point, and do that as the business needs it. That means that we’ve had to take an approach that is much more flexible. It’s under our banner of FlexFabric.
Tang: The traditional silos between servers and storage and networking are finally coming down. Technology has come to an inflection point. We’re able to deliver a single integrated system, where everything can be managed as a whole that delivers incredible simplicity and automation as well as significant reduction in the cost of ownership. …
Mellman: There are quite a few vendors out there who are saying that the future is all about cloud and the future is all about virtualization. That ignores the fact that the lion’s share of what’s in a data center still needs to be kept.
You want an architecture that supports that level of heterogeneity and may support different kinds of architectural precepts, depending on the type of business, the types of applications, and the type of pressures on that particular piece.
What HP has done is try to get a handle on what that future is going to look like, without prescribing that it has to be a particular way. We want to understand where these points of heterogeneity will be and what will be delivered by a private cloud, a public cloud, or more traditional methods, bring those together, and then net it down to architectural choices that make sense.
We realize that there will be a high degree of virtualization happening at the server edge, but there will also be a high degree of physical servers, especially for some big applications that may not be virtualized for a long time: Oracle, SAP, some of the Microsoft things. Even when they are, they are going to be done with potentially different virtualization technologies.
Even with a product like Virtual Connect, we want to make sure that we are supporting both physical and virtual server capabilities. With our Converged Network Adaptors, we want to support all potential networking connectivity, whether it’s Fibre Channel, iSCSI, Fibre Channel over Ethernet or server and data technology, so that we don’t have to lock customers into a particular point of view.
We recognize that most data centers are going to be fairly heterogeneous for quite a long time. So, the building blocks that we have, built on openness and built on being managed and secure, are designed to be flexible in terms of how a customer wants to architect.
It’s best having the customer just step back and say, “Where is my biggest pain point?” The nice thing with open systems is that you can generally address one of those, try it out, and start on that path. Start with a small workable project and get a good migration path toward full transformation.
Dana Gardner is president and principal analyst at Interarbor Solutions, which tracks trends, delivers forecasts and interprets the competitive landscape of enterprise applications and software infrastructure markets for clients. He also produces BriefingsDirect sponsored podcasts. Follow Dana Gardner on Twitter. Disclosure: HP sponsored this podcast.