Major portions of servers are now being virtualized, and that has provided an on-ramp to attaining data lifecycle benefits and efficiencies. At the same time, these advances are helping to manage complex data environments that consist of both physical and virtual systems.
What’s more, the elevation of data to the lifecycle efficiency level is also forcing a rethinking of the culture of data, of who owns data, and when, and who is responsible for managing it in a total lifecycle across all applications and uses.
Here to share insights on where the data availability market is going and how new techniques are being adopted to make the value of data ever greater, we’re joined by John Maxwell, Vice President of Product Management for Data Protection, at Quest Software. [Disclosure: Quest Software is a sponsor of BriefingsDirect podcasts.]
Listen to the podcast (37:20 minutes).
Here are some excerpts:
Dana Gardner: Let’s start at a high level. Why have virtualization and server virtualization become a catalyst to data modernization? Is this an unintended development or is this something that’s a natural evolution?
John Maxwell: I think it’s a natural evolution, and I don’t think it was even intended on the part of the two major hypervisor vendors, VMware and Microsoft with their Hyper-V. As we know, 5 or 10 years ago, virtualization was touted as a means to control IT costs and make better use of servers.
Utilization was in single digits, and with virtualization you could get it much higher. But the rampant success of virtualization impacted storage and the I/O where you store the data.
Upped the Ante
If you look at the announcements VMware made around vSphere 5 and storage, and the recent launch of Windows Server 2012 Hyper-V, where Microsoft upped the ante and added Fibre Channel support to its hypervisor, storage is at the center of the virtualization topic right now.
It brings a lot of opportunities to IT. Now, you can separate some of the choices you make, whether it has to do with the vendors that you choose or the types of storage, network-attached storage (NAS), shared storage and so forth. You can also make the storage a lot more economical with thin disk provisioning, for example.
There are a lot of opportunities out there that are going to allow companies to make better utilization of their storage just as they’ve done with their servers. It’s going to allow them to implement new technologies without necessarily having to go out and buy expensive proprietary hardware.
From our perspective, the richness of what the hypervisor vendors are providing in the form of APIs, new utilities, and things that we can call on and utilize, means there are a lot of really neat things we can do to protect data. Those didn’t exist in a physical environment.
It’s really good news overall. Again, the hypervisor vendors are focusing on storage and so are companies like Quest, when it comes to protecting that data.
Gardner: As we move towards that mixed environment, what is it about data that, at a high level, people need to think differently about? Is there a shift in the concept of data, when we move to virtualization at this level?
Maxwell: First of all, people shouldn’t get too complacent. We’ve seen people load up virtual disks, and one of the areas of focus at Quest, separate from data protection, is in the area of performance monitoring. That’s why we have tools that allow you to drill down and optimize your virtual environment from the virtual disks and how they’re laid out on the physical disks.
And even hypervisor vendors — I’m going to point back to Microsoft with Windows Server 2012 — are doing things to alleviate some of the performance problems people are going to have. At face value, your virtual disk environment looks very simple, but sometimes you don’t set it up or it’s not allocated for optimal performance or even recoverability.
Gardner: It’s coming around to the notion that when you set up your data and storage, you need to think not just for the moment for the application demands, but how that data is going to be utilized, backed up, recovered, and made available. Do you think that there’s a larger mentality that needs to go into data earlier on and by individuals who hadn’t been tasked with that sort of thought before?
See It Both Ways
Maxwell: I can see it both ways. At face value, virtualization makes it really easy to go out and allocate as many disks as you want. Vendors like Quest have put in place solutions that make it so that within a couple of mouse clicks, you can expose your environment, all your virtual machines (VMs) that are out there, and protect them pretty much instantaneously.
From that aspect, I don’t think there needs to be a lot of thought, as there was back in the physical days, of how you had to allocate storage for availability. A lot of it can be taken care of automatically, if you have the right software in place.
That said, a lot of people may have set themselves up for trouble if they haven't thought through disaster recovery (DR). When I say DR, I also mean failover of VMs and the like: how they could set up an environment that ensures availability of mission-critical applications.
For example, you wouldn’t want to put everything, all of your logical volumes, all your virtual volumes, on the same physical disk array. You might want to spread them out, or you might want the capability of replicating between different hypervisors, physical servers, or arrays.
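The placement concern Maxwell describes can be sketched as a simple single-point-of-failure check. The function and data layout below are purely illustrative, not a Quest or hypervisor API:

```python
# Hypothetical placement check: flag applications whose virtual volumes
# all sit on one physical array (a single point of failure).
def single_points_of_failure(placements):
    """placements maps an app name to the list of physical arrays
    holding its volumes. Returns apps with no spread at all."""
    return [app for app, arrays in placements.items()
            if len(set(arrays)) == 1]

layout = {
    "orders-db": ["array-1", "array-1"],  # everything on one array: risky
    "web-tier": ["array-1", "array-2"],   # spread out: survives one failure
}
print(single_points_of_failure(layout))  # ['orders-db']
```

A real environment would pull this inventory from the hypervisor's APIs rather than a hand-written dictionary, but the test is the same: no mission-critical application should depend on a single array.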
Gardner: I understand that you’ve conducted a survey to try to find out more about where the market is going and what the perceptions are in the market. Perhaps you could tell us a bit about the survey and some of the major findings.
Maxwell: One of the findings that I find most striking, since I have been following this for the past decade, is that our survey showed that 70 percent of organizations now consider at least 50 percent of their data mission critical.
That may sound ambiguous at first, because what is mission critical? But in the context of recoverability, it generally means data that has to be restored in less than an hour and/or can lose no more than an hour of updates from a recovery-point perspective.

This means that if I have a database, I can’t go back 24 hours. The furthest I can afford to go back is an hour before the point of loss, and in some cases, you can’t go back even a second. It really comes down to that window.
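The recovery-time and recovery-point windows described here can be expressed as a small classification rule. The function name and the one-hour thresholds below are a hypothetical sketch of the survey's definition, not Quest terminology:

```python
from datetime import timedelta

# rto = maximum tolerable time to restore service (recovery time objective)
# rpo = maximum tolerable window of lost data (recovery point objective)
def is_mission_critical(rto, rpo):
    """Classify data as mission critical if it must be restored within
    an hour and no more than an hour of updates may be lost."""
    one_hour = timedelta(hours=1)
    return rto <= one_hour and rpo <= one_hour

# A database that can lose only seconds of data is clearly mission critical.
print(is_mission_critical(timedelta(minutes=15), timedelta(seconds=5)))  # True
# A reporting archive restored from last night's tape is not.
print(is_mission_critical(timedelta(hours=24), timedelta(hours=24)))     # False
```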
I remember in the days of the mainframe, you’d say, “Well, it will take all day to restore this data, because you have tens or hundreds of tapes to do it.” Today, people expect everything to be back in minutes or seconds.
Terms Are Synonymous
The other thing that’s interesting is that data protection and the term backup are synonymous. It’s funny. We always talk about backup, but we don’t necessarily talk about recovery. Something that really stands out now from the survey is that recovery or recoverability has become a concern.
Gardner: We seem to have these large shifts in the market, one around virtualization of servers and storage and the implications of first mixed, and then perhaps a majority, or vast majority, of virtualized environments.
The second shift is the heightened requirements around higher levels of mission-critical allocation or designation for the data and then the need for much greater speed in recovering it.
Let’s unpack that a little bit. How do these fit together? What’s the relationship between moving towards higher levels of virtualization and being able to perhaps deliver on these requirements, and maybe even doing it with some economic benefit?
Maxwell: You have to look at a concept that we call tiered recovery. That’s driven by the importance now of replication in addition to traditional backup, and new technology such as continuous data protection and snapshots.
That gets to what I was mentioning earlier. Data protection and backup are synonymous, but it’s a generic term. A company has to look at which policies or which solutions to put in place to address the criticality of data, but then there is a cost associated with it.
For example, it’s really easy to say, “I’m going to mirror 100 percent of my data,” or “I’m going to do synchronous replication of my data,” but that would be very expensive from a cost perspective. In fact, it would probably be just about unattainable for most IT organizations.
Categorize Your Data
What you have to do is understand and categorize your data, and that’s one of the focuses of Quest. We’re introducing something this year called NetVault Extended Architecture (NetVault XA), which will allow you to protect your data based on policies, based on the importance of that data, and apply the correct solution, whether it’s replication, continuous data protection, traditional backup, snapshots, or a combination.
You can’t just do this blindly. You have got to understand what your data is. IT has to understand the business, and what’s critical, and choose the right solution for it.
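The tiered-recovery idea Maxwell outlines, matching each category of data to the least expensive technique that still meets its window, can be sketched as a lookup. The tier names, windows, and cost ordering below are illustrative assumptions, not NetVault XA's actual configuration:

```python
from datetime import timedelta

# Hypothetical protection tiers: each technique tolerates at most this
# much data loss (RPO). Wider windows are assumed cheaper to operate.
POLICIES = {
    "continuous": timedelta(seconds=0),    # continuous data protection
    "replication": timedelta(minutes=15),  # asynchronous replication
    "snapshot": timedelta(hours=1),        # periodic snapshots
    "backup": timedelta(hours=24),         # traditional nightly backup
}

def choose_protection(max_data_loss):
    """Pick the cheapest technique whose window still meets the RPO."""
    # Try the widest (cheapest) window first, narrowing until one fits.
    for name, window in sorted(POLICIES.items(),
                               key=lambda kv: kv[1], reverse=True):
        if window <= max_data_loss:
            return name
    return "continuous"  # cannot lose any data: protect continuously

print(choose_protection(timedelta(hours=4)))    # snapshot
print(choose_protection(timedelta(seconds=0)))  # continuous
```

The point of the sketch is the one Maxwell makes: mirroring everything synchronously would meet every window but at an unattainable cost, so the policy has to start from how much loss each category of data can actually tolerate.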
Gardner: It’s interesting to me that if we’re looking at data and trying to present policies on it, based on its importance, these policies are going to be probably dynamic and perhaps the requirements for the data will be shifting as well. This gets to that area I mentioned earlier about the culture around data, thinking about it differently, perhaps changing who is responsible and how.
So when we move to this level of meeting our requirements that are increasing, dealing in the virtualization arena, when we need to now think of data in perhaps that dynamic fluid sense of importance and then applying fit-for-purpose levels of support, backup, recoverability, and so forth, whose job is that? How does that impact how the culture of data has been and maybe give us some hints of what it should be?
Maxwell: You’ve pointed out something very interesting, especially in the area of virtualization, something we have seen over the seven years of our vRanger product, which invented the backup market for virtualized environments.
It used to be, and it still is in some cases, that the virtual environment was protected by the person, usually the sys admin, who was responsible for, in the case of VMware, the ESXi hypervisors. They may not necessarily have been aligned with the storage management team within IT that was responsible for all storage and more traditional backups.
What we see now are the traditional people who were responsible for physical storage taking over the responsibility of virtual storage. So it’s not this thing that’s sitting over on the side and someone else does it. As I said earlier, virtualization is now such a large part of all the data, that now it’s moving from being a niche to something that’s mainstream. Those people now are going to put more discipline on the virtual data, just as they did the physical.