A Strategy for Post-Virtualization Security
As the second law of thermodynamics tells us, all things trend toward chaos, and this is no less true of a virtual environment. Sprawl can have a real security impact, and it takes discipline and planning to control sprawl -- discipline and planning that won't occur without someone from the security team actively monitoring the problem and formulating strategies for how to address the issue.
01/24/12 5:00 AM PT
Virtualization has been one of the most rapidly and widely adopted technologies in recent memory. It's huge, and it's here to stay.
And as security professionals know, setting up a virtual environment securely isn't easy. Significant effort goes into tasks like evaluating off-premises service providers, ensuring regulatory compliance, and standing up technical controls like monitoring and encryption. But in the excitement of standing up the new environment and getting security to an acceptable "target state," organizations sometimes fail to address long-term security hygiene. In other words, security is in high gear while the environment spins up, but the organization doesn't lay the groundwork for what happens once things are chugging along.
This is an area of concern because setting up a secure environment is only the first step. As the second law of thermodynamics tells us, all things trend toward chaos -- and this is no less true of a virtual environment. Stand up a virtual environment today and walk away from it, and you'll wind up with an unmanaged security nightmare tomorrow. As technical staff create new VMs, modify existing VMs, create "orphan" snapshots, and take other actions in the environment, it slowly drifts away from the defined, "secure" state into a less known one. This has a security impact.
So for organizations laying out challenges and focus areas for the new year, now's a good time to plan how to keep the virtual environment secure. Ideally, you've been doing this all along, from the moment you first started tackling virtualization. But if (like most) you haven't, doing it now is a smart move.
For virtualized data centers and private cloud deployments, keeping the number of virtual hosts within defined parameters can be challenging. VMs tend to proliferate and collect over time because of "one-off" or ad hoc VMs created without a clear plan for decommissioning. Add time and employee attrition to the mix, and you can be left with a large population of undocumented VMs that lack clear purpose and that staff are uncomfortable removing because they're not sure who will be impacted.
On the operations and performance side, sprawl is a well-known problem. But the security side of it isn't always addressed. For example, sprawl can have a regulatory impact. The PCI virtualization guidance tells us that if a VM is in scope of PCI, so also is the hypervisor. This means that uncontrolled proliferation can have unintended consequences -- such as a test or QA VM moving into the cardholder data environment (CDE) without appropriate controls. Even without the regulatory impact, dangers abound: undocumented VMs may fall outside technical security controls like patch management, logging and anti-malware.
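The PCI scope rule above is mechanical enough to check automatically. The sketch below is a minimal illustration, not a compliance tool, and the inventory data is hypothetical -- in practice it would come from your virtualization platform's inventory API or a CMDB:

```python
# Hypothetical inventory: hypervisor host -> list of VMs it runs,
# each tagged with whether the VM itself is in PCI scope.
inventory = {
    "hyp-01": [{"name": "web-prod", "pci_scope": True},
               {"name": "qa-test",  "pci_scope": False}],
    "hyp-02": [{"name": "build-01", "pci_scope": False}],
}

def in_scope_hypervisors(inventory):
    """Per the PCI virtualization guidance: if any hosted VM is in
    scope, the hypervisor underneath it is in scope too."""
    return {host for host, vms in inventory.items()
            if any(vm["pci_scope"] for vm in vms)}

print(sorted(in_scope_hypervisors(inventory)))
```

Note the side effect the article warns about: the out-of-scope `qa-test` VM doesn't change anything here, but let an in-scope workload migrate onto `hyp-02` and that hypervisor -- and everything on it -- is suddenly part of your assessment.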
It takes discipline and planning to control sprawl -- discipline and planning that won't occur without someone from the security team actively monitoring the problem and formulating strategies for how to address the issue. The further environments drift from the documented secure state, the more work is required to bring them back in line. This means that security organizations should be actively monitoring inventories of VM assets. They should be working with the technical teams to control expansion now, while the problem is small, rather than waiting for it to become unmanageable down the road.
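Monitoring drift from the documented state can start very simply: diff what's documented against what's actually running. This is a minimal sketch with hypothetical VM names standing in for data pulled from your records and from the hypervisor's inventory:

```python
# Hypothetical data: the documented baseline vs. what's actually running.
documented = {"web-prod", "db-prod", "mail-01"}
running    = {"web-prod", "db-prod", "qa-clone-7", "tmp-patch-test"}

# Sprawl: VMs nobody documented, so nobody is sure who depends on them.
undocumented = running - documented

# The reverse drift: documented VMs that quietly disappeared,
# meaning the records no longer reflect reality either.
missing = documented - running

print("undocumented:", sorted(undocumented))
print("missing:", sorted(missing))
```

Run on a schedule, even a check this crude surfaces the "one-off" VMs described above while the list is still short enough that someone can track down an owner for each one.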
Impacts on Existing Controls
Secondly, as we all probably realize by now, existing security controls don't always translate well to virtual environments. Consider as one example what happens to network traffic monitoring tools like IDS when conversations between virtual images happen within the hypervisor (backplane communications) as opposed to over the network. For most security professionals, this failure to translate means they've needed to deploy new security tools to address shortcomings in the existing tool set.
This strategy is great in that it meets immediate needs, but it doesn't address what happens over the long term. Consider that the virtual environment is expanding (in most cases rapidly) while the legacy physical environment is contracting. The budget for controls is usually constant. Consider how these two data points play out a year from now -- and two years from now. Budgetary support is likely to shift. In fact, it's not hard to envision a scenario wherein we need to start scaling back existing security controls that address only the legacy environment. Doing that cleanly takes planning and preparation on the part of the security organization. For example, it may take a year or more to shift how current controls are managed and operated in order to allow them to spin down cleanly without impacting existing staff.
As the last item to consider, security organizations often don't appreciate the tremendous rate of data growth that can occur in a virtual environment. Pretty much everything you do in the virtual environment has a storage impact. Beyond the data you'd be collecting and managing anyway, sprawl, planned growth, and snapshots used for QA or patching all add to the total. Data volume can explode in a very short period of time.
This matters to security professionals because of the way certain controls operate relative to data volume -- specifically, controls that operate linearly over data like DLP file searches and encryption.
Take bulk data encryption as an example. Encrypting a terabyte is trivial. Encrypting an exabyte? Well, that may not even be possible depending on how the data is used. Some controls need to be put in place while data sizes are still manageable. It behooves security professionals to think this through now rather than waiting until the volume has expanded beyond what a given control can manage.
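The terabyte-versus-exabyte point is easy to make concrete with back-of-envelope arithmetic. The throughput figure below is an assumption chosen purely for illustration, not a benchmark of any particular product:

```python
# Assumed single-stream bulk-encryption rate: 200 MB/s (illustrative only).
RATE_BYTES_PER_SEC = 200 * 10**6

def encrypt_time_seconds(data_bytes, rate=RATE_BYTES_PER_SEC):
    """A control that operates linearly over data scales linearly in time."""
    return data_bytes / rate

terabyte = 10**12
exabyte = 10**18

# 1 TB finishes within a maintenance window; 1 EB takes lifetimes.
print(f"1 TB: {encrypt_time_seconds(terabyte) / 3600:.1f} hours")
print(f"1 EB: {encrypt_time_seconds(exabyte) / (3600 * 24 * 365):.0f} years")
```

At that assumed rate, a terabyte encrypts in under two hours, while an exabyte would take on the order of 150 years -- which is the whole argument for applying linear controls like encryption and DLP scanning before the data outgrows them.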
Spending some time thinking through the ongoing maintenance and hygiene of a virtual environment is a useful exercise. So in laying out plans for 2012, keep in mind that the work doesn't stop once the environment is in place -- it continues throughout the entire lifecycle of that environment.