Have you seen the “change of plans” commercial? I can’t remember what it’s an advertisement for, but it consists of a series of clips of two businesspeople in various airports talking to each other via cell phone. Every time they talk, they say things like “change of plans, going to Singapore” or “change of plans, on my way to Australia.” Like post-modern “Carmen Sandiegos,” they “pop” around the world from city to city, never quite knowing where they’ll wind up.
As a technologist and a security professional, what really struck me about this scenario wasn’t their ability to communicate or their crazy schedules, but how difficult it would be to support these two from an IT perspective. Think about it: who knows how many businesspeople operate in an environment so fluid that we can’t even assume what continent our resources will be on from one day to the next?
The schedule is just the tip of the iceberg: In a business that agile, processes are probably constantly in flux and the systems supporting them are turning over at a tremendous rate. In that context, how hard would it be to plan weeks, months or years ahead?
Bring On the Metrics
Looking down the road, there’s an issue looming. For most of us in IT, the ideal end state for our business is agility and fluidity. We want our businesses to be able to respond quickly. We want our professionals to be flexible and first to respond to new opportunities. However, we also want to be able to plan, because good IT, and good security in particular, is all about planning.
So, how do we do it? We’ve all heard the platitudes about how metrics are the key to long-term IT planning, but most of us who’ve been in the industry for a while have seen metrics initiatives fail, or at least fall short of what we hoped for. We face an additional challenge: our metrics not only have to work in today’s climate, but ideally should keep working in the future as our businesses grow ever more flexible. How do we meet that challenge and set up something with legs, something we can use to chart a solid road ahead without having to keep changing course once we start down it?
Why Some Metrics Fail
In order to understand what makes a good metric, it’s a good idea to first understand what doesn’t work and why. Now, while there’s an infinite array of things that can and do go wrong depending on particular circumstances, some common issues that arise are:
- Information overload. There are all sorts of things you can keep track of, but only a small percentage of them can or will actually help your business. Some teams, recognizing that they might not know ahead of time which variables will turn out to be important, track everything and analyze it all equally. The problem is that there’s simply too much data.
Remember the first day you implemented an IDS (intrusion detection system)? Before you tuned it, the system was spewing hundreds or thousands of alerts and warnings, right? Someone had to go through and decide what was useful and what wasn’t. Same case here: Just collecting and reporting on something because you can doesn’t mean that you necessarily should.
- Probability and “outliers.” If a server is unpatched, it’s more likely to be compromised — but that doesn’t mean that it definitely will be. If a server is patched, it’s less likely to be compromised — but that doesn’t mean it won’t be.
At the end of the day, probability — luck — plays a role. Organizations that expect clockwork precision without accounting for probability in their analysis are likely to find themselves in a pickle.
- It’s not about posterity. The job of IT metrics is not to preserve data for posterity. Having a metrics program is all well and good, but until and unless someone uses those metrics for something productive, the project’s doomed. Collection, reporting and analysis take time. If the end result doesn’t do anybody any good, it’s just unnecessary overhead.
- Context is key. There are all kinds of people — consultants and product vendors alike — who are ready to sell you their view of what your metrics should look like. However, keep in mind that context plays a role. Specifically, the metrics you collect should be reflective of — and useful in — the structure of the security program that you have in place in your organization. Focusing on a subset of controls — for example, stock technical controls that are easy for a product to measure out of the box — might not map cohesively to your program.
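The probability point above can be made concrete with a quick simulation. This is a minimal sketch; the per-year compromise rates and server counts are invented purely for illustration:

```python
import random

random.seed(42)  # fixed seed so the sketch is reproducible

# Hypothetical per-year compromise probabilities -- invented for illustration.
P_COMPROMISE_UNPATCHED = 0.30
P_COMPROMISE_PATCHED = 0.05
N = 1000  # servers in each group

# Count how many servers in each group get compromised this "year."
unpatched_hits = sum(random.random() < P_COMPROMISE_UNPATCHED for _ in range(N))
patched_hits = sum(random.random() < P_COMPROMISE_PATCHED for _ in range(N))

print(f"Unpatched servers compromised: {unpatched_hits}/{N}")
print(f"Patched servers compromised:   {patched_hits}/{N}")
# Patching shifts the odds, but some patched servers are still hit
# and most unpatched ones survive: luck plays a role either way.
```

The takeaway for a metrics program is that a nonzero compromise count among patched hosts isn’t proof that patching “failed”; it’s what probability predicts, and analysis should account for it.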
Every program is different, so just like we can’t list every possible thing that can go wrong everywhere, not every suggestion will work in every environment. However, generally speaking, there are a few things that can be useful to think about when setting up metrics in your organization, specifically:
- Remember human nature. Employees will attempt to excel at whatever task you set before them, so keep this in mind when choosing metrics. For example, if you have an incident response process and track the number of reported incidents, bear in mind that if you define success as a low incident count, you might find employees failing to report potential incidents, or “gaming the system” by reclassifying incidents as something else. In this case, the perverse incentive encourages employees to do the wrong thing (ignore the incident) rather than report it (a perceived failure).
- Don’t run before you can walk. A metrics program in its infancy shouldn’t try to do it all. It’s better to start with a few things that you can measure — and that have value — and build from there rather than biting off more than you can chew. Remember that you can always add more later on, and your ability to collect and analyze data will get more efficient as time goes by. So, as the program develops, you can add more metrics for less cost.
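One way to keep the “start small, add more later” advice cheap in practice is to treat each metric as a pluggable function over the same event feed, so adding a metric later doesn’t mean reworking collection. This is a hypothetical sketch; the metric names and event fields are assumptions, not a standard:

```python
from typing import Callable, Dict, List

Event = dict  # e.g. {"type": "incident", "resolved_hours": 6}
Metric = Callable[[List[Event]], float]

METRICS: Dict[str, Metric] = {}

def metric(name: str):
    """Register a metric so new ones can be added without touching the collector."""
    def wrap(fn: Metric) -> Metric:
        METRICS[name] = fn
        return fn
    return wrap

# Start with two metrics that are cheap to collect and clearly useful.
@metric("patch_compliance_pct")
def patch_compliance(events: List[Event]) -> float:
    hosts = [e for e in events if e["type"] == "host"]
    return 100.0 * sum(e["patched"] for e in hosts) / len(hosts)

@metric("mean_hours_to_resolve")
def mean_resolution(events: List[Event]) -> float:
    incidents = [e for e in events if e["type"] == "incident"]
    return sum(e["resolved_hours"] for e in incidents) / len(incidents)

def report(events: List[Event]) -> Dict[str, float]:
    """Run every registered metric over the same feed."""
    return {name: fn(events) for name, fn in METRICS.items()}

events = [
    {"type": "host", "patched": True},
    {"type": "host", "patched": False},
    {"type": "incident", "resolved_hours": 4},
    {"type": "incident", "resolved_hours": 8},
]
print(report(events))
# -> {'patch_compliance_pct': 50.0, 'mean_hours_to_resolve': 6.0}
```

Growing the program later is one decorated function per new metric, which keeps the cost of expansion low as collection and analysis get more efficient.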
Ed Moyle is currently a manager with CTG’s information security solutions practice, providing strategy, consulting and solutions to clients worldwide, as well as a founding partner of Security Curve. His extensive background in computer security includes experience in forensics, application penetration testing, information security audit and secure solutions development.