Did you know everyone has a blind spot? It’s true.
I don’t mean the blind spot you have when you’re out on the road driving, where a passing car disappears from your rear-view mirror. Instead, I’m talking about an aspect of human physiology: the “anatomical blind spot” (punctum caecum), the place inside your eye where the optic nerve passes through the layer of sensory cells that detect light.
Because the optic nerve sits right in the space where the light-detecting cells are (and takes up space those cells would otherwise occupy), there’s a spot in each eye’s field of vision, just off-center, where the eye can’t detect anything at all. (If you’re in the mood for a bit of experimentation, check out Wikipedia’s test for the visual blind spot.)
It’s amazing: We go about our day-to-day lives without noticing that there’s an empty void sitting in our field of vision. Our brain “glosses over” the missing data so that we’re not constantly tripping over the hole; it’s this interpretive processing that “smooths over” the deficit in the instrument. In other words, two organs work in tandem to produce vision: the power of the mind to interpret turns an imperfect instrument (the eye) into an exceptional one.
It’s probably clear to you where I’m going with this. In information security, it’s all about detection capability. We’re used to constantly trying to refine our ability to detect things via antivirus, intrusion detection, vulnerability scanners, antispyware, and so on. We go through endless cycles of product evolution trying to get to as close to perfect detection as we can. We’re used to vendors, peers and regulators pointing out our “blind spots” and trying to fill them in. For example, one vendor might criticize another for not catching a particular type of attack in their product; another enterprise might decide to replace one antivirus package with another on the basis of better scan accuracy. We’re obsessed.
However, how often do we show the same level of interest in setting up technologies and processes designed to make better sense of the data we already have? At the end of the day, the detection instrument is only half the problem; we need a way to take the data we have and refine it — fill in the missing pieces and interpret what we already have to get to a more complete picture.
Step 1: Create a Channel
So as an IT organization, where do we start? Most of us probably have quite a few “detection” instruments in our environments right now — things like IDS (intrusion detection system), vulnerability scanners, and antivirus. But for many of us, those systems are “siloed” within the propeller-head crowd — meaning they don’t have much in the way of visibility beyond the one or two individuals who actually administer them. That’s detection, but certainly not perception.
To get to a broader awareness, we want to make sure that the instrumentation we already have is actually used for something and has an audience. Take the output from each detection tool and make someone (for example, the system administrator) accountable for reporting the system’s activity on a defined basis (for the sake of argument, let’s say weekly). For example, an AV system admin might provide high-level operational metrics like the percentage of devices covered by the scanning tool and the most frequently detected malware. An IDS administrator might report the number of events per hour, the total number of events, and the most frequently reported source IP.
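As a concrete illustration, a weekly roll-up like the one described above can be a few lines of script. This is a minimal sketch, not any particular product's reporting feature; the record fields (`host`, `malware_name`) and the sample data are assumptions chosen for the example.

```python
from collections import Counter

def av_weekly_metrics(inventory_hosts, scan_records):
    """Summarize one week of AV activity: coverage and top detections."""
    # Hosts that reported at least one scan this week count as "covered."
    covered = {r["host"] for r in scan_records}
    coverage_pct = 100.0 * len(covered) / len(inventory_hosts)
    # Tally detections by malware name, ignoring clean scan records.
    detections = Counter(
        r["malware_name"] for r in scan_records if r.get("malware_name")
    )
    return {
        "coverage_pct": round(coverage_pct, 1),
        "top_malware": detections.most_common(3),
    }

# Illustrative sample data: 3 of 4 inventoried hosts reported in.
hosts = ["web01", "web02", "dev01", "dev02"]
scans = [
    {"host": "web01"},
    {"host": "web02", "malware_name": "Trojan.Generic"},
    {"host": "dev01", "malware_name": "Trojan.Generic"},
]
print(av_weekly_metrics(hosts, scans))
```

The same shape of summary works for an IDS feed; only the field names and the metrics change.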
Start small. The goal is not to make life difficult for your admin staff with tremendous reporting requirements. Instead, what you’re doing is building a channel. You can refine the data later once you have a better idea of what you’re interested in. At the early stages, the goal is just to take the information you already have and build a pipeline so it’s going somewhere.
Step 2: Tie Channels Together
After tasking system administrators with reporting, goal number two should be to build a framework to review the data/metrics coming in and establish a process for correlating each feed with all the others. Keep in mind that it doesn’t have to be perfect out of the gate. It’s better to get a rough framework up and running, even if it’s not as useful as you might ultimately want, than to try to boil the ocean and not get anywhere.
As to how to tie it together, the process is the important part, rather than the specific mechanics of how you do it. Go with whatever methodology is most conducive in your environment: If your culture favors “dashboards,” go with that and present a console view of what you’re gathering. If your firm is into status meetings, task someone with collecting the data and presenting it there. The point here is to centralize the data: to take the various data streams and tie them all together somewhere. Once you have them tied together, you can start to look at what you have (vs. what you’re missing) and how you can refine what’s coming in to give you more and better data about what you’re interested in.
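Whatever the presentation vehicle, the underlying step is the same: normalize each admin's report into a common shape and merge the feeds into one week-keyed view. The sketch below assumes a simple `(week, source, metric, value)` tuple format; the feed names and metric values are illustrative, not a standard.

```python
from collections import defaultdict

def merge_feeds(*feeds):
    """Combine per-tool weekly reports into a single week-keyed view."""
    merged = defaultdict(dict)
    for feed in feeds:
        for week, source, metric, value in feed:
            # Namespace each metric by its source tool so feeds can't collide.
            merged[week][f"{source}.{metric}"] = value
    return dict(merged)

# Two hypothetical feeds, already reduced to weekly tuples.
av_feed = [("2024-W01", "av", "coverage_pct", 92.5)]
ids_feed = [
    ("2024-W01", "ids", "events_per_hour", 140),
    ("2024-W01", "ids", "top_source_ip", "203.0.113.7"),
]

view = merge_feeds(av_feed, ids_feed)
print(view)
```

A dashboard renders `view` directly; a status meeting gets it as a one-page table. Either way, the data lives in one place.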
Don’t be shy about letting individual administrators innovate about what data they provide and how they provide it. If you make clear what you’re trying to do, your staff will help refine the individual pieces, and ultimately the system will gravitate toward something more useful.
Step 3: Keep Records
As you build the data collection channel, it’s useful to keep historical data. Not only does this data help you establish “baseline” activity (so you can tell what’s “normal” vs. what isn’t), but it’s also helpful in that you can keep track of high-level shifts that could have an impact on the security of your particular environment. Noticing a swing away from AV usage among development hosts? Can that be correlated to something else going on, like increased activity on the software install tickets for the new software development package they’re using?
Having data like that allows you to draw inferences — like maybe the new software prevents the AV software from running properly (or needs to be disabled so the developers can do their jobs).
The point is that you’ll never know unless you start looking.
Ed Moyle is currently a manager with CTG’s information security solutions practice, providing strategy, consulting and solutions to clients worldwide, as well as a founding partner of Security Curve. His extensive background in computer security includes experience in forensics, application penetration testing, information security audit and secure solutions development.