Well, it’s November again — which means that it’s just about time for this year’s set of New Year’s predictions. Every year around this time, everyone from antimalware companies to analyst firms line up to tell us about the top IT and security trends — what they are and why we should care. This year, chances are they’ll tell us all about cloud computing, virtualization and social networking and why these technologies are the new best (or worst) friends for security folks in 2010.
Now if you’re sensing a bit of snarkiness here, you’re right — I find these lists a bit frustrating. That’s not because of inaccuracies in the lists themselves (to the contrary, many of them are dead-on), but instead because they sometimes inappropriately drive how IT managers make budgeting decisions. Don’t get me wrong, keeping abreast of the new areas is always valuable — and I’m always fully on board with keeping us and our staff up to date and capable of reacting to new types of threats. But it’s also important to keep in mind that what’s new isn’t always what’s most critical. Where should you be investing budget dollars? In critical areas, not just what’s new and shiny.
Watch Your Fundamentals
To illustrate this, consider a firm that doesn’t use AV (antivirus) and also allows users to access social networking sites. The trend predictions are likely to clue us in about why social networking is something we should care about — but they might not mention malware at all (after all, that’s been around forever). But if your firm doesn’t yet have a cohesive antimalware strategy … well, you’ve got bigger fish to fry than how, when or what your employees tweet. In other words, when it comes time to allocate budget for new projects, you need to consider both the new and the old — both the upcoming trends that Big Analyst Firm says are emerging, as well as the “tried and true” fundamentals that don’t get as much play.
In the field, it’s all about the basics — when you stop to think about it, how many of us are really where we need to be when it comes to the fundamentals? Which position would you rather defend: that your firm was hacked because of some newly emerging threat, or that you got hacked because you weren’t doing the generally accepted minimum practice?
So in the spirit of keeping one eye on the practical, here’s my New Year’s list — or, more accurately, my “reminder list”: a highly unscientific breakdown of the top five basics that are often overlooked in the enterprise. These are things that you probably should be doing, but might not be — and things that you could probably do more with, but maybe aren’t.
1) Vulnerability Assessment
There are several reasons why you might not be doing as much vulnerability assessment as you could be: It can bring down critical systems, it requires some specialized knowledge to vet false positives, and it has a high overhead in terms of care and feeding by staff members. As such, there are quite a few organizations that just don’t use it at all — and for companies that do, it’s often inconsistently deployed.
However, complexity aside, it’s also one of the most valuable sources of feedback that you can get about how your organization performs. Data about the effectiveness of your patch management processes, your password policy, and your system-hardening procedures are all directly and practically observable in vulnerability assessment results. If you haven’t deployed it yet, the technology is cheap, mature and commoditized.
2) Asset Inventory
How many of us have a detailed inventory of all the “stuff” on our networks? Organizations tend to grow their IT organically, so many of us are in the situation where going back to inventory what we have fielded is a huge, expensive undertaking. Even when we do have some idea of what’s out there, there are very often “gaps” in our understanding of our environments. For example, we may have a pretty clear idea of what desktops are fielded but not have tremendous insight about applications.
If you don’t have a clear idea of what you have fielded in your organization, now is the time to put together that inventory you’ve been putting off. Leverage existing tools like VA reports or business impact analysis documentation to put together a rough “map” of what you have fielded and keep it updated as changes occur. You don’t have to have a fancy system to do this. Start small and grow your inventory the same way you grew your network — organically.
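To make the “start small and grow organically” idea concrete, here’s a minimal sketch of an inventory that lives in a plain CSV file and gets enriched as new assets are discovered. All field names (`hostname`, `ip`, `role`, `os`) and the file name are hypothetical — the point is that a simple keyed map, folded together with whatever discovery data you already have (a VA scan export, for instance), is enough to get started:

```python
import csv

# A minimal, CSV-backed asset inventory keyed by hostname.
# Field names here are purely illustrative.
def load_inventory(path):
    try:
        with open(path, newline="") as f:
            return {row["hostname"]: row for row in csv.DictReader(f)}
    except FileNotFoundError:
        return {}  # starting small: an empty inventory is a valid starting point

def merge_discoveries(inventory, discoveries):
    """Fold newly discovered assets (e.g., from a VA scan export) into the map."""
    for asset in discoveries:
        # Enrich an existing record if we have one; otherwise create it.
        inventory.setdefault(asset["hostname"], {}).update(asset)
    return inventory

inv = merge_discoveries(load_inventory("inventory.csv"), [
    {"hostname": "web01", "ip": "10.0.0.5", "role": "web server"},
    {"hostname": "db01", "ip": "10.0.0.9", "role": "database"},
])
# A later discovery adds detail without clobbering what we already know.
inv = merge_discoveries(inv, [{"hostname": "web01", "os": "Linux"}])
```

Each new scan or walkthrough just becomes another call to `merge_discoveries` — the map grows the same way the network did.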
3) User Provisioning
We all know the ideal end state of user provisioning: defined roles that govern access to network resources and applications. But in practice, when the topic of provisioning comes up, we wind up going down a path that involves deploying complicated systems or spending significant effort parsing out users based on vague or poorly defined roles. While we wait for the dust to settle, the day-to-day business of assigning new users to applications moves ever forward — often with little or no assist from security staff and even less organization.
However, a solid map of roles doesn’t have to be complicated. Start by defining roles at the very highest levels, and get more granular over time. Don’t have a provisioning system deployed? Delegate responsibility for creating roles to subject matter experts who use the application all the time. Check in with them periodically to make sure they’re doing the right thing.
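A “define roles at the highest level first” map can be as simple as a table. The sketch below is purely illustrative (all role and resource names are invented), but it shows the shape: a handful of coarse roles plus a baseline everyone gets, which you can split into finer-grained roles later:

```python
# Illustrative coarse-grained role map; refine over time as needed.
# Role and resource names are hypothetical.
roles = {
    "finance":     {"erp", "expense-reporting"},
    "engineering": {"source-control", "build-system"},
    "everyone":    {"email", "intranet"},
}

def entitlements(user_roles):
    """Union of resources a user may access, always including the baseline."""
    granted = set(roles["everyone"])
    for r in user_roles:
        granted |= roles.get(r, set())
    return granted
```

A new hire in finance gets `entitlements({"finance"})` — the baseline plus the finance set — and nothing from engineering. Getting more granular later just means splitting entries in the table, not redeploying a system.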
4) Audit and Monitoring
Intrusion detection systems are chatty, and the individuals who review alerts are often under significant stress and workload already. At the OS and application levels, staffers are often too overwhelmed to review log and activity reports as much as they should. So who has time to keep up? In many organizations, the day-to-day monitoring of audit and activity logs tends to fall by the wayside — at least among those that even have auditing features enabled.
However, compliance mandates like PCI, HIPAA and others specifically require review of audit logs, so failure to make this happen is not an option. Step up what you review and how often you review it. Institute spot-checks to make sure that staffers are doing their jobs when it comes to reviewing this critical data.
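A spot-check doesn’t need tooling to be useful — the idea is just to pull a random slice of the audit trail and verify that someone actually reviewed it. Here’s a minimal sketch (the log format is invented for illustration):

```python
import random

# Hypothetical spot-check: sample a handful of audit-log lines for a second
# look, so reviewers verify a random slice rather than skimming everything.
def spot_check(log_lines, sample_size=5, seed=None):
    rng = random.Random(seed)  # fixed seed makes a check reproducible
    population = [line for line in log_lines if line.strip()]
    k = min(sample_size, len(population))
    return rng.sample(population, k)

# Stand-in for a month of audit entries.
logs = [f"2010-01-{day:02d} auth failure for user{day}" for day in range(1, 31)]
sampled = spot_check(logs, sample_size=3, seed=42)
```

Run something like this weekly, hand the sample to the responsible staffer, and compare notes against what their review actually caught.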
5) Business Continuity Planning
Let’s face it, BCP is a lot of work. It involves participation from all areas of the organization — from subject matter experts to business units to management. Because of the number of folks involved, very often we don’t have time for formal models — or when we do, the analysis goes without updates for long periods of time.
However, planning for contingencies is beyond critical, and all that data you’re getting about applications, systems and business processes can be recycled for other purposes within your security program, such as triage during an incident response exercise, risk analysis, and even asset inventory. So maybe now is a good time to do a refresh on this valuable data — or start doing it if you haven’t already.
Now maybe your company has already hit all these topics, and you don’t need me to remind you to “eat your vegetables.” If so, nice work — count yourself among the well-positioned minority.
However, if in reading through these items you see areas where you could be doing better, remember that boning up on the basics is just as important as looking for new ground to cover. After all, the basics might be “old hat,” but that doesn’t mean they’re not important.
Ed Moyle is currently a manager with CTG’s information security solutions practice, providing strategy, consulting and solutions to clients worldwide, as well as a founding partner of Security Curve. His extensive background in computer security includes experience in forensics, application penetration testing, information security audit and secure solutions development.