Nobody likes it when people renege on a promise. You don’t have to look very far to see how we as a society view people who lie, break promises or misrepresent themselves. In movies, the bad guy is always dishonest. In books, liars invariably get clobbered: Iago gets tortured, Claggart gets walloped, and Dante puts the liars all the way at the bottom of hell. In fact, even our English word “hell” shares a Germanic root with the Old Norse “Hel,” whose realm in Norse myth included a place for punishing oath-breakers.
So why all the negativity for the dishonest? In my opinion, it’s hardwired.
Now, I’m no anthropologist, but it seems like common sense that a social species like humans would need honesty to survive. Before there was money for exchanging goods, there was trust. In an early agrarian society, for example, if I say, “If you help me till the field, I’ll share the harvest with you,” what kind of scum would I be if I let you starve when harvest came instead of keeping my word? If people didn’t uphold their end of bargains like that, nobody would lift a finger for the group, which would be a pretty bleak outcome.
Which brings me to my point. I’ve noticed a disturbing trend at quite a number of firms, one that, while it arises naturally, is extraordinarily dangerous for the firms in which it takes hold. Specifically, I’m talking about the tendency of some firms to fail to live up to their own security policy. These firms may not be deliberately lying (the thing we all hate so much), but failing to live up to a stated position is really not a good thing either.
Bogus Policy Is No Policy at All
First of all, it’s important to understand why this situation occurs. Those of us who’ve been in security for a while have probably noticed that there’s more regulation nowadays than there used to be. Most of that regulation requires us to have specific policies in place, use particular security technologies, follow particular processes, and so on. To address these regulatory requirements, there’s tremendous pressure on us as security professionals to make a written commitment to take a particular action (for example, creating a policy that says we’ll review logs on a daily basis). However, when it comes to implementation, maybe there’s not space in the budget or sufficient resources to implement that policy in exactly the way we intended when it was written. Sometimes the people who work for us don’t tell us that they can’t implement a policy or control in the way we specified.
The end result? We wind up with unimplemented, unenforceable and impractical policies; or we wind up purchasing security tools that never make it past inception, sitting on a shelf rather than doing anything useful. This can happen to any firm — especially those with the best intentions. After all, the more noble-hearted a firm is, the more likely it is to set a high bar for itself when it comes to safeguarding information and “doing the right thing.”
How often have you heard someone say one of these things:
- “The policy says we’ll check logs daily, but really we check them once a week because we don’t have time.”
- “The policy says we’re only supposed to access mail from systems we own.” (Yet there’s a Web mail system, complete with instructions on how to access it from a home computer.)
- “Passwords are supposed to be 8 characters, but the executives can’t remember that many.”
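Gaps like these become much easier to spot once policy requirements are written down in machine-readable form. As a minimal sketch — the control names and values below are hypothetical examples, not anyone’s real policy — a few lines of Python can diff what the policy document says against what’s actually happening in practice:

```python
# Sketch: compare documented policy requirements against observed practice.
# All control names and values are hypothetical, for illustration only.

documented_policy = {
    "log_review_interval_days": 1,        # "we'll check logs daily"
    "password_min_length": 8,             # "passwords are supposed to be 8 characters"
    "mail_from_unmanaged_hosts_allowed": False,  # "only from systems we own"
}

observed_practice = {
    "log_review_interval_days": 7,        # actually reviewed weekly
    "password_min_length": 6,             # executives' informal exception
    "mail_from_unmanaged_hosts_allowed": True,   # Web mail from home computers
}

def find_gaps(policy, practice):
    """Return {control: (required, actual)} for every control where
    observed practice doesn't match the documented requirement."""
    return {
        control: (required, practice.get(control))
        for control, required in policy.items()
        if practice.get(control) != required
    }

for control, (required, actual) in sorted(find_gaps(documented_policy,
                                                    observed_practice).items()):
    print(f"GAP: {control}: policy says {required!r}, practice is {actual!r}")
```

The point isn’t the tooling — a spreadsheet works too — it’s that each documented commitment gets checked against reality, rather than assumed.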
Unfortunately, this happens more often than you’d think — and it’s not immediately obvious. After all, if you’re stretched thin already, how much time do you think you’re going to spend revisiting each policy you have out there to make sure that you’re really doing what you said you would? Probably not much. In a shop where the core competency is not IT, for example, how likely is it that every security control will get implemented exactly the way management (or compliance) had in mind when they first allocated funding for it? Not very.
In Fact, It’s Worse
The problem with this, though, is that having a policy that says you’ll do something you can’t, or that specifies a control you haven’t really deployed, is probably one of the most dangerous things you can do. We (and others) will make decisions based on the documented policy. We might tell customers that we have a particular control in place, and we might share our policy with them. We might hand the unimplemented policy to regulators, auditors, management, etc., with the best of intentions, fully unaware that it’s unimplemented. Dangerous.
Imagine, for a moment, that there’s a lawsuit. Your firm is being sued, and you need to defend the actions of your team in court. If the whole firm knows that a particular policy exists only on paper and that the firm unofficially sanctions noncompliance with it, do you think it’s likely that an observer unfamiliar with you or your employees would conclude that noncompliance was limited to that one instance? Or would it be reasonable for them to conclude that all of your policies are suspect?
I’m not a lawyer, so take this with a grain of salt, but it seems to me that the unimplemented policy would be a gigantic albatross in that situation. If you need to terminate an employee for noncompliance with policy “A,” do you think the presence of unimplemented policy “B” might bolster their wrongful-dismissal case?
Alternatively, imagine that you suffered a public breach and it turned out that you had neglected to implement a particular control. Is anybody going to care whether it was just the one control you failed to implement or something more far-reaching? No. At the end of the day, all anyone will care about is the fact that you failed to live up to what you said you would do.
In fact, under these conditions, it would arguably be better if you didn’t have a policy at all than if you had one and neglected to follow it or (worse) encouraged employees not to follow it. Not having the policy in the first place would be an oversight; deliberately not following it could be construed as negligence.
So, What Happens Now?
Seeing this trend increasing, I think it’s important for firms to revisit the policies they have in place to make sure they’re being followed. If you can spare them, deploy internal resources to self-assess firm-wide adherence to policy. If internal resources can’t be spared, there are plenty of auditors out there who make a living doing exactly this.
When you find an area where policy isn’t met for whatever reason (and you will), your primary goal should be to bring policy and practice into alignment. If the policy can’t be followed and the gap is limited to a particular area, document the exception formally, along with the justifying reason the policy can’t be followed there. If it’s more widespread (i.e., the whole firm can’t follow a particular policy), consider either: a) making the policy less stringent so that you can follow it; or b) finding some way, technical or otherwise, to get the firm into a position where it can comply.
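One lightweight way to keep that documentation honest is to track every control explicitly as either met, formally excepted (with a recorded justification), or flagged for remediation. The structure and field names below are purely illustrative assumptions, not a prescribed format:

```python
# Sketch: a minimal, hypothetical control-status register. Every control is
# either met, formally excepted with a justification, or needs action.
# Control names and reasons are made-up examples.

from dataclasses import dataclass

@dataclass
class ControlStatus:
    control: str
    met: bool
    exception_reason: str = ""  # must be filled in when a gap is formally accepted

    def needs_action(self) -> bool:
        # A control needs action only if it's unmet AND no exception is on file.
        return not self.met and not self.exception_reason

register = [
    ControlStatus("daily-log-review", met=False,
                  exception_reason="Staffing shortfall; weekly review formally approved"),
    ControlStatus("8-char-passwords", met=False),   # no exception on file: fix or relax policy
    ControlStatus("managed-device-mail-only", met=True),
]

for status in register:
    if status.needs_action():
        print(f"ACTION NEEDED: {status.control}")
```

Either outcome — a documented exception or a remediation item — leaves policy and practice in agreement, which is the whole point.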
Remember, policy shouldn’t represent something you’re merely “striving for.” Instead of setting a high bar with the expectation that the organization will get as close as it can, set the bar only as high as you can actually clear. Don’t forget that once everybody’s met the lower bar, you can always revisit it and raise it later, when there aren’t technical barriers standing in your way.
Ed Moyle is currently a manager with CTG’s information security solutions practice, providing strategy, consulting and solutions to clients worldwide, as well as a founding partner of Security Curve. His extensive background in computer security includes experience in forensics, application penetration testing, information security audit and secure solutions development.