
EXPERT ADVICE

Why It Pays to Second-Guess Your Technology Assumptions

As a resident of New Hampshire, I can tell you that the Old Man of the Mountain is a very tender topic for Granite Staters. If you've never heard of it, the Old Man is — or rather was — a natural rock formation that was the spitting image of an old man's face. It was carved out of granite on the slope of Cannon Mountain, and if you've never seen it, you can check out what it looked like on the back of the N.H. state quarter.

This rock formation was there for a long time — a very, very long time. No one knows for sure, but scientists figure that glaciers put it there somewhere between 2,000 and 10,000 years ago. And there it stood, relatively unchanged for millennia: while Alexander the Great was doing his thing, while the Roman Empire rose and fell, while humankind invented the printing press, the radio and the automobile. Then, in 2003, the Old Man suddenly came crashing down.

To the casual eye, the structure was as permanent as you could get: It was made of solid granite, and it had stood up there, stable, since time immemorial. It held steady through everything nature could throw at it. But even though the Old Man looked solid, forces we couldn't see were wearing it down. Wind erosion, freezing and thawing, snow and ice, water erosion, gravity — all these forces slowly took their toll. When the "flash point" came, it was catastrophic — and instantaneous.

Now, if you're wondering what the point of all this is, I'm bringing these events up for a very specific reason: Like the Old Man of the Mountain, there are things in our jobs that appear to be stable and secure but really aren't. We can get caught in an illusion of permanence when something has been a certain way for a long time, when it doesn't change quickly relative to other things around us. When that happens, we sometimes stop realizing that it can change at all. Just like with the Old Man, if we're not on the ball, we might not realize it until it's catastrophe time.

However, by thinking ahead — and questioning our assumptions — we can recognize when subtle forces are at work and what could happen as a result. Understanding that, we can cut potential disasters off at the pass, or at least plan ahead for a disaster we know is coming.

Case Study: Email

As an example of this phenomenon, take a look at how we treat email. There are many firms where sensitive information gets emailed internally from person to person on a regular basis: hospitals might email patient medical records, universities might email Social Security numbers, retailers might email credit card numbers, and so on. In many cases, this is a holdover from earlier days and may not account for the changing ways we use email. The firm decided long ago to allow sensitive data in internal email, and the decision has never been reevaluated.

In some firms, folks have set up security controls to ensure that the sensitive data stays internal: data leak prevention tools to keep emails containing sensitive data from leaving the internal system, encrypted email to protect the data if it does leave, or any number of other protections that integrate into the email system. Oftentimes, however, there are no restrictions at all — maybe just a statement in policy prohibiting sensitive data in email, or maybe not even that.
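To make that concrete, here is a minimal sketch in Python of the kind of check a data leak prevention tool performs on outbound mail. It is not modeled on any particular product: the patterns, the single internal domain and the function names are all illustrative assumptions, and real DLP detection is far more robust.

import re

# Illustrative patterns only; real DLP products use far more robust detection.
SSN_RE = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")
CARD_RE = re.compile(r"\b(?:\d[ -]?){13,16}\b")

INTERNAL_DOMAIN = "example.com"  # assumption: a single internal mail domain

def is_external(address):
    # True if the recipient is outside the internal mail domain.
    return not address.lower().endswith("@" + INTERNAL_DOMAIN)

def flag_outbound(recipients, body):
    # Return the reasons, if any, this message should be held for review.
    reasons = []
    if any(is_external(r) for r in recipients):
        if SSN_RE.search(body):
            reasons.append("possible SSN leaving the internal system")
        if CARD_RE.search(body):
            reasons.append("possible card number leaving the internal system")
    return reasons

print(flag_outbound(["partner@other.org"], "Patient SSN is 123-45-6789"))
# prints: ['possible SSN leaving the internal system']

A real deployment would sit at the mail gateway and quarantine or encrypt the message rather than just printing a warning, but the basic decision (does sensitive-looking data appear in mail bound for the outside?) is the same.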

The security issues here should be obvious, but we can't fault folks for wanting to conduct business over email. It's ubiquitous, stable and reliable. It's also asynchronous: It lets us ask questions of, or assign work to, folks who aren't in the office or who might be busy doing something else. Most importantly, everyone already knows how to use it, so it's no wonder that our staff would pick it up and want to integrate it into their workflow.

Email is exactly the type of technology that ends up in our "blind spot" — the type we fail to reevaluate as often as we should. Why? Because email has been around a long time. It was one of the first protocols we all encountered when the Internet went mainstream. Email is also slow to evolve; it's about as static a technology as we're likely to find in the electronic world, at least from a user-experience point of view. Our Outlook or Notes client today looks pretty much like it did in 1999. Just as it's natural that folks would use email to get their job done, it's also natural that we would stop questioning how they use it. Folks use it to send data, and they always have — why question it? What's changed?

However, when you stop to re-examine the landscape, you find that a lot really has changed. That sensitive data being sent around? When you first made the decision to allow internal distribution via email, maybe it was a green field; now there are a number of laws and regulations that govern who can access that data and where it can go. The email system itself could have changed or been configured differently — maybe you've added forwarding addresses to the global address list (GAL) that actually send mail out to other systems across the Internet rather than delivering it internally. If so, do staff know how to tell the difference? Whereas in the past you could count on someone reading mail from a known client such as Outlook, now maybe they're checking it from an Android phone or iPhone. Any of these things, depending on circumstances, could be a whopper of a problem.
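Checking for that particular forwarding surprise can be a simple audit. Here is a hedged sketch, assuming you can export alias-and-forwarding-target pairs from your directory into a list; the domain and the sample entries are made up for illustration.

# Hypothetical audit: given (alias, forwarding target) pairs exported from
# a directory, flag entries that quietly forward outside the internal domain.
INTERNAL_DOMAIN = "example.com"  # assumption: a single internal mail domain

gal_entries = [
    ("helpdesk@example.com", "helpdesk@example.com"),
    ("dr-smith@example.com", "dsmith@personal-mail.net"),  # leaves the building
]

def external_forwards(entries, internal_domain=INTERNAL_DOMAIN):
    # Yield the entries whose forwarding target is outside the internal domain.
    for alias, target in entries:
        if not target.lower().endswith("@" + internal_domain):
            yield alias, target

for alias, target in external_forwards(gal_entries):
    print(alias, "forwards to external address", target)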

Learning How to Second-Guess

The point here isn't to pick on email specifically; it just makes a good example of the type of trap we all fall into when we start getting too comfortable with a given technology. So how do we avoid that trap?

Traditional wisdom suggests periodically reevaluating everything in our environments to validate that it's still appropriate — maybe we review our policy every year, and we review our email configuration (to go back to our example) quarterly to make sure it has kept pace with changing technology. Will that work? Maybe. The challenge is that our review might be overly cursory if we've already stopped seeing the technology under review as a source of threat. It's the same reason developers don't QA their own code: They're so close to it that they just stop seeing the issues.

Instead, what we really need to do is learn how to second-guess decisions we've already made. Formal risk analysis methods can help here: building attack trees to keep a current sense of the underlying threats, or using data flow diagrams to track how data actually moves as the landscape changes. These work, but doing them well can be a significant undertaking.
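To give a sense of what an attack tree captures, here is a minimal sketch: each node is a goal, its children are alternative ways to achieve it, and the leaves are the concrete steps you'd want controls against. The structure and the example branches are assumptions for illustration, not a complete model of any real environment.

from dataclasses import dataclass, field

@dataclass
class AttackNode:
    # One goal in an attack tree; children are alternative ways to achieve it.
    goal: str
    children: list = field(default_factory=list)

    def leaves(self):
        # Enumerate the concrete attack steps (leaf goals) under this node.
        if not self.children:
            yield self.goal
        else:
            for child in self.children:
                yield from child.leaves()

# Illustrative tree for the email example above.
tree = AttackNode("read sensitive data from email", [
    AttackNode("intercept in transit", [
        AttackNode("sniff unencrypted SMTP"),
        AttackNode("compromise a mail relay"),
    ]),
    AttackNode("read at the endpoint", [
        AttackNode("steal an unlocked phone with mail configured"),
        AttackNode("exploit an auto-forward to an external mailbox"),
    ]),
])

for step in tree.leaves():
    print(step)

Even a rough tree like this makes the review concrete: revisiting it quarterly means asking whether any new leaves have appeared, which is a much sharper question than "is email still OK?"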

If we don't have time for that, a fresh set of eyes on the problem can be the most helpful thing. Is there someone you trust to spend a few weeks working with you to map out potential issues? Maybe it's an external consultant, or maybe it's internal staff from another area of the business. If you have an employee-facing infrastructure and a customer-facing infrastructure, try swapping staff between the two and tasking them with rethinking the security assumptions the other team may have overlooked.


Ed Moyle is currently a manager with CTG's information security solutions practice, providing strategy, consulting and solutions to clients worldwide, as well as a founding partner of Security Curve. His extensive background in computer security includes experience in forensics, application penetration testing, information security audit and secure solutions development.
