A lot of folks have been making a big deal over the past few days about Google employee David Barksdale. If you haven’t caught the coverage, the fuss centers on this one employee, a mid-twenties “site reliability engineer,” who (allegedly) abused his position of authority, and the elevated access and privilege that came with it, to view the private data of a number of individuals. The fact that a few of those individuals were minors didn’t help. Anyway, this thing is turning into quite the brouhaha.
Google fired the employee, but as you can imagine, the lid was already off a can of worms. According to what’s been made public, the actions he took weren’t anything particularly shady: He was mostly just showing off. Still, the whole situation leaves a bad taste in people’s mouths, because Google controls vast, vast quantities of data about us — from our consolidated medical histories to our location, to our archived email and voicemail.
We’ve put Google — and by extension, Google’s employees — in a position of tremendous power and responsibility. Intuitively, we all know that should Google abuse that power, it could cause us all manner of harm. So a situation like this one — a rogue employee accessing records willy-nilly — brings the problem home in a very direct and powerful way.
This incident has actually caused some enterprises to question the risk/reward tradeoffs of the cloud model — a discussion that had (finally) been settling down after years of industry-wide debate.
A Good Thing
However, it’s important to realize that this whole scenario is actually a good thing. That doesn’t sound intuitive, but it is. Here’s the hard truth: Employees inappropriately access data. All the time. Chances are somewhere in your organization, some employee is inappropriately accessing your data right now. In a hospital, they’ll look up how their neighbor’s surgery went; at a phone company, they’ll look to see who their spouse is calling; and at an insurance company, they’ll look to see which pop singers have better driving records. Whether well-intentioned, curious, or nefarious — it’s human nature for them to want to take a quick snoop.
The difference between Google and most of us is that Google recognizes this and knows when it’s happening.
This incident tells us three things about Google and how it’s handling our data: 1) it proactively audits access to our data; 2) it’s ready and willing to take direct action when it finds something amiss as a result of that auditing; and 3) it’s willing to acknowledge a situation of this type — and the chain of events leading up to it — publicly.
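Point 1, proactive auditing, is the piece most organizations skip. At its simplest, it amounts to regularly comparing an access log against a record of who is authorized to touch which data. The following Python sketch illustrates the idea; the log format, employee names, and assignment table are all invented for illustration, not taken from any real system:

```python
# A minimal sketch of proactive access auditing (all names hypothetical):
# each log entry records which employee viewed which customer record, and
# we flag any access that falls outside that employee's assigned accounts.

access_log = [
    {"employee": "asmith", "record": "cust-1001"},
    {"employee": "asmith", "record": "cust-2042"},  # not assigned to asmith
    {"employee": "jdoe",   "record": "cust-3007"},
]

assignments = {
    "asmith": {"cust-1001"},
    "jdoe":   {"cust-3007"},
}

def flag_unassigned_access(log, assigned):
    """Return log entries where the employee had no assignment to the record."""
    return [
        entry for entry in log
        if entry["record"] not in assigned.get(entry["employee"], set())
    ]

for entry in flag_unassigned_access(access_log, assignments):
    print(f"review: {entry['employee']} accessed {entry['record']}")
```

A real deployment would obviously be far more involved — role hierarchies, break-glass exceptions, anomaly scoring — but even a simple assignment check like this one is enough to surface the kind of curiosity-driven snooping described above, which is exactly what points 2 and 3 (acting on findings and acknowledging them) depend on.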
In a way, that’s good news. It’s like going to the doctor and finding out that you have high cholesterol. Sure, it’s not great that your cholesterol is high — but it wasn’t any lower before you got it tested. Finding out that it’s high means there’s an opportunity to lower it before it starts to cause damage — or, if it has already started to cause damage, you can take steps to remedy that damage before it leads to a more dangerous situation.
Of course, not every cloud vendor is going to manage this in the same way. It’s important to recognize that some vendors might not have proactive auditing in place at all. Others might audit but not take action on what they find (“Shame on you for downloading those gigabytes of medical records. Now go get back to work.”) Lastly, some might audit and take reasonable action based on their auditing, but not be in a hurry to admit it (“Umm… your support rep has elected to move on to other opportunities. In a tremendous hurry and with no advance notice.”)
Generally, if you are getting some visibility into events like these, that’s a positive sign. If you get no feedback at all — particularly from a vendor that is managing a large volume of sensitive data on your behalf — you might want to start asking whether all of its employees are preternaturally well behaved (unlikely) or whether this vendor is falling down on auditing access to records. After all, its employees are probably no different from yours — so somebody should be watching them just as carefully.
What You Can Do About It
Of course, all of these operational aspects of a cloud vendor’s security aren’t things you’re likely to see on a sales brochure. Given that business issues tend to drive vendor selection more than operations, security is often not part of the purchasing decision at all. So, if you’re in the position of wanting to make sure that your vendor is doing the right thing and upholding the same (or better) security standards as you would internally, what are some ways to do that?
First and foremost, find out what your vendor is doing from an audit standpoint. That way you have some idea of the problem space; if it tells you it isn’t doing any auditing — well, at least you know. You can notify the business owner of the potential risks and give them a real-numbers scenario should a breach occur; you can look for strategies to limit the quantity or type of data the vendor interacts with, and so on. The point is, knowing for a fact that it’s doing nothing gives you options.
If it tells you that it is proactively auditing access to your data, it’s not unreasonable for you to ask for evidence of what it found. If it’s contractually obligated to audit, if it specifically committed to doing so, or if it answers “yes” to your specific question on the topic — then it should be equipped to give you at least some level of assurance that it’s keeping to that commitment. Ask to see sanitized disciplinary records for personnel who have been identified as acting in an unauthorized fashion. If your vendor has been holding gigabytes of your records over a long enough period of time, it’s reasonable to expect that it has found something out of line and taken some kind of action.
If it hasn’t, or if it otherwise can’t show it to you? Well… that’s a warning sign. After all, human nature is what it is, and folks are going to try to look at the records. It’s the degree to which organizations identify the behavior, plan for it, and respond that separates the exemplary from the rest of the pack.
Ed Moyle is a manager with CTG’s information security solutions practice, providing strategy, consulting and solutions to clients worldwide, as well as a founding partner of Security Curve. His extensive background in computer security includes experience in forensics, application penetration testing, information security audit and secure solutions development.