Are Security Vendors Living in Glass Houses?
Keeping clients' systems from being breached is a security vendor's reason for being. But what happens when the vendor itself is breached? Whether to disclose that a system has been compromised can be a difficult question to answer, depending on the circumstances and what the infiltrator plans to do with the stolen information.
Feb 28, 2012 5:00 AM PT
What would happen if you paid taxes or protection money but didn't get protected because your protectors themselves were getting clobbered? Worse still, what if they didn't tell you they had been compromised and that you might not be safe?
That situation played out recently after yet another company suffered a system breach and kept largely silent on the matter.
That company was VeriSign. It's a certificate authority, meaning it's one of the issuers of the digital certificates that lie at the heart of our digital lives.
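Why a certificate authority breach is so serious can be sketched with a toy model of the trust relationship. This is an illustration only, not real X.509 validation; the trust-store names and certificate fields here are hypothetical:

```python
# Toy illustration of the certificate-authority trust model (not real X.509
# validation). Clients ship with a small set of trusted CAs and accept any
# certificate a trusted CA has signed -- so if a CA itself is compromised,
# an attacker could mint certificates that every client accepts.

TRUSTED_CAS = {"VeriSign", "SomeOtherCA"}  # hypothetical trust store

def is_trusted(cert: dict) -> bool:
    """Accept a certificate only if it was issued by a CA in the trust store."""
    return cert["issuer"] in TRUSTED_CAS

legitimate = {"subject": "www.example.com", "issuer": "VeriSign"}
rogue = {"subject": "www.example.com", "issuer": "EvilCA"}

print(is_trusted(legitimate))  # True
print(is_trusted(rogue))       # False
```

The sketch shows why disclosure matters: a breached CA undermines every client whose trust store contains it, not just the CA's own systems.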
VeriSign's corporate network was breached several times in 2010, but the company didn't utter a peep. It was only earlier this month that the news emerged, and that only when the company filed its 10-Q form with the Securities and Exchange Commission (SEC).
Asked about the news of the VeriSign hack, Symantec, which purchased VeriSign's security business in 2010, declined to elaborate. The company didn't have much to add to the story, spokesperson Mike Bradshaw told TechNewsWorld.
The Silence of the Lambs
Secrecy on the part of security vendors after they've been hacked is nothing new.
In January, news that hackers had hit Symantec and stolen the source code for its pcAnywhere remote access software made headlines when the hackers posted the code to the Web.
Symantec's response was to tell customers to observe best security practices and warn them that, in the worst case, they might have to disable the application.
However, that code was stolen back in 2006, according to Symantec.
Run Silent, Run Deep
Symantec, VeriSign and McAfee are among the largest security vendors in the industry, meaning their products are widely used among both consumers and government agencies.
"Silence concerning any data breach can and does put customers or end users at risk," Jerry Irvine, CIO of Prescient Solutions and a member of the National Cyber Security Task Force, told TechNewsWorld. So does silence or lack of notification about any application vulnerability.
Keeping silent about vulnerabilities is not a good idea because "security through obscurity provides no security at all, and the more time that passes, the more likely it is that someone somewhere may have discovered an exploitable vulnerability in software," Andrew Brandt, director of threat research at Solera Networks Research Labs, told TechNewsWorld.
There Oughta Be a Law
However, no current laws require full disclosure of all data breaches, Prescient's Irvine pointed out.
It's not known whether other security vendors have had their systems breached -- or, if they have, how many were hit and who they are.
"There is a good probability that other vendors have been compromised, but they determined that the risk of staying quiet was lower than the risk of going public," Joel Bomgar, founder and CEO of Bomgar, told TechNewsWorld.
For Want of a Nail ...
When security vendors remain silent about their systems having been breached, the risk created goes well beyond consumers and enterprises. It may also impact critical infrastructure.
The majority of computers used in government and in the enterprise run Microsoft Windows. That includes companies that own or operate critical infrastructure. Symantec, VeriSign and McAfee are among the major vendors providing security on Windows desktops, so breaches of their systems could have a wide-ranging effect.
Right now, the Cybersecurity Act of 2012 is wending its way through Congress, picking up active opposition from Republicans on its way. The bill seeks to give the United States Department of Homeland Security (DHS) increased power over critical infrastructure owned by the private sector.
Sen. John McCain plans to introduce an alternative bill shortly that reportedly aims to give overall authority for critical infrastructure to the National Security Agency (NSA).
However, "whether it's Sony, RSA, Stratfor or Symantec, no one is spared in the world of organized hacking," Parvin Kothari, founder and CEO of CipherCloud, told TechNewsWorld. Most companies, including security vendors, "do not put enough effort into understanding their exposure."
No Sparing of the Rod
If security firms don't tell anyone that their systems have been breached, and the resulting security vulnerabilities allow hackers to penetrate government sites or attack national infrastructure, are they culpable?
That's a difficult question to answer. On the one hand, publicizing a breach might sow fear and panic among enterprise customers. Further, making it widely known that a vendor's products might be compromised could well spur hackers to ramp up their efforts and, inevitably, find a flaw somewhere.
"A determined, well-funded adversary will almost always succeed [in breaching software or servers] on some level, given enough time and resources to devote to the problem," Solera Networks' Brandt suggested.
Hackers "are always testing new technologies to bypass current security methods and are constantly changing their own processes," Prescient's Irvine said.
On the other hand, by not disclosing news that they've been breached, security vendors risk the wrath of their customers when the event is discovered, especially if their customers have been hacked using information stolen during the breach.
Perhaps the new Cybersecurity Act or a similar law will specify who needs to do what, when and how, and enforce those rules. Until then, all a security vendor's customers can do is update their security systems regularly and pray.