Rogue Code on Mobile Devices Ought to Keep You Up at Night
IT pros who think their networks are safe might want to consider the mobile devices their employees are using to access those networks. Rogue code can sit, unnoticed, until it's too late. Increasingly, improperly protected mobile devices may be the culprit.
When it comes to maintaining the security of enterprises' mobile devices, many things keep corporate security officers and CIOs up at night -- but rogue code probably isn't one of them. Maybe it should be.
After all, there are management tools to disable lost phones and passwords to protect in-use devices, but too few enterprise security execs have given serious thought to malware downloaded onto these same devices -- malware that may go on to infect the entire network.
Think back to the movie "Independence Day," in which humans defeated the aliens by planting a virus in the alien mother ship's computer, which led to the destruction of the entire fleet. While that movie is a fantasy, hundreds of known malicious software programs have been written to attack mobile devices. Putting a virus into a program that then gets "distributed" via the mobile phone network would be the fastest way to infect the entire network. Consider the danger: Once a device is infected, malware can tunnel through the VPN into the corporate network, where it could seek out passwords or credit card information. Like a time bomb, the code may sit there, with the enterprise unaware of the danger until it is too late.
Unfortunately, while millions of enterprise mobile workers frequently transact business through their BlackBerries, iPhones and other devices, these devices are vulnerable to viruses and malware unless certain steps are taken. The same criminals spoofing Web sites in order to gain access to your personal information have figured out that access to enterprise information is far more rewarding. And while major hacks of corporate sites make the news almost monthly, mobile device hacks are lurking in the wings.
Step by Step
Here are some tips for protecting mobile devices from rogue code:
- Step 1: Make Sure Code Is Signed by Trusted Individuals
The first step in protecting mobile devices is to ensure that digital certificates are used to authenticate downloaded code. A digital certificate is an ID that contains information about the person, machine or program to which the certificate was issued. Certificates provide you with assurance that what you are about to use comes from a reliable source. In short, a certificate enables digital trust.
If you are a developer, certificates enable you to sign your work so that others can verify that a given program and version is exactly the code you wrote (i.e., it has not been tampered with). Mobile phone code developers use certificates today to ensure programs are valid before being downloaded to literally millions of devices globally.
The good news is that certificates are inexpensive and, in fact, most mobile device suppliers require that all code be signed before it is used. Certificates serve as a deterrent to malicious behavior, since we know both who signed the code and when they signed it. And since authors of malware don't want this information to be known, protection is enhanced.
- Step 2: Vetting
As noted, if a company allows workers to download "unsigned" programs from sites, rogue code could infect the device and then possibly the entire network. Digital signatures are a necessary component of the security solution, but they aren't enough. For example, how do you know that authors of code are who they say they are? In fact, the process of verifying the identity of authors varies widely. Some mobile OS vendors require authors to fax identity information (passport, driver's license, etc.) to confirm they are who they say they are. They must also include information about their business and pay with a credit card. Interestingly, other vendors aren't quite so thorough. In fact, some only require that authors pay a certificate fee with a credit card, which could, of course, be stolen. Little can be done to identify the perpetrator in such cases.
- Step 3: More Vetting
Properly done, vetting ties all these loose ends together to eliminate, or at least make extremely unlikely, any mischief. But there's one more step that is often missing. Some OS vendors issue code-signing certificates directly to developers. In theory, that's fine: As long as the developer uses and stores the certificate properly, security directors can sleep at night. But what if that certificate is handed to another developer? Or stolen? Or misplaced? Then the entire security process has been compromised. The better approach is to keep the signing key in a portal, so that developers must upload their code each and every time they create new software. In that way, the portal safeguards the signing key and the integrity of the code. Only the portal can sign the code with a key that will allow it to run on the phone. And since criminals don't like to be identified, this greatly reduces the risk of rogue code.
Many large enterprises aren't waiting for mobile equipment providers to maintain this high level of security and are defining their own stringent requirements to protect their networks. In such cases, enterprises restrict users from downloading all but specified programs. But smaller enterprises don't have these same capabilities. That's why, for the safety of millions of businesses, digital certificates plus comprehensive vetting should be used to protect our networks. By following these few simple and inexpensive steps -- using certificates and proper vetting -- CSOs and CIOs should be able to sleep more securely, knowing their enterprises are safer, too.
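The "specified programs only" restriction that large enterprises enforce is, at its simplest, an allowlist keyed on a digest of each approved build. The sketch below is hypothetical -- `approve` and `mayInstall` are invented names, and real mobile device management products track far more than a hash -- but it shows the core check: anything whose digest isn't on the list doesn't install.

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// approved maps SHA-256 digests of vetted builds to permission;
// only programs on this list may be installed on managed devices.
var approved = map[string]bool{}

// digest computes a hex SHA-256 fingerprint of a program's bytes.
func digest(code []byte) string {
	sum := sha256.Sum256(code)
	return hex.EncodeToString(sum[:])
}

// approve records a vetted build on the enterprise allowlist.
func approve(code []byte) {
	approved[digest(code)] = true
}

// mayInstall is the gate a managed device applies before installing.
func mayInstall(code []byte) bool {
	return approved[digest(code)]
}

func main() {
	crm := []byte("approved CRM client build 42")
	approve(crm)
	fmt.Println(mayInstall(crm))                        // true
	fmt.Println(mayInstall([]byte("unknown freeware"))) // false
}
```

Because the digest covers the whole binary, a tampered copy of an approved program produces a different hash and is rejected along with unknown software.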
Dean Coclin is vice president of business development for ChosenSecurity.