Nearly half of the large organizations that participated in a recent security survey conducted by Infonetics said they relied solely on the security offered by their smartphone operating system. Even the BlackBerry OS, which is considered comparatively robust, “won’t necessarily cover Internet threats including email and Web-borne malware,” according to the report.
There are many common, but inaccurate, assumptions about the security and privacy of smartphones and other handheld converged devices. For many corporate employees today, mobile phones and PDAs have replaced PCs. Enterprise workers are now performing the same functions they previously carried out on their desktop PCs on much smaller devices, virtually anywhere and anytime.
One of the hidden dangers most CSOs and CIOs aren’t adequately addressing these days is rogue code infecting their employees’ mobile phones — or worse, their corporate networks. That’s unfortunate; although no major incidents have been reported yet, it’s only a matter of time before some serious event takes place.
Interestingly, it’s not as though these corporate executives aren’t paying attention to mobile security. After all, there are management tools to disable lost phones and passwords to protect in-use devices. Still, too few enterprise security execs have given enough thought to the possibility that malware downloaded onto these devices could infect the entire network.
Malware can take many different forms. Depending on the type of application, a piece of malware could cause a phone to dial foreign numbers, send an exorbitant number of text messages, or cause some other form of disturbance designed to drive up the user’s phone bill. Or it might flood the network with meaningless messages or render the device inoperable, causing increased help desk costs for the carrier.
The Dawn of a New Mobile Era
Think back to the movie “Independence Day,” in which humans defeated alien invaders by planting a virus in their mother ship’s computer, which led to the destruction of the entire alien fleet. While that movie is a fantasy, hundreds of known malicious software programs have been written to attack mobile devices. Putting a virus into a program that gets distributed via a mobile phone network would be an effective way to infect the entire network.
Consider the danger: Once the mobile device is infected, the malware may tunnel into the network, where it could seek out password or credit card information. As with a time bomb, the enterprise may have no idea of the danger until it is too late.
Millions of enterprise mobile workers frequently transact business using devices powered by BlackBerry, Windows Mobile, Symbian, and other OSes that are vulnerable to viruses and malware unless certain steps are taken. The same criminals spoofing Web sites in order to gain access to individuals’ personal information have figured out that access to enterprise information is far more rewarding. Major hacks into corporate sites already seem like monthly news, but mobile device hacks may be lurking in the wings.
Smartphones today are basically minicomputers, able to browse the Internet and download code from many different places. In fact, many carriers provide download sites for their customers to use as one-stop shops. In addition, vendors provide applications for many different operating systems.
Scammers can advertise rogue code and point browsers to their Web site to trick users into downloading an application that is not legitimate. Consider a phishing attack, for example, in which an unsuspecting user receives an email with a link to “update” his bank account info. He is then directed either to a rogue Web site where code can be silently downloaded or prompted to download a screensaver or some other application that looks legitimate but is really malware.
The fact is, mobile phones are here to stay and have become woven into the fabric of corporate information processing. Mobile devices once were simply phones; now, they are very intelligent data devices, and they are getting smarter and more robust every day.
This is a classic case of balancing convenience against absolute security. Security professionals need to consider what steps and policies they can adopt to ensure that the applications being downloaded by employees are safe and do not wind up causing a material information breach.
The following three steps are directed at OS providers, so that when they turn on or enforce code signing policies, their policies are strong. These starting points will ensure that CISOs can sleep easier, knowing that steps have been taken to guarantee the authenticity of code authors for mobile devices on their network.
How to Protect Mobile Devices From Malware
Step 1: Make sure code is signed by trusted individuals.
The first step in protecting mobile devices is to ensure that digital certificates are used to authenticate downloaded code. A digital certificate is an ID that contains information about the person, machine or program to whom the certificate was issued. Certificates provide assurance that information comes from a reliable source. In short, a certificate enables digital trust.
Certificates enable developers to sign their work and verify that a program — a particular version of the code — is what they actually wrote; that is, that it was not subjected to tampering. Mobile phone code developers use certificates today to ensure programs are valid before allowing them to be downloaded to literally millions of devices globally.
The good news is that certificates are inexpensive and, in fact, most mobile device suppliers require that all potentially dangerous code be signed before it is used. Certificates serve as a deterrent to malicious behavior, since they indicate both who signed the code and when the signature was provided. Authors of malware obviously don’t want such information to be known and are unlikely to sign malicious code.
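The verify-before-install flow described above can be sketched in a few lines. This is a simplified, dependency-free illustration: a real scheme uses public-key certificates (such as X.509), whereas here an HMAC over the code’s digest stands in for the publisher’s signature; all names are illustrative.

```python
import hashlib
import hmac

# Stand-in for the publisher's private signing key (a real system would
# use an asymmetric key pair bound to a digital certificate).
SIGNING_KEY = b"publisher-demo-key"

def sign_package(code: bytes) -> bytes:
    """'Sign' the code by MACing its SHA-256 digest."""
    digest = hashlib.sha256(code).digest()
    return hmac.new(SIGNING_KEY, digest, hashlib.sha256).digest()

def verify_package(code: bytes, signature: bytes) -> bool:
    """Recompute the signature and compare in constant time."""
    expected = sign_package(code)
    return hmac.compare_digest(expected, signature)

app = b"print('hello from a mobile app')"
sig = sign_package(app)

assert verify_package(app, sig)                 # untouched code verifies
assert not verify_package(app + b"#evil", sig)  # any tampering is detected
```

The point of the sketch is the policy, not the cryptography: the device refuses any code whose signature does not match the bytes it actually received, so a tampered download simply never runs.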
Step 2: Vet code signatures.
As noted, if a company allows workers to download unsigned programs from sites, rogue code could infect a device and then possibly an entire network. Digital signatures are a necessary component of a security solution, but they aren’t enough. For example, how do you know that authors of code are who they say they are? In fact, the process of verifying the identities of authors varies widely.
Typically, certificates are issued to developers after an identity check. A thorough vetting approach would include identity validation through recognized commercial certificate authorities that follow OMTP standards, as well as email address, valid credit card, and identity card (passport or driver’s license) checks. Some organizations may even translate foreign documents as part of the vetting process.
Some vendors — Symbian, for example — require developers who create applications using sensitive APIs to fax identity information — passport, driver’s license, etc. — to confirm they are who they say they are. They must also include information about their business and pay with a credit card. However, not all vendors are quite so thorough. Some only require that authors pay a certificate fee with a credit card, which could, of course, be stolen. Little can be done to identify the perpetrator in such cases.
Some operating system manufacturers — again, Symbian is one — require that code be tested by a third-party test house before it gets signed by a recognized commercial certificate authority. The test house runs the code through a battery of tests before putting its seal of approval on it, then passes the code back to the certificate authority for signing before it is returned to the developer.
Step 3: Require a unique signing key.
Properly done, vetting is about tying together all the disparate loose ends to eliminate any mischief — or make it extremely unlikely. However, there’s one more step that is often missing. Some OS vendors issue code-signing certificates directly to developers. In theory, that’s fine. As long as the developer uses and stores the certificate properly, security directors can sleep at night. But what if that certificate is given to another developer? Or stolen? Or misplaced? Then the entire security process has been compromised.
The proper way to ensure security is to create a unique signing key for each application so that developers must sign their code with a unique key each and every time they create new software. This can be done with a code-signing portal. The portal ensures the security of the signing key and the integrity of the code. Only through the portal can the code be signed with a key that will allow it to run on the phone. Since criminals don’t like to be identified, this step greatly reduces the risk of rogue code.
Another important advantage of the unique key approach is that bad applications can be rescinded by revoking the certificate for that application. Because each application has a unique certificate, the revocation of the certificate for one application has no effect on others. If a single certificate, such as a developer certificate, is used for multiple applications, then this granular revocation capability is lost.
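The per-application key model and its granular revocation can be sketched as a toy signing portal. This is a hypothetical illustration, not any vendor’s API: class and application names are invented, and an HMAC again stands in for a certificate-backed signature.

```python
import hashlib
import hmac
import os

class SigningPortal:
    """Toy portal: one unique signing key per application, with revocation."""

    def __init__(self):
        self._keys = {}       # app_id -> that application's unique key
        self._revoked = set() # app_ids whose certificates were rescinded

    def sign(self, app_id: str, code: bytes) -> bytes:
        # Each application gets its own fresh key the first time it is signed.
        key = self._keys.setdefault(app_id, os.urandom(32))
        return hmac.new(key, hashlib.sha256(code).digest(), hashlib.sha256).digest()

    def revoke(self, app_id: str):
        self._revoked.add(app_id)

    def verify(self, app_id: str, code: bytes, signature: bytes) -> bool:
        # Revoked or unknown applications never verify.
        if app_id in self._revoked or app_id not in self._keys:
            return False
        expected = hmac.new(self._keys[app_id],
                            hashlib.sha256(code).digest(), hashlib.sha256).digest()
        return hmac.compare_digest(expected, signature)

portal = SigningPortal()
mail_sig = portal.sign("mail-client", b"mail client code")
game_sig = portal.sign("puzzle-game", b"puzzle game code")

portal.revoke("puzzle-game")  # the game turned out to be malicious

assert portal.verify("mail-client", b"mail client code", mail_sig)      # unaffected
assert not portal.verify("puzzle-game", b"puzzle game code", game_sig)  # rejected
```

Because each application was signed under its own key, revoking the bad application invalidates only that one signature; a single shared developer key would have forced the portal to revoke every application at once.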
Enterprises can also take a role in ensuring authenticity. For example, some OS providers do not require applications to be signed, but provide tools for enterprises to manage devices on their network. An enterprise could implement a policy that all code be signed before executing on the device.
For example, the technical overview provided for BlackBerry Enterprise Security Release 4.1.2 includes this statement: “The BlackBerry Enterprise Solution is designed to use IT policies, application control policies, and code signing to contain malware by controlling third-party application access to the BlackBerry device resources and applications. These containment methods are designed to prevent malware that might gain access to the BlackBerry device from causing damage to the BlackBerry device, its applications and its data, or the corporate network.”
Today, smartphones are everywhere in corporate life. They provide email and access to data any place, any time. Yet up to now, many corporate IT departments have been slow to address the security issues that have arisen due to the widespread use of smartphones and PDAs.
Mobile devices comprise a multibillion dollar market that continues to grow rapidly while the breadth of functionality expands to rival that of laptops. In short, mobile devices cannot be ignored. Many large enterprises aren’t waiting for mobile equipment providers to maintain the high level of security they want, and they are defining their own stringent requirements to protect their networks.
In such cases, enterprises restrict users from downloading all but specified programs. However, smaller enterprises often don’t have these same safeguards in place. That’s why — for the safety of millions of businesses — digital certificates plus comprehensive vetting should be undertaken to protect networks.
By taking a few simple and inexpensive precautions — including the use of certificates and proper vetting — CSOs and CIOs should be able to sleep more securely, knowing their enterprises are safer.
John Adams is chief technology officer at ChosenSecurity, a provider of on-demand digital certificate management services. He can be reached at [email protected].