
Next-Gen Firewalls Make Old Arguments New Again

By Avishai Wool
Sep 28, 2011 5:00 AM PT

The last few years have brought us arguably the most significant change in firewall technology in decades. Ever since "stateful inspection" was introduced by Check Point in the mid-1990s, firewall administrators and information security officers have been defining security policies based primarily on a connection's source IP address, destination IP address and service.

Now, with the so-called next-generation firewalls (NGFWs) promoted by Palo Alto Networks and Check Point R75, policy can also be defined based on the application.

Through some impressive technological advances, these devices can discriminate between applications that share the same port. NGFWs can enforce fine-grained policies like "block file-swapping applications," or "allow Facebook but not its game applications," or even "block the super-sneaky Skype application" -- while allowing benign HTTP traffic through the firewall. The sales pitch is compelling for many security-conscious organizations, and many are indeed embracing the new technology.
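To make the idea concrete, here is a minimal sketch of application-aware, first-match rule evaluation. The rule list, application names and field layout are purely illustrative -- this is not any vendor's actual configuration syntax:

```python
# Hypothetical sketch of application-aware rule matching.
# Application names and the rule format are illustrative only.

RULES = [
    # (application, action) -- evaluated top-down, first match wins
    ("bittorrent", "block"),      # block file-swapping applications
    ("facebook-games", "block"),  # allow Facebook but not its games
    ("facebook", "allow"),
    ("skype", "block"),           # block Skype even when it mimics HTTP
    ("web-browsing", "allow"),    # benign HTTP traffic
]

def decide(application: str) -> str:
    """Return the action for a connection classified as `application`."""
    for app, action in RULES:
        if app == application:
            return action
    return "block"  # default drop for unclassified applications

# All of these flows may arrive on the same port (tcp/80), yet the
# application classification -- not the port -- drives the decision:
print(decide("facebook"))        # allow
print(decide("facebook-games"))  # block
print(decide("skype"))           # block
```

The key point is that every flow above could look identical at the port level; only the NGFW's application identification separates them.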

Building a Better Firewall

However, once we are past the excitement over the cool new technology (and it is indeed cool!), we have to realize that NGFWs need to be managed. This will require some thought and planning, particularly around "blacklisting" versus "whitelisting."

Fifteen years ago, there was a raging debate among firewall administrators about how a good firewall policy should be structured. The blacklisting proponents argued to "allow everything, and block the traffic you don't want," while the whitelisting aficionados countered with "block everything, and only allow the traffic you need." The debate was won in a landslide by the more secure whitelisting approach: Today, practically every firewall policy has a "default drop" rule and a great number of "allow" rules. Further, most regulations require such a structure for compliance.
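The structural difference between the two approaches comes down to the default action at the bottom of the policy. A small sketch (addresses, services and the matching logic are all hypothetical simplifications) makes the contrast visible:

```python
# Hypothetical sketch contrasting whitelist and blacklist policy
# structure. Addresses and services are illustrative placeholders.

# Whitelisting: enumerate what is allowed; everything else falls
# through to the "default drop" rule at the bottom of the policy.
WHITELIST = [
    ("any", "192.168.1.10", "tcp/443", "allow"),
    ("any", "192.168.1.20", "tcp/25",  "allow"),
]
WHITELIST_DEFAULT = "drop"

# Blacklisting: enumerate what is blocked; everything else passes.
BLACKLIST = [
    ("any", "any", "tcp/139", "drop"),
]
BLACKLIST_DEFAULT = "allow"

def evaluate(rules, default, src, dst, service):
    """First-match evaluation; unmatched traffic gets the default."""
    for r_src, r_dst, r_svc, action in rules:
        if (r_src in (src, "any") and r_dst in (dst, "any")
                and r_svc == service):
            return action
    return default

# The same unforeseen connection gets opposite treatment:
print(evaluate(WHITELIST, WHITELIST_DEFAULT,
               "10.0.0.5", "8.8.8.8", "tcp/80"))   # drop
print(evaluate(BLACKLIST, BLACKLIST_DEFAULT,
               "10.0.0.5", "8.8.8.8", "tcp/80"))   # allow
```

Under whitelisting, every new legitimate connection requires a new "allow" rule -- which is exactly where the administrative workload described below comes from.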

However, this more secure approach has a cost: whitelisting puts a significant workload on firewall administrators. This is because every new connection potentially requires yet another firewall rule -- which has to be planned, approved, implemented and validated. Some organizations process hundreds of such rule-change requests every week, and as a result, they suffer turnaround times of several weeks between change request and implementation.

Many corporate firewall policies have already ballooned into monsters totaling thousands of rules.

A Fresh Look

Such giant policies are extremely difficult to keep secure -- and they invariably contain a surprisingly high number of errors. In fact, research has demonstrated a clear correlation between policy complexity and the number of errors in the policy; for firewall policies, "small is beautiful." Now imagine what happens if, instead of a single (albeit crude) rule allowing HTTP, the policy includes 10,000 new rules, one per application. Without some careful design, the new policy could be even less secure.

With the advent of NGFWs, the blacklisting/whitelisting debate deserves a fresh look and a conscious choice. Consider this: If you decide to whitelist at the application level (i.e., block outbound tcp/80 and only allow those Web-applications you know about), how many more change requests per week will you be processing? Can your existing team handle the extra load without degradation to turnaround time? How does this impact your risk posture?

Furthermore, perhaps CISOs will find it easier to define policy via blacklisting, using rules like "block social networks, file sharing and video streaming, and allow all other Web traffic."
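That CISO-style rule can be sketched as category-based blacklisting, where the NGFW maps each identified application to a category and blocks only the unwanted categories. The category names and application-to-category mapping below are illustrative assumptions, not any product's actual taxonomy:

```python
# Hypothetical category-based blacklist mirroring the rule
# "block social networks, file sharing and video streaming,
# and allow all other Web traffic." Names are illustrative.

BLOCKED_CATEGORIES = {"social-networking", "file-sharing",
                      "video-streaming"}

APP_CATEGORIES = {          # classification an NGFW might maintain
    "facebook": "social-networking",
    "bittorrent": "file-sharing",
    "youtube": "video-streaming",
    "salesforce": "business",
}

def decide_blacklist(application: str) -> str:
    """Block only the unwanted categories; allow everything else."""
    category = APP_CATEGORIES.get(application, "unknown")
    return "block" if category in BLOCKED_CATEGORIES else "allow"

print(decide_blacklist("facebook"))    # block
print(decide_blacklist("salesforce"))  # allow
```

Note the trade-off: a brand-new, unclassified application is allowed by default here -- exactly the property that made blacklisting lose the original debate.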

As anecdotal evidence, compare how filtering Web proxies and Web-application firewalls (which do a similar job using different technologies) are configured. Blacklisting appears to be the more common approach for Web proxies, although some organizations do whitelist. Should NGFWs follow the Web proxy's blacklist style, or the classical firewall's whitelist approach?

Most of what is written about NGFWs has been about the technology. But what about the management challenges? We should be arguing about them! What do the regulators (PCI-DSS, NERC, NIST) say? What should the internal audit guidelines be (CobiT)? How about Managed Security Service Providers (MSSPs)?

We are going to have a few interesting years until the dust settles.

Avishai Wool is chief technology officer and cofounder of AlgoSec.
