The case for wider use of Next Generation Firewalls

Cyberattacks have gotten very sophisticated, to say the least.

Traditional perimeter firewalls remain in wide use as a fundamental defense mechanism. But a group of security vendors is pushing for wider use of so-called Next Generation Firewalls, or NGFWs, which integrate firewall, intrusion detection and prevention, application monitoring, and authentication and policy-use controls.

These vendors include NSS Labs, Barracuda, Check Point, Cisco, Fortinet, Juniper, Palo Alto Networks and SonicWall. In this LastWatchdog guest post, AlgoSec’s CTO, Professor Avishai Wool, of Tel Aviv University, makes the technical argument for more pervasive use of NGFWs. (Clarification, 02Nov2010: NSS Labs tests security products, including firewalls, and publishes the results.)


By Avishai Wool

The last few years have brought us arguably the most significant change in firewall technology in decades. Ever since “Stateful Inspection” was introduced by Check Point in the mid-1990s, firewall administrators and information security officers have been defining security policies based primarily on a connection’s source IP address, destination IP address, and service.

Now, with the so-called “Next Generation” firewalls (NGFWs) promoted by Palo Alto Networks and by Check Point (in its R75 release), policy can also be defined based on the “application”.

To understand why this technical detail is an exciting development for organizations, we need a bit of background. Almost all organizations let their users browse the net. From a firewall point of view, this policy is implemented by allowing the “http” service (technically, tcp on port 80) from the internal network to anywhere.
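
To make the traditional model concrete, here is a minimal Python sketch (my own illustration; the addresses and rule format are hypothetical, not any vendor’s syntax) of a policy that matches only on source, destination, and service:

```python
from ipaddress import ip_address, ip_network

# Classic firewall rules match only on source, destination, and service
# (port); they say nothing about which application generates the traffic.
RULES = [
    # (source network, destination network, protocol, port, action)
    ("10.0.0.0/8", "0.0.0.0/0", "tcp", 80, "allow"),  # internal net -> anywhere, http
]
DEFAULT_ACTION = "drop"

def evaluate(src, dst, proto, port):
    for rule_src, rule_dst, rule_proto, rule_port, action in RULES:
        if (ip_address(src) in ip_network(rule_src)
                and ip_address(dst) in ip_network(rule_dst)
                and (proto, port) == (rule_proto, rule_port)):
            return action
    return DEFAULT_ACTION

# Any program that speaks TCP on port 80 is waved through:
print(evaluate("10.1.2.3", "203.0.113.7", "tcp", 80))  # -> allow
```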

The trouble is that application programmers have caught on to this policy and adjusted: almost every web-application now communicates over tcp/80. Since this port is practically certain to be open, there is no need for the application’s users to ask for a new rule through the firewall; the application will “just work”. This is very convenient for application developers, and also for application users.

But it is a serious concern for information security officers, because not all web-applications are born equal. While many web-applications are important business tools, others are not: some are inappropriate (think file-swapping applications), some are vectors for sensitive data loss (like personal network storage), and others are bandwidth hogs (like streaming video apps).

And lurking among all these we have the truly nasty apps: cyber-warfare tools, corporate espionage trojans, identity-stealing bots, viruses and worms, etc. And all these apps use tcp/80 – the good, the bad, and the ugly.

This leaves the information security officer with an unpleasant choice: Either block all the applications that use tcp/80, and disrupt business in a major way – or allow all apps, and assume the risk. Practically every firewall policy I have seen chooses business continuity over safety, and keeps tcp/80 open – with the associated heartburn for CISOs everywhere.

Now enter NGFWs. Through some pretty impressive technological advances, these devices can discriminate between applications that share the same port. NGFWs can enforce fine-grained policies like “block file-swapping applications”, or “allow Facebook but not its game applications”, or even “block the super-sneaky Skype application” – while allowing benign http traffic through the firewall.
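
As a rough sketch of the idea (the application labels and rule format here are hypothetical, not any vendor’s actual configuration), the key change is that a rule can now match on an application label produced by the firewall’s traffic classification, rather than on the port alone:

```python
# Hypothetical NGFW-style rules: all of this traffic may arrive on tcp/80,
# but the verdict depends on the classified application, not on the port.
APP_RULES = [
    ("bittorrent",     "deny"),   # block file-swapping applications
    ("facebook-games", "deny"),   # block Facebook's game applications...
    ("facebook-base",  "allow"),  # ...while allowing Facebook itself
    ("web-browsing",   "allow"),  # benign http traffic still flows
]

def evaluate_app(app_label):
    # First-match semantics, mirroring how firewall rule bases are evaluated.
    for app, action in APP_RULES:
        if app_label == app:
            return action
    return "deny"  # unrecognized applications fall through to the default

print(evaluate_app("web-browsing"))    # -> allow
print(evaluate_app("facebook-games"))  # -> deny
```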

The sales pitch is very compelling for many security-conscious organizations, and lots of them are indeed embracing the new technology.

However, once we are past the excitement over the cool new technology (and it is indeed cool!), we have to realize that NGFWs need to be managed. This will require some thought and planning. I’d like to raise two points you should think about when you are considering NGFWs.

The first point is policy granularity. For many years firewall policies were defined at a crude “service” granularity – lumping thousands of applications into a single “service”. Even so, many corporate firewall policies have ballooned into monsters totaling thousands of rules.

Such giant policies are extremely difficult to keep secure – and invariably contain a surprisingly high number of errors. In fact, my research has demonstrated a clear correlation between policy complexity and the number of errors in the policy: for firewall policies, “small is beautiful”.

Now imagine what will happen if, instead of a single (albeit crude) rule allowing http, the policy includes 10,000 new rules, one per application… Without some careful design, the new policy could end up even less secure, simply because of all the new errors that will creep in.

The second point is about “blacklisting” versus “whitelisting”. Fifteen years ago there was a raging debate among firewall administrators about how a good firewall policy should be structured. The “blacklisting” proponents advocated “allow everything, and block the traffic you don’t want”, while the “whitelisting” aficionados argued to “block everything, and only allow the traffic you need”.

This debate was won by a landslide in favor of the more secure “whitelisting” approach: today practically every firewall policy has a “default drop” rule and a great number of “allow” rules. Further, most regulations require such a structure for compliance.
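
In policy terms, the two structures differ only in the final, catch-all rule. A minimal sketch (the service numbers are chosen purely for illustration):

```python
# Whitelist: enumerate what the business needs; everything else hits the
# "default drop" rule at the bottom of the rule base.
WHITELIST = [("tcp", 80), ("tcp", 443), ("tcp", 25)]  # http, https, smtp

def whitelist_verdict(proto, port):
    return "allow" if (proto, port) in WHITELIST else "drop"

# Blacklist: enumerate what is forbidden; everything else is allowed.
BLACKLIST = [("tcp", 23)]  # e.g., telnet

def blacklist_verdict(proto, port):
    return "drop" if (proto, port) in BLACKLIST else "allow"

print(whitelist_verdict("tcp", 8080))  # -> drop  (not explicitly needed)
print(blacklist_verdict("tcp", 8080))  # -> allow (not explicitly forbidden)
```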

However, this more secure approach has a cost: whitelisting causes a significant workload on firewall administrators. This is because every new connection potentially requires yet another firewall rule – which has to be planned, approved, implemented, and validated. Some organizations I’ve spoken to process hundreds of such rule-change requests every week, and as a result, suffer turnaround times of several weeks between change request and implementation.

With the advent of NGFWs, I think the blacklisting/whitelisting debate deserves a fresh look, and a conscious choice. Consider this: If you decide to whitelist at the application level (i.e., block outbound tcp/80 and only allow those web-applications you know about) – how many more change requests per week will you be processing? Can your existing team handle the extra load without degradation to turnaround time? Will you require additional headcount?

Furthermore, perhaps CISOs will find it easier to define policy via blacklisting, with rules like “block social networks, file sharing and video streaming, and allow all other web traffic”?
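
Such an application-level blacklist could be as simple as this sketch (the category names are hypothetical):

```python
# Hypothetical category-based blacklist: deny a handful of application
# categories, and allow all other web traffic by default.
BLOCKED_CATEGORIES = {"social-networking", "file-sharing", "video-streaming"}

def web_verdict(app_category):
    return "deny" if app_category in BLOCKED_CATEGORIES else "allow"

print(web_verdict("file-sharing"))  # -> deny
print(web_verdict("web-email"))     # -> allow: not on the blacklist
```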

As anecdotal evidence, compare how filtering web-proxies and web-application firewalls (which do a similar job using different technologies) are configured. As far as I can tell, blacklisting is the more common approach for web-proxies, although I have spoken to some organizations that whitelist. Should NGFWs follow the web-proxy blacklist style – or should they follow the classical firewall’s whitelist approach?

So far most of what I’ve read about NGFWs has been about the technology. But what about the management challenges? We should be arguing about them! What do the regulators (PCI-DSS, NERC, NIST) say? What should the internal audit guidelines be (CobiT)? How about Managed Security Service Providers (MSSPs)? What are the vendors teaching in their NGFW configuration classes?

I think we’re going to have a few interesting years until the dust settles.
