What the Future Holds
Systems that might be called "third generation firewalls" - firewalls that combine the features and capabilities of packet filtering and proxy systems into something more than both - are just starting to become available.
More and more client and server applications are coming with native support for proxied environments. For example, many WWW clients include proxy capabilities, and lots of systems are coming with run-time or compile-time support for generic proxy systems such as the SOCKS package.
Packet filtering systems continue to grow more flexible and gain new capabilities, such as dynamic packet filtering. With dynamic packet filtering, such as that provided by the CheckPoint Firewall-1 product, the Morning Star Secure Connect router, and the KarlBridge/KarlBrouter, the packet filtering rules are modified "on the fly" by the router in response to certain triggers. For example, an outgoing UDP packet might cause the creation of a temporary rule to allow a corresponding, answering UDP packet back in.
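The UDP example above can be sketched in a few lines. This is an illustrative simulation, not any vendor's actual API: the `Packet` structure, the `DynamicFilter` class, and the 30-second rule lifetime are all assumptions made for the example.

```python
# A minimal sketch of dynamic packet filtering: an outgoing UDP packet
# installs a temporary rule admitting the corresponding answering packet.
import time
from dataclasses import dataclass

@dataclass(frozen=True)
class Packet:
    src_ip: str
    src_port: int
    dst_ip: str
    dst_port: int

class DynamicFilter:
    def __init__(self, ttl=30.0):
        self.ttl = ttl              # lifetime of a temporary rule, in seconds
        self.allowed_replies = {}   # (remote ip/port, local ip/port) -> expiry time

    def outbound(self, pkt):
        # Outgoing packet seen: create a temporary rule "on the fly" so the
        # corresponding reply can come back in.
        key = (pkt.dst_ip, pkt.dst_port, pkt.src_ip, pkt.src_port)
        self.allowed_replies[key] = time.monotonic() + self.ttl

    def inbound_allowed(self, pkt):
        # Incoming packet is allowed only if it answers a recent outgoing one.
        key = (pkt.src_ip, pkt.src_port, pkt.dst_ip, pkt.dst_port)
        expiry = self.allowed_replies.get(key)
        if expiry is None or time.monotonic() > expiry:
            self.allowed_replies.pop(key, None)
            return False
        return True
```

For example, after an internal host sends a DNS query to an external server, the answering packet is admitted, while an unsolicited packet from some other host is not.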
The first systems that might be called "third generation" are just starting to appear on the market. For example, the Borderware product from Border Network Technologies and the Gauntlet 3.0 product from Trusted Information Systems[6] look like proxy systems from the external side (all requests appear to come from a single host), but look like packet filtering systems from the inside (internal hosts and users think they're talking directly to the external systems). They accomplish this magic through a generous amount of internal bookkeeping on currently active connections and through wholesale packet rewriting to preserve the relevant illusions to both sides. The KarlBridge/KarlBrouter product extends packet filtering in other directions, providing extensions for authentication and filtering at the application level. (This is much more precise than the filtering possible with traditional packet filtering routers.)
[6] The same folks who produce the free TIS FWTK discussed throughout this tutorial.
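The bookkeeping-and-rewriting trick described above can be sketched as follows. This is a simplified illustration of the idea, not how any of the named products actually work internally; the class name, the port range, and the external address (taken from the 192.168 private range) are all assumptions made for the example.

```python
# A minimal sketch of the connection bookkeeping behind "proxy on the
# outside, transparent on the inside": every outgoing connection is
# rewritten so it appears to come from the firewall's single external
# address, and replies are rewritten back using the stored mapping.
import itertools

class RewritingFirewall:
    def __init__(self, external_ip="192.168.1.1"):
        self.external_ip = external_ip
        self.next_port = itertools.count(20000)  # ports used on the external side
        self.out_map = {}  # (internal ip, internal port) -> external port
        self.in_map = {}   # external port -> (internal ip, internal port)

    def rewrite_outbound(self, src_ip, src_port):
        # Replace the internal source address with the firewall's own,
        # recording the mapping so replies can be routed back.
        key = (src_ip, src_port)
        if key not in self.out_map:
            port = next(self.next_port)
            self.out_map[key] = port
            self.in_map[port] = key
        return self.external_ip, self.out_map[key]

    def rewrite_inbound(self, dst_port):
        # Restore the internal destination from the bookkeeping table;
        # returns None when no active connection matches.
        return self.in_map.get(dst_port)
```

From the outside, every connection appears to originate from `external_ip`; from the inside, hosts never see the rewriting at all.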
While firewall technologies are changing, so are the underlying technologies of the Internet, and these changes will require corresponding changes in firewalls.
The underlying protocol of the Internet, IP, is currently undergoing major revisions, partly to address the limitations imposed by the use of four-byte host addresses in the current version of the protocol (which is version 4; the existing IP is sometimes called IPv4), and the blocks in which they're given out. Basically, the Internet has been so successful and become so popular that four bytes simply isn't a big enough number to assign a unique address to every host that will join the Internet over the next few years, particularly because addresses must be given out to organizations in relatively large blocks.
Attempts to solve the address size limitations by giving out smaller blocks of addresses (so that a greater percentage of them are actually used) raise problems with routing protocols. Stop-gap solutions to both problems are being applied but won't last forever. Estimates for when the Internet will run out of new addresses to assign vary, but the consensus is that either address space or routing table space (if not both) will be exhausted sometime within a few years after the turn of the century.
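The arithmetic behind the exhaustion worry is straightforward. The figures below follow from the four-byte address size and the classful allocation scheme mentioned above; the variable names are, of course, just for illustration.

```python
# Back-of-the-envelope arithmetic on IPv4 address space and classful
# block assignment.
total_addresses = 2 ** 32   # every possible four-byte address
class_a_hosts = 2 ** 24     # host addresses consumed by one Class A network
class_a_nets = 126          # usable Class A network numbers (1 through 126)

print(total_addresses)      # 4294967296
print(class_a_hosts)        # 16777216

# The 126 Class A networks together cover nearly half the address space,
# whether or not their holders use the addresses:
print(class_a_nets * class_a_hosts / total_addresses)  # 0.4921875
```

This is why handing out addresses in large blocks wastes space so quickly: a single Class A assignment ties up over 16 million addresses at once.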
While they're working "under the hood" to solve the address size limitations, the people designing the new IP protocol (which is often referred to as "IPng" for "IP next generation" - officially, it will be IP version 6, or IPv6, when the standards are formally adopted and ratified) are taking advantage of the opportunity to make other improvements in the protocol. Some of these improvements have the potential to cause profound changes in how firewalls are constructed and operated; however, it's far too soon to say exactly what the impact will be. It will probably be at least 1997, if not later, before IPng becomes a significant factor for any but the most "bleeding edge" organizations on the Internet. (IPv6 is described in somewhat more detail elsewhere in this tutorial.)
The underlying network technologies are also changing. Currently, most networks involving more than two machines (i.e., almost anything other than dial-up or leased lines) are susceptible to snooping; any node on the network can see at least some traffic that it's not supposed to be a party to. Newer network technologies, such as frame relay and Asynchronous Transfer Mode (ATM), pass packets directly from source to destination, without exposing them to snooping by other nodes in the network.
Private IP Addresses
In general, sites should obtain and use IP addresses that have been assigned specifically to them by either their service provider or their country's Network Information Center (NIC). This coordinated assignment of addresses will prevent sites from having difficulties reaching other sites because they've inadvertently chosen conflicting IP addresses. Coordinated assignment of addresses also makes life easier (and therefore more efficient) for service providers and other members of the Internet routing core.
Unfortunately, some organizations have simply picked IP addresses out of thin air, because they didn't want to go to the trouble of getting assigned IP addresses, because they couldn't get as many addresses as they thought they needed for their purposes (Class A nets are extremely difficult to come by because there are only 126 possible Class A network numbers in the world), or because they thought their network would never be connected to the Internet. The problem is, if such organizations ever do want to communicate with whoever really owns those addresses (via a direct connection, or through the Internet), they'll be unable to because of addressing conflicts.
RFC1597[7] recognizes this long-standing practice and sets aside certain IP addresses (Class A net 10, Class B nets 172.16 through 172.31, and Class C nets 192.168.0 through 192.168.255) for private use by any organization. These addresses will never be officially assigned to anyone and should never be used outside an organization's own network.
[7] RFCs (Requests for Comments) are Internet standards documents.
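The three reserved blocks listed above are easy to check against programmatically. The sketch below expresses them in modern CIDR prefix notation using Python's standard ipaddress module; the function name is our own invention for the example.

```python
# Checking whether an address falls within the private ranges that
# RFC1597 sets aside for use inside an organization's own network.
from ipaddress import ip_address, ip_network

PRIVATE_NETS = [
    ip_network("10.0.0.0/8"),      # Class A net 10
    ip_network("172.16.0.0/12"),   # Class B nets 172.16 through 172.31
    ip_network("192.168.0.0/16"),  # Class C nets 192.168.0 through 192.168.255
]

def is_rfc1597_private(addr):
    """Return True if addr lies in one of the RFC1597 private blocks."""
    a = ip_address(addr)
    return any(a in net for net in PRIVATE_NETS)
```

For example, 10.1.2.3 and 172.31.255.1 are private, while 172.32.0.1 falls just outside the reserved Class B range and is not.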
As RFC1627[8] (a followup to RFC1597) points out, RFC1597 doesn't really address the problem; it merely codifies the problem so that it can be more easily recognized in the future. If a site chooses to use these private addresses, they're going to have problems if they ever want to link their site to the Internet (all their connections will have to be proxied, because the private addresses must never leak onto the Internet), or if they ever want to link their site to another site that's also using private addresses (for example, because they've bought or been bought by such a site).
[8] RFC1918 has superseded RFC1627 and made it obsolete.
Our recommendation is to obtain and use registered IP addresses if at all possible. If you must use private IP addresses, then use the ones specified by RFC1597, but beware that you're setting yourself up for later problems. We use the RFC1597 addresses throughout this tutorial as sample IP addresses, because we know they won't conflict with any site's actual Internet-visible IP addresses.