Introduction

The Internet is a great technological advance that provides easy access to information and the ability to publish information in revolutionary ways; but it is also a major danger that provides the ability to pollute and destroy information in revolutionary ways. This paper describes one way to balance the advantages and the risks, in order to participate in the Internet while still protecting yourself. The discussion provides a detailed overview that focuses on the variety of relevant issues, recommends strategies for coping with these, and compares their respective advantages and disadvantages. It is not within scope, however, to present the products of specific vendors, since the underlying technologies are rapidly evolving.

Problem Description

Just what are we trying to protect, anyway? And against what, and from whom? In olden days, brick walls were built between buildings in apartment complexes so that if a fire broke out, it would not spread from one building to another. Quite naturally, the walls were called "firewalls". When you connect to the Internet, you are enabling your users to reach and communicate with the outside world; at the same time, however, you are enabling the outside world to reach and interact with your system. Internet firewalls, in their barest sense, are routers through which the Internet data traffic flows: if intruders attempt unauthorized access to your network, you can stop them at the firewall and not allow them any further into the system.

Targets of Attack

When you connect to the Internet, you put three things at risk: your computer data, your computer resources, and your reputation. These are vulnerable to various types of attack and attackers.

- Your Data: The data on a computer possesses three distinct attributes that need protection: privacy (from undesired access), integrity (from undesired modification), and availability (for desired use). Most often, people focus on the privacy issues; but assuming that such data can be secluded from Internet access via separate machines, why should you care about integrity and availability? Even if your data is not particularly secret, you'll still suffer the consequences if it's destroyed or modified. Some of the costs are obvious: if you lose data, its recovery or reconstruction will carry a price; the loss could also translate to lost sales. Intangible costs include loss of confidence on the part of users, customers, investors, staff, students, etc.

- Your Resources: Even if you have data that you don't care about, if other people are going to use your computers, you probably would like to benefit from this use in some way. You spend good time and money on your computing resources, and it's your right to determine how they are used. Intruders often argue that they are using only excess capacity, so that their intrusions don't cost anything; however, it's impossible for an intruder to reliably determine what constitutes excess resources and to use only those. A machine that shows ample unused disk space and computing time might be just about to begin generating animation sequences that will consume every bit and microsecond; and in any case, an intruder can't give back your resources when you want them.

- Your Reputation: If an intruder appears on the Internet with your identity, anything he does appears to come from you. What are the consequences?
Most of the time, the consequences are simply that other sites (or law enforcement agencies) begin contacting you about your attempts to break into their systems. Sometimes, however, such impostors cost you much more than lost time: an intruder who actively dislikes you, or who simply takes pleasure in making life difficult for strangers, may send electronic mail or post news messages that purport to come from you. Generally, people who choose to do this are aiming for maximum hatefulness rather than believability; but even if only a few people believe these messages, the cleanup can be long and humiliating. Anything even remotely believable can do permanent damage to your reputation.

Types of Attack

The above targets need protection, but from what? There are many types of attacks against systems, and many ways of categorizing these attacks; but primarily, there are three basic categories, as discussed below:

- Intrusion: This is the most common form of attack, whereby unauthorized people are actually able to use your computers. Most attackers want access as if they were legitimate users. While there are dozens of ways to gain access, a firewall helps prevent intrusions, ideally by blocking all entry attempts that don't supply a valid account name and password. Properly configured, the firewall reduces the number of accounts that are accessible from the outside and that are therefore vulnerable to guesswork or social engineering. A common strategy is to set up the firewall to use one-time passwords (a small sketch of one such scheme follows this list). A firewall also provides a controlled place to log attempts at intrusion, which helps in detecting attacks.

- Denial of Service: This type of attack is aimed entirely at preventing you from using your own computers. Although some cases of electronic sabotage involve the actual destruction or shutting down of equipment or data, most cases simply overwhelm the system, for example by flooding it with electronic mail. However, a clever attacker can also disable services, reroute them, or replace them. Unfortunately, it's close to impossible to avoid all denial of service attacks: the problem is just as likely to occur by accident as on purpose. The best strategy is to set up services so that if one of them is flooded, the rest of your site keeps functioning while you find and fix the problem.

- Information Theft: Some types of attacks allow an attacker to get data without ever having to directly use your computers. Usually, these attacks exploit Internet services that are intended to give out information, by inducing those services to give out more information than was intended or to disseminate it to the wrong people. Such theft doesn't need to be active or particularly technical. Most attacks seek to gain access to your computers by looking for usernames and passwords. Unfortunately, that's the easiest kind of information to obtain when tapping a network: such information occurs quite predictably at the beginning of many network transactions and can be reused in the same form; such network "sniffing" is much easier than tapping a telephone line. A properly configured firewall will guard against divulging more information than you intend, but once you've decided to send information out across the Internet, it's very difficult to protect against its reaching an unintended audience.
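As an illustration, the following is a minimal sketch (in Python) of how a counter-based one-time password scheme can work. It assumes an HMAC-based design with a per-user shared secret and a moving counter; the names are illustrative, not taken from any particular product:

    import hmac
    import hashlib

    def one_time_password(secret: bytes, counter: int) -> str:
        # Derive a short code from the shared secret and a moving counter.
        digest = hmac.new(secret, counter.to_bytes(8, "big"), hashlib.sha1).digest()
        return digest.hex()[:8]  # truncated so it is easy to type

    def verify(secret: bytes, counter: int, supplied: str) -> bool:
        # Each code is good for exactly one login; the server then
        # advances the counter, so a sniffed code cannot be replayed.
        expected = one_time_password(secret, counter)
        return hmac.compare_digest(expected, supplied)

Because the counter advances after every successful login, a password "sniffed" off the network is useless to an attacker; this directly addresses the reuse problem described under Information Theft above.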
Types of Attackers

Whom are we protecting against? Although there are many ways to categorize attackers, they all share certain characteristics. They don't want to be caught, so they try to conceal themselves. If they succeed in gaining access to your system, they will certainly attempt to preserve that access and, if possible, to build additional ways of gaining future access. Most of them have some contact with other people who share these interests (the "underground"), and most will share the information they obtain from attacking your system. The following summary describes the types of attackers that are seen most often:

- Joyriders are bored people looking for amusement. They break in because they think you might have interesting data, or because it would be amusing to use your computers, or because they have nothing better to do. They are curious, but not actively malicious: they are just as interested as you are in having your computers up, running, and available; however, they often damage the system through ignorance or by trying to cover their tracks.

- Vandals are out to do damage, either because they enjoy destroying things or because they don't like you: a successful attack won't be a secret. Fortunately, vandals are fairly rare, and they tend to have short but splashy careers. And while it's nearly impossible to stop a determined vandal, the resultant destruction, although unpleasant, is usually easy to detect and repair because of its directness.

- Score Keepers engage in an updated version of an ancient tradition: they gain bragging rights based on the number and types of systems they've broken into. They don't necessarily want anything you've got, or care in the least about the characteristics of your site. They may or may not do damage on the way through; but if at all possible, they'll use your machines as a platform for attacking others.

- Spies are not common. Most people who break into computers do so for the same reason people climb mountains: because they're there. These folks are not above theft, and they usually steal things that are directly convertible into money or further access. If they find secrets they think they can sell, they may try to do so, but that is not their main business. As far as anyone knows, serious computer-based espionage is much rarer (outside of traditional espionage circles). This type of intrusion is difficult ever to detect, since information theft need not leave any traces. In practical terms, most organizations can't prevent spies from succeeding; but you can ensure that your Internet connection isn't the easiest way for a spy to gather information.

- Accidents or stupid mistakes cause most disasters, much more so than ill will does. Denial of service incidents, for example, frequently aren't attacks at all; similarly, it's not uncommon for companies to accidentally destroy their own data, or to release it to the world. Firewalls are not designed to deal with this kind of problem. Whether people are deliberately attacking your system or simply making mistakes, the results are quite similar; but when you protect yourself against evildoers, you also help protect yourself against the more common, but equally devastating, errors.

Firewall Services

What approaches can you take to protect against these various kinds of attack? As environments grow larger and more diverse, and as securing them on a host-by-host basis grows more difficult, more sites are turning to a network-oriented security model. With such a model, you concentrate on controlling network access to your various hosts and the services that they offer, rather than on securing them one by one.
Network security approaches include building firewalls to protect your internal systems and networks, using strong authentication approaches (such as one-time passwords), and using encryption to protect particularly sensitive data as it transits the network.

Definition

What is an Internet firewall? Basically, it is a protective tool that provides effective network security. In most situations, it is the single most effective way to connect a network to the Internet and still protect that network. In buildings, a firewall is designed to keep a fire from spreading from one part of the building to another. In theory, an Internet firewall serves a similar purpose: it prevents the dangers of the Internet from spreading to your internal network. In practice, it is more like the moat of a medieval castle than the firewall of a modern building: it restricts people to entering at a carefully controlled point; it prevents attackers from getting close to your other defenses; and it restricts people to leaving at a carefully controlled point. Usually, a firewall is a set of hardware components (a combination of routers and host computers) with appropriate software. It is very rarely a single physical entity, although some of the newest commercial products attempt to put everything into the same box; even so, it's not something you can just drop in.

Major Functions

An Internet firewall is most often installed at the point where your protected internal network connects to the Internet. All traffic coming from the Internet or going out from your internal network passes through the firewall, so the firewall has the opportunity to ensure that this traffic is acceptable (per your security policy). Thus, a site's firewall can protect hundreds, thousands, or even tens of thousands of machines against attack from networks beyond its control, regardless of the level of host security on the individual machines. Some of the benefits even extend beyond security, as described below:

- Focus for Security Decisions: All traffic in and out of the network must pass through the firewall, a single, narrow checkpoint. This gives you enormous leverage for network security, because it lets you concentrate your security measures on this checkpoint, precisely where your network connects to the Internet. Focusing your security in this way is far more efficient than spreading security decisions and technologies around your network, trying to cover all the bases in a piecemeal fashion.

- Enforcement of Security Policy: Many of the services that people want from the Internet are inherently insecure. The firewall is the traffic cop for these services: it enforces the site's security policy, allowing only approved services to pass through, and those only within the rules set up for them.

- Logging of Internet Activity: Because all traffic passes through the firewall, the firewall provides a good place to collect information about system and network use, and misuse. As the single point of access, the firewall can record what occurs between the protected network and the external network.

- Limitation of Exposure: Sometimes, a firewall can be used to keep one section of your site's network separate from another section; this keeps problems that impact one section from spreading through the entire network. For example, one section may be more trusted than another, or one section may be more sensitive than another.

Non-Functions

Firewalls offer significant benefits, but they can't solve every security problem.
Like the moat of a medieval castle, a firewall is not invulnerable. It doesn't protect against people who are already inside; it works best if coupled with internal defenses; and, even if you stock it with alligators, people sometimes manage to swim across. A firewall is also not without its drawbacks: building one requires significant expense and effort, and the restrictions it places on insiders can be a major annoyance. Certain threats are outside the control of the firewall; some of these weaknesses are discussed below:

- Can't Guard Against Malicious Insiders: A firewall might keep a system user from being able to send proprietary information out of an organization over a network connection; but so would simply not having a network connection at all. That same user could copy the data onto disk, tape, or paper and carry it out of the building in a briefcase. If the attacker is already inside the firewall, a firewall can do virtually nothing for you. Insider threats require internal security measures, such as host security and user education.

- Can't Protect Non-Firewall Connections: A firewall can effectively control the traffic that passes through it, but there is nothing it can do about traffic that does not pass through it. For example, what if the site allows dial-in access to internal systems behind the firewall? Also, technically expert users or system administrators sometimes set up their own "back doors" into the network (such as a dial-up modem connection), either temporarily or permanently. This is a people-management problem, not a technical one.

- Can't Anticipate New Kinds of Threats: A firewall is designed to protect against known threats. A well-designed firewall may even protect against some new threats (for example, by denying all but a few trusted services, it prevents people from setting up new and insecure ones). However, no firewall can automatically defend against every new threat that arises. Periodically, people discover new ways to attack, using previously trustworthy services or using attacks that simply hadn't occurred to anyone before. You can't set up a firewall once and expect it to protect you forever.

- Can't Stop Viruses: Firewalls can't keep viruses out of a network. Although many firewalls scan all incoming traffic to determine whether it is allowed to pass through to the internal network, the scanning is mostly for source and destination addresses and port numbers, not for the details of the data. Even with sophisticated packet filtering or proxying software, virus protection in a firewall is not very practical: there are simply too many types of viruses and too many ways a virus can hide within data. The most practical way to address this problem is through host-based virus protection software, along with user education concerning the dangers of viruses and the precautions to take against them.

Security Strategies

It is important to understand some of the basic strategies employed in building firewalls and in enforcing security at your site. These are straightforward approaches, as described below:

- Least Privilege: This is perhaps the most fundamental principle of (any kind of) security. Basically, it means that every participating entity (user, administrator, program, system, etc.) should have only the privileges it needs to perform its assigned tasks, and no more.
This is an important principle for limiting your exposure to attacks and the damage they cause, and it suggests that you should explore ways to reduce the privileges required for various operations. However, enforcing this principle can be complex when it isn't already a design feature of the programs and protocols you're using. (A small sketch of the idea follows this list.)

- Defense in Depth: Another principle of (any kind of) security prescribes not to depend on just one security mechanism, no matter how strong it may seem; instead, install multiple mechanisms that back each other up. You don't want the failure of any single security mechanism to compromise your security. As already noted, firewalls are not the complete solution to the whole range of Internet security problems. Any security, even the most seemingly impenetrable firewall, can be breached by attackers who are willing to take enough risk and bring enough power to bear; the trick is to make the attempt too risky or too expensive for the attackers you expect to face. You can do this by adopting multiple mechanisms that provide backup and redundancy for each other: network security (a firewall with multiple overlapping layers), host security, and human security (user education, careful system administration, etc). Each of these is important and can be highly effective, but you shouldn't place absolute faith in any one of them. In situations where the cost is low, you should always employ redundant defenses.

- Diversity of Defense: Just as you may get additional security from using a number of different systems to provide depth of defense, you may also get additional security from using a number of different types of systems. If all of your systems are the same, someone who knows how to break into one of them probably knows how to break into all of them. The idea here is that using security systems from different vendors may reduce the chances of a common bug or configuration error that compromises them all. There is a tradeoff in terms of complexity and cost, however: procuring and installing multiple different systems will be more difficult, take longer, and cost more than procuring and installing a single system (or multiple identical systems); it will also require additional time and effort for your staff to learn how to deal with these different systems. But beware of illusory diversity: simply using different vendors' Unix systems probably won't buy you diversity, because most Unix systems are derived from either the Berkeley or System V source code. There were any number of bugs and security problems in the original releases, and these were propagated into most of the vendor-specific versions of the operating systems. Many of these versions still have bugs and security problems that were first discovered years ago in versions from other vendors, and that have yet to be fixed.

- Choke Point: This strategy forces attackers to use a narrow channel that you can monitor and control. Anyone who is going to attack your site from the Internet must come through that channel, which should be defended against such attacks. You should be watching carefully for such attacks and be prepared to respond if you see them. But a choke point is useless if there's an effective way for an attacker to circumvent it: for example, a second Internet connection, even an indirect one like a connection to another company that has its own Internet connection elsewhere, can be a threatening breach.
While a choke point may seem like putting all your eggs in one basket, and thus a bad idea, the key is that it's a basket you can guard carefully. The alternative is to split your attention among many different possible avenues of attack; then the chances are that you won't be able to adequately defend any of the avenues, or that someone will slip through one of them while you're busy defending another (where a diversion may have been staged to draw your attention away).

- Weakest Link: A fundamental tenet of security is that a chain is no stronger than its weakest link, or that a wall is only as strong as its weakest point. Smart attackers are going to seek out that weak point and concentrate their attentions there. You need to be aware of the weak points in your defense so that you can either take steps to eliminate them or carefully monitor those you can't eliminate. You should try to pay attention evenly to all aspects of your security, so that there is no large difference in how insecure one area is as compared to another. There will always be a weakest link, however; the trick is to make that link strong enough, and to keep its strength proportional to the risk. In contrast, host-based security models (without a firewall) suffer from a particularly nasty interaction between choke points and weak links: in such models, the lack of a choke point means the number of links is very large, and many of those links can be quite weak indeed.

- Fail-Safe Stance: Another fundamental principle is that, to the extent possible, systems should fail safe; that is, if they're going to fail, they should fail in such a way that they deny access to an attacker, rather than letting the attacker in. While the failure may also deny access to legitimate users (until repairs are made), this is usually an acceptable tradeoff. For example, if a packet filtering router goes down, it doesn't allow any packets through; if a proxying program dies, it provides no service. The major application of this principle in network security is in choosing your site's stance with respect to security decisions and policies. There are two fundamental choices: default denial (specify only what you allow, and prohibit everything else), or default permission (specify only what you prohibit, and allow everything else). It's important to make your stance clear to users and management, and to explain the reasons behind it, since they are likely to complain about either decision.

- Universal Participation: To be fully effective, most security systems require the universal participation (or at least the absence of active opposition) of a site's personnel. If someone can simply opt out of the security mechanisms, then an attacker may be able to attack your site by first attacking that exempt person's system, and then attacking other systems from the inside. Even mundane forms of rebellion will ruin your security; you need everyone to report strange happenings that might be security-related, since no one person can see everything. You need people to choose good passwords, to change them regularly, and not to give them out to their friends.

- Simplicity: This is a valuable security strategy for two reasons. First, keeping things simple makes them easier to understand; if you don't understand something, you can't really know whether or not it's secure. Second, complexity provides nooks and crannies for all sorts of things to hide in; complex programs have more bugs, any of which could be a security problem.
Also, once people begin to expect a given system to behave erratically, they'll accept almost anything from it; this kills any hope of their recognizing and reporting genuine security problems when such problems arise.
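To make the Least Privilege strategy concrete, here is a minimal sketch (in Python, for a Unix-like system) of one common application of the principle: a network daemon acquires the one privileged resource it needs, then permanently drops root before touching any untrusted input. The port number and the "nobody" account are illustrative assumptions, not requirements:

    import os
    import pwd
    import socket

    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.bind(("0.0.0.0", 25))       # binding a port below 1024 requires root
    sock.listen(5)

    nobody = pwd.getpwnam("nobody")  # an unprivileged account
    os.setgid(nobody.pw_gid)         # drop the group first...
    os.setuid(nobody.pw_uid)         # ...then the user; root is now gone for good

    # From here on, any bug an attacker exploits yields only the
    # privileges of "nobody", not those of root.

The same reasoning applies at every level: users, administrators, and programs alike should hold no more privilege than their tasks require.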
Design Strategies

Until recently, a site that wanted a firewall had little choice but to design and build it itself (perhaps with its own staff, or perhaps by hiring a consultant or contractor). Over the last few years, however, more and more commercial firewall offerings have reached the market. These products continue to grow in number and functionality at an astounding rate, and many sites may find that one of these products suits their needs. But even if you decide to buy a firewall, you still need to understand a fair amount about how firewalls are built and how they work, in order to make an informed purchasing decision. Many sites spend as much effort evaluating commercial firewall products as they would spend building their own, or more; even so, it's not necessarily easier to buy than it is to build. Also, buying a firewall shouldn't make you reluctant to supplement it with freely available tools, just as building one shouldn't make you reluctant to supplement it with purchased tools. Several common design approaches are discussed below.

Bastion Hosts

A bastion host is your public presence on the Internet, much like the lobby of a building. By design, a bastion host is highly exposed, because its existence is necessarily known to the Internet; this is where firewall builders and managers must concentrate their security efforts. There are two basic principles for designing and building a bastion host:

- Keep It Simple: The simpler your bastion host is, the easier it is to secure. Therefore, it should do as little as possible: it should provide the smallest set of services, with the least privileges, that it possibly can, while still fulfilling its role.

- Be Prepared for Compromise: Despite your best efforts to secure the bastion host, break-ins can occur. Don't be naive about this: only by anticipating the worst and planning for it will you be most likely to avert the problem. This is worth emphasizing, because the bastion host is the machine that's most accessible from the outside world. If the bastion host is broken into, you don't want that break-in to lead to a compromise of the entire firewall. You can prevent this by not letting internal machines trust the bastion host any more than is absolutely necessary for it to function. You will need to examine each service the bastion host provides to internal machines and determine, on a service-by-service basis, how much trust and privilege each service really requires.

If at all possible, don't allow any user accounts on the bastion host. Keeping such accounts off will give you the best security, since they provide relatively easy avenues of attack. Also, supporting user accounts in any useful fashion requires the bastion host to enable services that could otherwise be disabled; every available service provides another avenue of attack, through software bugs or configuration errors. Further, it's usually easier to discern whether everything is running normally on a machine that doesn't have user accounts muddying the waters. If you must allow such accounts on the bastion host, keep them to a minimum: add accounts individually, monitor them carefully, and regularly verify that they're still needed. Once you've made these decisions, you can use a number of mechanisms to enforce them. For example, you might install standard access control mechanisms (passwords, authentication devices, etc) on the internal hosts, or you might set up packet filtering between the bastion host and the internal hosts.

Packet Filtering

Packet filtering is a network security mechanism that works by controlling what data can flow to and from a network. Packet filtering systems route packets between internal and external hosts, but they do it selectively: they allow or block certain types of packets in a way that reflects a site's security policy. The type of router used in a packet filtering firewall is known as a "screening router". An ordinary router simply examines the destination address of each packet and chooses the best way it knows, if any, to send that packet towards its destination. A screening router looks at packets more closely: in addition to determining whether or not it is able to route a packet, a screening router also determines whether or not it ought to do so, according to the security policy that it has been configured to enforce (a small rule-table sketch follows the lists below).

Advantages

Packet filtering offers a number of advantages, as summarized below:

- Leverage: This is the key advantage of packet filtering. Since a router naturally presents a useful choke point for all the traffic entering or leaving a network, packet filtering allows you to provide, in a single place, particular protections for an entire network.

- User Cooperation Not Required: Packet filtering doesn't require any custom software or configuration of client machines, nor does it require any special training or procedures for users. When a screening router decides to allow a packet through, the router is indistinguishable from a normal router. This transparency means that packet filtering can be done without the cooperation, and often without the knowledge, of users: to make it work, they don't need to learn anything new, and you don't need to depend on them to do anything special.

- Wide Availability: Packet filtering capabilities are available in many hardware and software routing products, both commercial and freely available over the Internet. Most sites already have packet filtering capabilities in the routers they use.

Disadvantages

Although packet filtering provides a variety of advantages, there are also some disadvantages:

- Imperfect Tools: Despite its widespread availability, packet filtering is still not a perfect tool. Many products share, to a greater or lesser degree, some common limitations: difficulty in configuring the filtering rules, difficulty in testing the rules once configured, incompleteness of desired capabilities, and security vulnerabilities due to bugs.

- Some Protocols Not Well-Suited: Even with perfect implementations, you will find that some protocols just aren't well suited to security via packet filtering. These include the Unix "remote" commands and RPC-based protocols such as NFS.

- Some Policies Not Readily Enforceable: The information available to a screening router doesn't allow you to specify some filtering rules that you might like to have. For example, packets say what host they come from, but generally not what user; therefore, you can't enforce restrictions on particular users. Similarly, packets say what port they're going to, but not what application.
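As promised above, here is a minimal sketch (in Python) of the decision a screening router makes. The rule table, field names, and addresses are illustrative assumptions; real filters match on more fields (protocol, flags, interface):

    from dataclasses import dataclass
    from ipaddress import ip_address, ip_network
    from typing import Optional

    @dataclass
    class Rule:
        action: str              # "allow" or "block"
        src: str                 # source network
        dst: str                 # destination network
        dst_port: Optional[int]  # None matches any port

    RULES = [
        # Inbound mail may reach the bastion host only.
        Rule("allow", "0.0.0.0/0", "192.168.1.2/32", 25),
        # Default denial: everything not explicitly allowed is blocked.
        Rule("block", "0.0.0.0/0", "0.0.0.0/0", None),
    ]

    def decide(src: str, dst: str, dst_port: int) -> str:
        for rule in RULES:  # first matching rule wins, as in many routers
            if (ip_address(src) in ip_network(rule.src)
                    and ip_address(dst) in ip_network(rule.dst)
                    and rule.dst_port in (None, dst_port)):
                return rule.action
        return "block"  # fail-safe stance if no rule matches

Note how the table embodies both the choke point and the fail-safe stance discussed earlier: a single ordered list of rules, ending in default denial.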
Logging

Regardless of whether a packet is forwarded or dropped, you might want the router to log the action that was taken; this is especially true if a packet is dropped because it violates the packet filtering rules. In that case, you would like to know what is being tried that isn't allowed. One case deserves special mention. If the sole router between your internal network and the external world receives a packet from the internal interface carrying an internal source address, there is no problem: all packets coming from the inside will have internal source addresses. However, if the router receives a packet from the external interface carrying an internal source address, it means either that someone is forging the packet (probably in an attempt to circumvent security), or that there is something seriously wrong with your network configuration. Either way, such packets should be logged and treated as urgent issues (a small sketch of this check follows).
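Here is a minimal sketch (in Python) of that spoofed-packet check; the interface name and the internal network range are illustrative assumptions:

    import logging
    from ipaddress import ip_address, ip_network

    INTERNAL_NET = ip_network("192.168.0.0/16")  # your internal addresses

    def check_packet(interface: str, src: str) -> str:
        # A packet arriving on the *external* interface should never
        # claim an internal source address; if it does, it is either
        # forged or a sign of serious misconfiguration.
        if interface == "external" and ip_address(src) in INTERNAL_NET:
            logging.critical("spoofed internal source %s on %s", src, interface)
            return "block"
        return "allow"  # subject, of course, to the rest of the rules

Many screening routers can express this kind of rule directly as a filter on the inbound external interface, which is why it is worth calling out separately.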
Filtering by Address

The simplest, although not the most common, form of packet filtering is filtering by address. This lets you restrict the flow of packets based on the source and/or destination addresses of the packets, without needing to consider what protocols are involved. Such filtering can be used to allow approved external hosts to talk to specified internal hosts, for example, or to prevent an attacker from injecting forged packets into your network. However, it's not necessarily safe to trust source addresses, because they can be forged. Unless you use some kind of cryptographic authentication between you and the host you want to talk to, you won't know whether you're really talking to that host or to some other machine that is pretending to be it. The filters will help you when an external host is claiming to be an internal host, but they can't do anything about an external host claiming to be a different external host. If you trust the machines at both ends of a connection but not the path between them, you can use encryption to give you a secure connection over an insecure path. Unfortunately, there are no widespread or commonly available tools for doing this yet, although a number of sites are experimenting with ad hoc solutions, and commercial solutions are beginning to appear.

Filtering by Service

Blocking incoming forged packets is just about the only common use of filtering done solely by address. Most other uses of packet filtering involve filtering by service, which is somewhat more complicated. Many services can be identified via a packet's TCP port number, since these typically follow well-known conventions. But making filtering decisions based on the source port is not without risks. There is one fundamental problem with this type of filtering: you can trust the source port only as much as you trust the source machine. Suppose you mistakenly assume the source port is associated with a particular service. Someone who is in control of the source machine could run whatever client or server they wanted on a "source port" that you're allowing through your carefully configured packet filtering system. Furthermore, as we've already seen, you can't necessarily trust the source address to tell you for certain what the source machine really is: you can't tell for sure whether you're talking to the real machine with that address, or to an attacker who is pretending to be that machine. What can be done about this situation? You should restrict the local port numbers as much as possible, regardless of how few remote ports you allow to access them. Your concern is to limit inbound connections to only those ports where you are running trustworthy servers, and to be sure that your servers are genuinely trustworthy. But since many services use random ports, you will often need to accept inbound packets for ports that might have untrustworthy servers on them.

Proxy Services

Proxying provides Internet access to a single host (or to a very small number of hosts), while appearing to provide access to all of your hosts. The hosts that do have access act as proxies for the machines that don't, performing what those machines want done. Proxying doesn't require any special hardware, although it does require special software for most services. Proxy services are specialized application or server programs that run on a firewall host: either a "dual-homed host" with one interface on the internal network and one on the external network, or some other bastion host that has access to the Internet and is accessible from the internal machines. These programs take users' requests for Internet services (such as FTP and Telnet) and forward them, as appropriate according to the site's security policy, to the actual services. Proxy services sit, more or less transparently, between a user on the inside (on the internal network) and a service on the outside (on the Internet). Instead of talking to each other directly, each talks to a proxy; behind the scenes, the proxies handle all the communication between users and Internet services (a minimal sketch appears after the lists below).

Without proxies, users who want to access Internet services would need to log in to the dual-homed host, do all their work from there, and then somehow transfer the results of their work back to their own workstations. At best, this multiple-step process would annoy most users. The problem is compounded at sites that have multiple operating systems: the dual-homed host will probably be completely foreign to you; you'll be limited to using whatever tools are available there, and these may be completely unlike (or may seem inferior to) the tools you use on your own systems. Worse, the dual-homed host itself usually doesn't provide adequate security: it's almost impossible to secure a machine with many users, particularly when those users are explicitly trying to get to the external world. You can't effectively limit the available tools, because your users can always transfer tools from internal machines of the same type. For example, on a dual-homed host you can't guarantee that all file transfers will be logged, because people can use their own file transfer agents that don't do logging.

Advantages

Proxy systems thus avoid both user frustration and the insecurities of a dual-homed host. Instead of requiring users to deal directly with the dual-homed host, proxy systems allow all interaction to take place behind the scenes. The advantages are summarized below:

- Transparency: This is the major benefit of proxy services. To the user, a proxy server presents the illusion that the user is dealing directly with the real server; to the real server, the proxy server presents the illusion that the real server is dealing directly with a user on the proxy host (as opposed to the user's real host).

- Effective Logging: Because proxy servers understand the underlying protocol, they allow logging to be performed in a particularly effective way.
For example, instead of logging all of the data transferred, an FTP proxy server logs only the commands issued and the server responses received; this results in a much smaller and more useful log.

Disadvantages

Although proxy services provide these advantages, there are also a number of disadvantages:

- Marketplace Lag: Although proxy software is widely available for the older and simpler services like FTP and Telnet, proven software for newer or less widely used services is harder to find. There's usually a distinct lag between the introduction of a service and the availability of proxying servers for it; the length of the lag depends primarily on how well the service is designed for proxying.

- May Need a Different Server per Service: You may need a different proxy server for each protocol, because the proxy server has to understand the protocol in order to determine what to allow and disallow, and in order to masquerade as a client to the real server and as the real server to the proxy client.

- Required Modifications: Except for a few services designed for proxying, proxy servers require modifications to clients and/or user procedures. Either kind of modification has drawbacks; people can't always use the readily available tools with their normal instructions. Because of these modifications, proxied applications don't work as well as non-proxied applications: they tend to bend protocol specifications, and some clients and servers are less flexible than others.

- Not Workable for Some Services: Proxying relies on the ability to insert the proxy server between the client and the real server, which requires relatively straightforward interaction between the two. Services with complicated and messy interactions may never be possible to proxy.

- Vulnerable to Protocol Weaknesses: As a security solution, proxying relies on the ability to determine which operations in a protocol are safe, and not all protocols provide easy ways to do this. The X Window System protocol, for example, provides a large number of unsafe operations, and it's difficult to make it work while removing them. HTTP is designed to operate effectively with proxy servers, but it's also designed to be readily extensible, and it achieves that goal by passing data that's going to be executed. It's impossible for a proxy server to protect you from such data: doing so would require the proxy to understand the data being passed and to determine whether it was dangerous or not.
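As promised above, here is a minimal sketch (in Python) of the generic relay at the heart of a proxy server; the addresses and ports are illustrative assumptions, and a real proxy server would additionally parse the protocol in order to enforce policy and log commands, as described under Effective Logging:

    import socket
    import threading

    LISTEN_ADDR = ("0.0.0.0", 8025)          # where internal clients connect
    REAL_SERVER = ("mail.example.com", 25)   # the external service (illustrative)

    def pipe(src: socket.socket, dst: socket.socket) -> None:
        # Copy bytes one way until the connection closes. A real proxy
        # would inspect the traffic here, refusing unsafe operations.
        try:
            while data := src.recv(4096):
                dst.sendall(data)
        except OSError:
            pass
        finally:
            dst.close()

    def handle(client: socket.socket) -> None:
        server = socket.create_connection(REAL_SERVER)
        threading.Thread(target=pipe, args=(client, server), daemon=True).start()
        pipe(server, client)

    listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    listener.bind(LISTEN_ADDR)
    listener.listen(5)
    while True:
        conn, _ = listener.accept()
        threading.Thread(target=handle, args=(conn,), daemon=True).start()

To each side, the relay is nearly invisible: the client believes it is talking to the real server, and the real server sees connections only from the proxy host, which is exactly the transparency described above.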
Architectural Choices

Your firewall will probably have multiple layers and comprise a variety of components. The text below presents some basic ways to assemble various firewall components; of course, there are many more complex variants:

- Multiple Packet Filters: One type of architecture employs multiple packet filters. It is set up that way because the two filters need to do different things, but it's quite common to set up the second one to reject packets that the first one should have rejected already. If the first filter is working properly, those packets will never reach the second; however, if there's some problem with the first, then hopefully you'll still be protected by the second.

- Dual-Homed Host: This type of architecture is built around the dual-homed host, a computer that has at least two network interfaces. It sits between the Internet and the internal network but, as a firewall, has the normal routing function disabled. Dual-homed hosts can provide a very high level of control: if you aren't allowing packets to flow between the external and internal networks at all, you can be sure that any packet found on the internal network with an external source is evidence of some kind of security problem. On the other hand, as already discussed, this architecture can provide services only by proxying them or by requiring users to log in to the dual-homed host directly.

- Screened Host: Whereas a dual-homed host architecture provides services from a host that's attached to multiple networks (but that has routing disabled), a screened host architecture provides services from a bastion host that's attached only to the internal network, using a separate screening router to connect to the Internet. The primary security comes from the router's packet filtering; for example, the filtering prevents people from going around proxy servers to make direct external connections. The packet filtering is set up such that the bastion host is the only internal system that hosts on the Internet can open connections with (for example, to deliver incoming mail). Because this architecture allows packets to flow from the Internet to the internal network, it may seem more risky than a dual-homed host architecture, where no external packet can reach the internal network. In practice, however, the dual-homed host architecture is surprisingly prone to failures that let packets actually get through. Furthermore, it's easier to defend a router, which provides a very limited set of services, than it is to defend a host. For most purposes, the screened host architecture provides both better security and better usability than the dual-homed host architecture; the major disadvantage is that if an attacker manages to break into the bastion host, no network security remains between it and the remaining internal hosts.

- Screened Subnet: This architecture adds an extra layer of security to the screened host architecture, by adding a perimeter network that further isolates the internal network from the Internet. Why do this? By their nature, bastion hosts are the most vulnerable machines on your network; despite your best efforts to protect them, they are the machines most likely to be attacked, because they're the machines that can be attacked. If someone breaks into the bastion host in a screened host architecture, he's hit the jackpot; but by isolating the bastion host on a perimeter network, you can reduce the impact of a break-in there. It is no longer an instantaneous jackpot: it gives an intruder some access, but not unlimited access.

These are just a few of the architectural possibilities. Systems that might be called third-generation firewalls are also starting to become available: these combine the features and capabilities of packet filtering and proxy systems into something more than both. But while firewall technologies are changing, so are the underlying technologies of the Internet itself (IP, for example). These changes will necessarily lead to corresponding changes in the firewalls of the future.

Conclusion

The Internet is a rapidly evolving technology that provides easy access to global information and the ability to publish information in revolutionary ways; but at the same time, it also provides the ability to pollute and destroy information in revolutionary ways.
This paper has described one way to balance the advantages and the risks, in order to participate in the Internet while still protecting yourself: Internet firewalls. The discussion provided a detailed overview that focused on the variety of relevant issues, recommended strategies for coping with these, and compared their respective advantages and disadvantages. However, firewall technologies continue to change rapidly, and so do the underlying technologies of the Internet itself; these changes will require corresponding changes in the firewalls of the future. Some of these improvements have the potential to cause profound changes in how firewalls are constructed and operated; however, it's far too soon to say exactly what that impact will be.

Glossary

Definitions

A variety of terms are currently in use for referring to firewall concepts and technology. Unfortunately, there is no completely consistent terminology for firewall architectures and components: different people commonly use terms in different or conflicting ways. Also, these same terms sometimes have other meanings in other networking fields; hence, the definitions below assume a firewall context.

Internet: The global internet based on the Internet (TCP/IP) architecture, connecting millions of hosts worldwide.

firewall: A component or set of components that restricts access between a protected network and the Internet, or between other sets of networks.

host: A computer system attached to one or more networks, that supports users and runs application programs.

bastion host: A computer system that must be highly secured because it is vulnerable to attack, usually because it is exposed to the Internet and is a main point of contact for users of internal networks. It gets its name from the highly fortified projections on the outer walls of medieval castles.

dual-homed host: A general-purpose computer system that has at least two network interfaces (or homes).

packet: The fundamental unit of communication on the Internet.

packet filtering: The action a device takes to selectively control the flow of data to and from a network. Packet filters allow or block packets, usually while routing them from one network to another (most often from the Internet to an internal network, and vice versa). To accomplish packet filtering, a set of rules is set up that specifies what types of packets are allowed (e.g., those to or from a particular IP address or port) and what types are to be blocked. Packet filtering may occur in a router, in a bridge, or on an individual host.

screening router: A router used in a packet filtering firewall, to either forward or block packets according to the site's security policy, which it enforces.

perimeter network: A network added between a protected network and an external network, in order to provide an additional layer of security.

proxy server: A program that deals with external servers on behalf of internal clients. Proxy clients talk to proxy servers, which relay approved client requests to real servers, and then relay the answers back to the clients.
protocol: A specification of an interface between modules running on different machines, as well as the communication service that those modules implement.

Acronyms

FTP (File Transfer Protocol): The standard protocol of the Internet architecture for transferring files between hosts; built on top of TCP.

HTTP (Hypertext Transfer Protocol): An application-level protocol, based on a request/reply paradigm, used in the World Wide Web; it uses TCP connections to transfer data.

IP (Internet Protocol): A protocol that provides a connectionless, best-effort delivery service of packets across the Internet.

NFS (Network File System): A popular distributed file system, developed by Sun Microsystems.

RPC (Remote Procedure Call): A communication paradigm in which a program invokes a procedure on a remote machine as if it were local; NFS, for example, is built on RPC.

TCP (Transmission Control Protocol): A connection-oriented transport protocol of the Internet architecture that provides a reliable byte-stream delivery service.

WWW (World Wide Web): A hypermedia information service, built on the Internet.