Development of the Domain Name System, Mockapetris and Dunlap

Perhaps the most interesting aspect of this paper is simply the age of the DNS. When it was written, the DNS had essentially been implemented, and the authors are able to look back on the design decisions and the experience of writing and deploying it; we, in turn, are able to look back another 15 years and see how it has developed from their vision (which is to say, hardly at all).

The DNS is a fantastic, and often under-appreciated, example of a large distributed system. It shows how hierarchy can be used to improve administration and how caching can be used to make a simple system work well in practice. As I see it, there are two important requirements-type points and four significant implementation-related points to consider.

Requirements:

* DNS needs to handle distributed administration: no one person can hope to deal with all the updates and management of a name-to-address mapping system for the internet.

* Names should be independent of network location, although the previous point essentially implies that names are tied to administrative location, which has turned out to be fine.

Implementation ideas:

* Names are cast into a variable-depth hierarchy. This allows names to expand to accommodate the wide variation in entity sizes, and means that we don't have to fix too much about the way names are laid out. As a related issue, the DNS sort of assumes that the name hierarchy will map to the hierarchy of servers and administration. These don't have to be the same, but the system is easiest to understand when they all match.

* Caching (with TTL-based consistency) reduces the load on servers, and works very well with the hierarchy: the top-level domains are the most likely to be cached, and they can have long TTLs because they change less often than the fine-grained information. Caching pushes the load towards the lower levels, where there are the resources to handle it (resources due to massive parallelism; the authors point out that most of the lower-level servers are not nearly as well built and located as the root servers).

* DNS separates the resolver from the client. This helps with caching (because we can place resolvers where they serve many clients), but it also reflects the internet as it was in 1985: it let low-performance PCs participate without being bogged down by taking part in the full protocol.

* Datagrams are used rather than TCP. This buys much better performance, as long as requests and responses are small enough to fit in a single packet for atomicity. (The authors set aside the issue of zone transfers, which are large.) It also means the DNS is built around an unreliable model, which keeps things simpler, especially with respect to server failover.

I thought it was interesting that the root servers in 1985 were built with implementation diversity, at both the OS and the DNS server levels. I wonder if that is still the case.

I also thought the whole subject of how implementors react to protocols was interesting. This is tied to the suggestion that a successful internet protocol is one that can be easily implemented by average (or at worst slightly above average) programmers, and which those programmers will immediately believe they can implement (since some things look discouragingly hard even though they are not). The model of the internet at the time (perhaps less so today) was that protocols succeeded because people picked them up and implemented them on a variety of systems. But what struck me more was the way people would implement the protocol until it appeared to work, and not bother tuning or investigating to see whether the true behavior matched their mental model.
This is especially bad with a system like the DNS, where the simple fallback case is hopelessly inefficient and performance tuning is what makes the average behavior acceptable.

Relevance today is an interesting issue. Of course the DNS is relevant, in that it is a basic protocol that runs the internet. But it's interesting to see how little it has changed over the years, and in some ways that is a sign of a lack of relevance for the academic work, because the DNS was designed to be flexible and extensible. The paper discusses the development of MX records, but looking back 15 years later we see that that was basically it. New protocols have tended to use naming conventions (e.g. www.domain) rather than DNS extensions.
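The caching idea discussed above, with long TTLs for stable top-level data and short TTLs for fine-grained leaf records, can be sketched as a tiny TTL cache. This is a minimal illustration rather than real resolver code; the names, addresses, and TTL values are invented, and the clock is injectable only so the sketch is easy to test.

```python
import time


class TTLCache:
    """A name -> value cache where each record expires after its own TTL,
    in the spirit of a DNS resolver cache."""

    def __init__(self, clock=time.monotonic):
        self._clock = clock   # injectable clock so the sketch is testable
        self._store = {}      # name -> (value, expiry time)

    def put(self, name, value, ttl):
        """Store a record that remains valid for `ttl` seconds."""
        self._store[name] = (value, self._clock() + ttl)

    def get(self, name):
        """Return the cached value, or None on a miss or an expired record."""
        entry = self._store.get(name)
        if entry is None:
            return None       # miss: a real resolver would query upward
        value, expiry = entry
        if self._clock() >= expiry:
            del self._store[name]  # expired: must re-ask the authority
            return None
        return value


# Usage with a fake clock (all names, addresses, and TTLs are made up):
now = [0.0]
cache = TTLCache(clock=lambda: now[0])
cache.put("edu.", "ns.tld-server.example", ttl=172800)   # TLD data: two days
cache.put("host.cs.example.edu.", "10.0.0.7", ttl=300)   # leaf data: 5 minutes
now[0] += 600                                            # ten minutes pass
print(cache.get("edu."))                   # still cached
print(cache.get("host.cs.example.edu."))   # expired, so None
```

The asymmetry the paper relies on falls out directly: after ten simulated minutes the top-level record is still served from cache while the leaf record has expired, so repeated lookups load the (well-provisioned, highly parallel) lower levels rather than the root.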