15.10 Using UDP rather than TCP would not be a good idea: the web application really does need TCP's reliability, and with UDP we would just be forced to reimplement that reliability inside HTTP itself. We would rather push that ugliness down into the transport layer where it belongs.
    A better approach is the one taken by the designers of HTTP 1.1: they made persistent connections (the "keep-alive" mechanism) the default, so that a single TCP connection can fetch multiple objects without tearing down and re-establishing the connection for each object.
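    The effect of a persistent connection is easy to see with Python's standard library. In this sketch (the `Handler` class and the local toy server are my own stand-ins for a real web server, not anything from the protocol itself), two objects are fetched over one TCP socket:

```python
import http.client
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

# Tiny local server standing in for a web server with multiple objects.
class Handler(BaseHTTPRequestHandler):
    protocol_version = "HTTP/1.1"   # persistent connections by default

    def do_GET(self):
        body = f"object at {self.path}".encode()
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):   # silence per-request logging
        pass

server = HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# One TCP connection, two requests: HTTPConnection keeps the socket
# open between requests because the server speaks HTTP/1.1.
conn = http.client.HTTPConnection("127.0.0.1", server.server_port)
conn.request("GET", "/a")
first = conn.getresponse().read()
sock_after_first = conn.sock        # remember the underlying socket
conn.request("GET", "/b")           # no new TCP handshake here
second = conn.getresponse().read()
reused = conn.sock is sock_after_first
conn.close()
server.shutdown()

print(first.decode())   # object at /a
print(second.decode())  # object at /b
print(reused)           # True: same socket served both objects
```

    With an HTTP/1.0 server that closed the connection after every response, the second request would have to pay for a fresh TCP setup and teardown.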

    17.5 Compare local/client disk block caching with remote/server disk block caching.
    AFS is the epitome of client caching. It tries to keep as much as possible on the local disk of the client, on the theory that a local disk access is likely to be faster than a network access plus a disk access on the remote end. In order to make consistency reasonable under these conditions, it implements session semantics: the last client to close the file wins (in contrast to other file systems, in which the last client to write the file wins).
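    The "last close wins" rule can be sketched with a toy model (the `Server` and `Session` classes here are hypothetical illustrations, not actual AFS code): open() takes a whole-file snapshot, write() touches only the local copy, and close() writes the whole file back to the server.

```python
# Toy model of AFS-style session semantics: each client works on a
# private copy fetched at open time; close() writes the entire copy
# back, so the client that closes LAST determines the final contents.

class Server:
    def __init__(self):
        self.files = {}

class Session:
    def __init__(self, server, name):
        self.server, self.name = server, name
        self.copy = server.files.get(name, "")    # whole-file fetch at open

    def write(self, data):
        self.copy = data                          # local only; server unaffected

    def close(self):
        self.server.files[self.name] = self.copy  # whole-file write-back

srv = Server()
a = Session(srv, "f")
b = Session(srv, "f")
a.write("from A")
b.write("from B")
b.close()
a.close()              # A closes last, so A's version wins
print(srv.files["f"])  # from A
```

    Under "last write wins" semantics the server copy would instead end up as "from B", since B's write happened after A's.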
    At the opposite end of the spectrum is a system like the network disk system Brian presented at the beginning of week 9 (I think). In the network disk system, the file system treated access to remote file blocks completely separately from access to local blocks, and all caching was done remotely by the system which "owned" that piece of data.
    Obviously, from a latency point of view, the closer the data is cached to the client, the better. However, as soon as we push the data closer to the client, consistency becomes a thornier issue. The NFS approach to consistency is to have every client pester the server very frequently, polling for changes to a file's attributes whenever its cached copy of them gets stale. This results in a lot of unnecessary network traffic and server load, even when nothing has changed. The AFS approach is just to redefine consistency until it's weak enough that it's easy to handle.
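    The polling cost can be sketched the same way (again a toy model with made-up names and a made-up 3-second attribute timeout, not the real NFS protocol): the client re-fetches the file's modification time whenever its cached attributes are older than the timeout, paying a server round trip even though the file never changes.

```python
# Toy model of NFS-style polling consistency: the client caches data
# but revalidates it against the server's attributes on a timeout.

class Server:
    def __init__(self, data):
        self.data, self.mtime = data, 0
        self.getattr_calls = 0      # counts round trips to the server

    def getattr(self):
        self.getattr_calls += 1
        return self.mtime

class Client:
    TIMEOUT = 3.0                   # seconds between attribute checks

    def __init__(self, server):
        self.server = server
        self.cache = None
        self.cached_mtime = None
        self.checked_at = -float("inf")

    def read(self, now):
        if now - self.checked_at >= self.TIMEOUT:   # poll the server
            mtime = self.server.getattr()
            self.checked_at = now
            if mtime != self.cached_mtime:          # refetch only on change
                self.cache, self.cached_mtime = self.server.data, mtime
        return self.cache

srv = Server("v1")
cli = Client(srv)
for t in range(10):         # ten reads over ten simulated seconds
    cli.read(t)
print(srv.getattr_calls)    # 4: four polls even though nothing changed
```

    An AFS-style design avoids this standing load by letting clients keep using their cached copies until told otherwise, at the price of the weaker session semantics described above.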