Review of "End-to-End Arguments in System Design"

From: Ethan Katz-Bassett (ethan@cs.washington.edu)
Date: Wed Oct 13 2004 - 00:27:56 PDT

  • Next message: Jonas Lindberg: "End-To-End Arguments in System Design"

    This paper lays out the authors' arguments in favor of implementing
    functions end-to-end and against implementing them at low levels of the
    network. The demands of an individual application shape its requirements
    to such a degree that the necessary functions must be implemented by the
    application itself. Furthermore, moving those functions to any lower level
    introduces the possibility of error or of interference with the
    application's requirements. The authors recommend low-level implementation
    only as a possible performance enhancement. The paper presents a design
    principle rather than a specific development. I enjoyed the paper for its
    reasoned, coherent approach. The file transfer
    example clearly illustrates the principles of end-to-end arguments. The
    short examples at the end also help reinforce the arguments. I had never
    before seen the end-to-end argument applied outside of the network domain,
    as in the RISC example. I am curious to see where else the argument has
    been knowingly applied.
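
    The file transfer example can be made concrete. Below is a minimal sketch
    (in Python, which is my choice, not the paper's) of the end-to-end check:
    whatever reliability the transfer path itself provides, only a checksum
    computed at the two endpoints, over the actual files on disk, can confirm
    that the transfer succeeded.

```python
import hashlib

def sha256_of(path):
    """Whole-file checksum: the end-to-end integrity check."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def transfer(src, dst):
    """Stand-in for any transfer mechanism; whatever reliability measures
    the path applies internally are invisible here and do not matter to
    the final check."""
    with open(src, "rb") as fin, open(dst, "wb") as fout:
        fout.write(fin.read())
    # The receiver recomputes the checksum over what actually landed on
    # disk and compares it with the sender's value.
    return sha256_of(src) == sha256_of(dst)
```

    Per the paper, a link-level checksum could still be added purely as a
    performance enhancement, to catch errors early; it would not replace this
    final endpoint-to-endpoint comparison.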


    The paper mentions a "host sophisticated enough that ... it also accepts
    responsibility for guaranteeing that the message is acted upon;" I do not
    understand how such a guarantee is enforceable.


    Obviously, QoS has been and remains a hot topic in networks. QoS seems to
    have a complicated relationship with end-to-end arguments. On the one hand,
    pushing everything to the end hosts renders some service models impossible.
    On the other hand, putting services into the network increases the amount of
    processing and leads to the sorts of problems outlined in the paper. A
    system in which an application informs the network of its specific needs
    could reconcile QoS with end-to-end concerns, but would be complicated.
    Presumably, if the Internet allowed different classes of service, we might
    achieve a middle ground of sorts (in which processes could pick a best match
    from a set of service-types); I believe the IP header has space for this,
    but that most routers do not implement it. As an example, realtime
    streaming video needs a certain guaranteed throughput to operate correctly.
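
    The header space mentioned above is, as I understand it, the IPv4
    Type-of-Service byte, whose upper six bits are now defined as the DSCP
    field. A hedged sketch of an end host marking its realtime traffic (on
    Linux, via the IP_TOS socket option; the network remains free to ignore
    the marking, which matches the observation that most routers do not act
    on it):

```python
import socket

# DSCP "Expedited Forwarding" (code point 46) is commonly suggested for
# realtime traffic such as streaming video or voice. The DSCP occupies
# the upper six bits of the old ToS byte, hence the shift by two.
DSCP_EF = 46
TOS_EF = DSCP_EF << 2  # 0xB8 as a ToS byte value

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
try:
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, TOS_EF)
except (AttributeError, OSError):
    pass  # IP_TOS is not available on every platform
```

    The end host states its preference; whether any router along the path
    honors it is exactly the middle-ground question raised above.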


    The end-to-end argument helped get the Internet to where it is today. By
    assuming little at a low level about the services processes want, IP
    provides a vehicle onto which new technologies with different demands are
    constantly added. End host processes do not know the specifics of the
    protocols used by the networks over which their data will travel. In
    general, this fact argues for end-to-end implementations: because the
    applications do not know what happens in the middle, they must provide any
    guarantees themselves. In certain cases, as we've discussed, the networks
    may even disrupt the needs of the processes (VoIP being a canonical
    example). The end-to-end design principle shapes how developers work,
    forcing them to consider the specific demands they place on the network
    and to guarantee those demands themselves. However, by placing these
    demands on the end processes, the Internet in a sense "passes the buck";
    by not providing services itself, it places responsibility elsewhere.
    Placing this responsibility on hosts opens up possibilities for abuse; the
    network does not, for instance, keep track of where a packet actually
    originated, making it difficult to trace attacks.
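
    One familiar way the ends provide their own guarantees is an
    acknowledge-and-retry loop implemented above the network rather than in
    it. A toy sketch (the lossy channel and its loss rate are invented for
    illustration, not taken from the paper):

```python
import random

_rng = random.Random(42)  # fixed seed so the example is repeatable

def unreliable_send(msg, loss_rate=0.3):
    """Hypothetical lossy channel: drops the message with some
    probability, and returns an acknowledgment otherwise."""
    return None if _rng.random() < loss_rate else ("ack", msg)

def reliable_send(msg, max_tries=10):
    """End-to-end reliability at the application layer: retry until the
    far end acknowledges, regardless of what the network does in between."""
    for attempt in range(1, max_tries + 1):
        if unreliable_send(msg) is not None:
            return attempt  # number of tries that were needed
    raise TimeoutError("no acknowledgment after %d tries" % max_tries)
```

    The network may drop, duplicate, or delay; the application neither knows
    nor cares, because the guarantee lives entirely at the endpoints.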




    This archive was generated by hypermail 2.1.6 : Wed Oct 13 2004 - 00:28:14 PDT