From: Seth Cooper (scooper@cs.washington.edu)
Date: Tue Oct 12 2004 - 23:05:34 PDT
This paper describes the end-to-end argument. The main idea of the
end-to-end argument is that lower-level subsystems of a network should
not implement functionality that will need to be reimplemented at a
higher level anyway. The paper goes on to discuss what some of this
functionality is; for example, delivery guarantees and duplicate
message suppression. The paper also notes that subsystems may still
implement some of this functionality at a lower layer if doing so
improves performance and does not impose a cost on higher layers that
don't need it.
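As a concrete illustration of the kind of functionality the paper has in
mind, here is a minimal sketch (my own, not from the paper) of duplicate
message suppression done at the end host: the receiving application
remembers which message IDs it has already processed, so it stays correct
even if the network below delivers the same message twice. The Receiver
class and handle method are hypothetical names.

    class Receiver:
        def __init__(self):
            self.seen_ids = set()  # IDs of messages already processed

        def handle(self, msg_id, payload):
            if msg_id in self.seen_ids:
                return  # duplicate delivered by the network; ignore it
            self.seen_ids.add(msg_id)
            self.process(payload)

        def process(self, payload):
            print("delivered once:", payload)

    r = Receiver()
    r.handle(1, "hello")
    r.handle(1, "hello")  # a retransmitted copy is suppressed at the end host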
One of the strengths of the end-to-end argument (aside from the strong
case the paper makes for it in general) is that it meshes very well with
another guiding principle of the Internet, fate sharing. Whereas fate
sharing places the responsibility of maintaining connection state on the
end hosts, the end-to-end argument places even more responsibility on
the hosts in the areas of functionality mentioned in the paper. This
lets the subnetwork remain as "dumb", stateless, and generic as possible
by placing those demands on the end hosts.
The argument the paper makes for delivery guarantees could be
strengthened, because delivery guarantees can only be made by the end
hosts. The paper mentions that even if a host receives a message
reliably, some disaster may strike it before it can act on the message.
This is really not such a big deal, because the disaster could just as
likely have struck before the message was received, and at that point
not receiving the message is probably of little concern (similar to
fate sharing). However, if a subsystem attempts to guarantee delivery
through hop-by-hop confirmations rather than end-to-end confirmations,
a particular hop might confirm receipt of a message and then go down
before it can forward it to the next hop. This is more of a problem for
the end hosts: the receiving host is still up but never received the
message, while the sending host believes the message was delivered
successfully. Thus, the only real delivery guarantee can come from the
end hosts themselves.
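To make this reasoning concrete, here is a minimal Python sketch (my own
illustration, not the paper's) of why only an end-to-end acknowledgement
is a real delivery guarantee: the sender keeps retransmitting until the
receiving host itself confirms, so a hop that accepts the message and
then crashes only delays delivery instead of silently losing it. The
functions send_over_network and wait_for_end_to_end_ack are stand-ins
for an unreliable subnetwork, not real APIs.

    import random

    def send_over_network(message):
        # Stand-in for an unreliable subnetwork: an intermediate hop may
        # confirm locally and then go down before forwarding.
        return random.random() > 0.5  # True only if the end host got it

    def wait_for_end_to_end_ack(delivered):
        # Only the receiving end host generates this acknowledgement.
        return delivered

    def reliable_send(message, max_tries=10):
        for attempt in range(1, max_tries + 1):
            delivered = send_over_network(message)
            if wait_for_end_to_end_ack(delivered):
                return attempt  # end-to-end confirmation received
        raise RuntimeError("no end-to-end ack; delivery status unknown")

    print("delivered after", reliable_send("hello"), "attempt(s)")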
One implication of this paper is that it might be useful to have a set
of networks, beyond the most generic one, that provide different
guarantees, since implementing functionality in lower layers is
essentially a performance trade-off. Each network in this set could
decide what level of reliability it would provide for a given service:
one might do its best to provide delivery guarantees using hop-by-hop
confirmations (thus reducing the actual number of end-to-end
retransmissions necessary), while another might forgo that for low
latency. Applications could then decide which subnetwork best suits
their needs. Applications that needed end-to-end guarantees would still
have to implement them themselves, but they could improve performance
by choosing a subnetwork that most closely met their end-to-end needs.
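A tiny sketch of how an application might use such a set of networks
(the network profiles and choose_network function are hypothetical,
purely to illustrate the idea): it picks the subnetwork whose guarantees
best match its needs, but still performs its own end-to-end confirmation
on top.

    NETWORKS = {
        "reliable": {"loss_rate": 0.01, "latency_ms": 80},  # hop-by-hop retries
        "fast":     {"loss_rate": 0.20, "latency_ms": 10},  # no retries, low latency
    }

    def choose_network(needs_low_latency):
        return "fast" if needs_low_latency else "reliable"

    net = choose_network(needs_low_latency=False)
    print("using", net, "- fewer end-to-end retransmissions expected,")
    print("but the application still acknowledges end to end")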