From: Alan L. Liu (aliu@cs.washington.edu)
Date: Tue Oct 12 2004 - 22:45:29 PDT
The end-to-end design principle suggests placing functionality as high
up in a layered system as possible, because at lower layers that same
functionality may be less useful, more costly, and/or redundant.
The paper has several examples of what this means. A file-transfer
program can never be assured that nothing failed in the lower layers,
because hardware could always fail. It's unrealistic to make every lower
layer 100% foolproof, so in the end the file-transfer application has to
verify that it received what it wanted. This means that whether or not we
expend time and energy engineering reliability into lower layers, we end
up having to do the same check at the endpoints anyway.
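To make that concrete, here is a minimal sketch of what the endpoint
check might look like (I'm assuming SHA-256 as the checksum; the paper
doesn't prescribe any particular one, and how the sender's digest gets
to the receiver is left out):

import hashlib

def file_digest(path, chunk_size=65536):
    # Compute a SHA-256 digest of a file, reading it in chunks.
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while True:
            chunk = f.read(chunk_size)
            if not chunk:
                break
            h.update(chunk)
    return h.hexdigest()

def transfer_succeeded(expected_digest, received_path):
    # The end-to-end check: only this comparison, done by the application
    # itself, tells it the file really arrived intact, no matter what the
    # layers underneath claim.
    return file_digest(received_path) == expected_digest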
Another way of putting things into perspective is that one can never be
sure that a lower layer is doing things correctly, so it is always wise
to handle as much of the functionality as possible at a level one
controls. Everything else is icing on the cake.
The paper also makes it clear that the end-to-end argument is useful
for weighing trade-offs, not as some design commandment set in stone.
There are still reasons to put functionality at a lower level. For
instance, catching most errors at the packet level instead of the
whole-file level lets a file transfer detect and fix a corrupted
transmission much faster.
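A rough sketch of that idea, sending a file in chunks and retrying only
the damaged ones (the send_chunk call standing in for the transport is
hypothetical, as is the chunk size):

import hashlib

CHUNK_SIZE = 1024  # hypothetical; a real transfer would tune this

def send_file_in_chunks(data, send_chunk, max_retries=3):
    # send_chunk(index, chunk, digest) is a stand-in for the transport; it
    # should return True when the receiver's checksum of the chunk matches.
    for index, offset in enumerate(range(0, len(data), CHUNK_SIZE)):
        chunk = data[offset:offset + CHUNK_SIZE]
        digest = hashlib.sha256(chunk).hexdigest()
        for _ in range(max_retries):
            if send_chunk(index, chunk, digest):
                break  # this chunk arrived intact; only bad chunks get resent
        else:
            raise IOError("chunk %d failed after %d retries" % (index, max_retries))

Without the per-chunk check, a single corrupted packet near the end
would force the endpoints to resend the entire file.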
This philosophy explains why the Internet is datagram-based.
Applications that do not require reliable byte-stream communication
would not only have no use for this service, but would actually be
hindered by some of its aspects. In the end, the application's
requirements, be they demands on latency, bandwidth, or something else,
are what should drive network functionality.
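For example, a latency-sensitive application can speak raw datagrams
over UDP and handle loss on its own terms instead of waiting on
retransmissions it doesn't want (the address and payload below are just
placeholders):

import socket

DEST = ("192.0.2.1", 9999)  # placeholder peer address

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)  # UDP: no connection setup, no retransmission
sock.sendto(b"sensor reading: 42", DEST)
# If this datagram is lost, nothing below the application resends it; the
# application decides whether a stale reading is even worth resending.
sock.close()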
A problem with the way the paper presents the end-to-end argument is
its appeal to common sense. There are no facts or figures that
demonstrate that putting functionality at a lower layer is more costly
or only good for performance. Finding support for the former should not
be hard, since many systems are engineered to a certain level of
reliability. The latter is handled too vaguely. Surely a more reliable
network would increase the performance of a file transfer. The questions
are: by how much, where do the diminishing returns set in, and are there
quantifiable measures for determining the appropriate cost/performance
ratio?
Although it is true that performance and functionality requirements
should be application-driven, this does not necessarily mean we should
engineer everything for the lowest common denominator. It forces someone
with high performance needs to decide whether it makes sense to build on
top of this no-guarantee architecture, given that it already exists, or
whether a brand new architecture would be more suitable. Neither option
is necessarily great, because both may be hard and both place the burden
on those who want that particular application. An alternative would be
an architecture that did provide performance guarantees, on the grounds
that enough applications have a vested interest in performance to make
it worthwhile and more cost-effective than always rolling out
application-specific solutions.
Is the Internet's direction the correct one? How can we resolve the
tension caused by the end-to-end philosophy and performance needs? These
are all questions I did not put much thought into until I started doing
the reading for this class. I have no idea!