From: Alan L. Liu (aliu@cs.washington.edu)
Date: Sun Oct 03 2004 - 23:03:03 PDT
* What is the main result of the paper?
The Internet as we know it, a packet-switched facility based on the
datagram model, was built to allow different networks to interoperate.
The set of design goals, and especially their order of importance,
greatly shaped the resulting architecture.
* What strengths do you see in this paper?
The paper describes the foremost goal that gave rise to the Internet:
creating an effective interconnection between existing (and future)
computer networks. The Internet had to be flexible enough to support
widely disparate networks, so heterogeneity was a built-in assumption
of its model. A strength of the paper is that it makes clear that, had
the goals of the project been ranked differently, a vastly different
architecture could have arisen. This drives home the point that
tradeoffs are unavoidable in designing the Internet protocols -- there
is no single solution that serves all the needs of its users.
The paper also does a good job of explaining the consequences of many
of the design decisions. For instance, by offering a datagram model
instead of forcing a byte-stream model, the architecture can support
streaming data, where the loss of an occasional packet is not critical
but latency is. The byte-stream model, on the other hand, is critical
for applications such as file transfer, where every byte must arrive
correctly and latency matters less than reliability. A minimal sketch
of how the two models surface to an application appears below.
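(A Python sketch using only the standard socket module; the host name
and port are hypothetical, chosen just to show the two models side by
side.)

    import socket

    # Datagram model (UDP): each send is an independent packet.
    # Delivery and ordering are not guaranteed, so a lost packet is
    # simply gone -- acceptable for streaming, where a late packet is
    # as useless as a lost one.
    udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    udp.sendto(b"audio frame 42", ("example.com", 9999))  # hypothetical

    # Byte-stream model (TCP): the transport retransmits and reorders
    # as needed, so every byte arrives intact and in order -- necessary
    # for file transfer, at the cost of extra latency whenever a packet
    # must be retransmitted.
    tcp = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    tcp.connect(("example.com", 9999))  # hypothetical server
    tcp.sendall(b"entire file contents, delivered reliably, in order")
    tcp.close()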
* What are some key limitations, unproven assumptions, or methodological
problems with the work?
One problem is that the paper calls the Internet successful without
providing metrics or a definition of "success," in much the same way
that the author admits the central goal of designing an "effective
interconnection" was stated without defining "effective." So, is the
Internet really a success? One might consider it a success if the
higher-ranked goals were mostly met, but today's Internet is used
almost in the opposite way from what was envisioned two decades ago.
In fact, the paper notes that a "commercial" Internet would look
vastly different, which is telling, because commercial interests now
play a significant role in how the Internet is used.
* How could the work be improved?
The paper could be improved with concrete data to back up its claims.
Although many of the statements seem self-evident, they rest on a lot
of underlying assumptions. For example, when explaining the per-packet
overhead of headers, the argument is that a large file transfer makes
that overhead negligible. But what percentage of all connections are
file transfers? What overhead is acceptable for remote terminal
connections? What quantifiable impact does the overhead have, not just
in bytes but on the Internet user experience? These questions and more
could have been investigated, or at least attempted, instead of only
being mentioned in passing. A back-of-the-envelope calculation below
shows how stark the contrast is.
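(A small Python calculation, assuming the typical 20-byte IPv4 header
plus 20-byte TCP header with no options; the payload sizes are
illustrative, not measured.)

    HEADER_BYTES = 20 + 20  # typical IPv4 + TCP headers, no options

    def header_overhead(payload_bytes):
        """Fraction of each packet consumed by headers."""
        return HEADER_BYTES / (HEADER_BYTES + payload_bytes)

    # A file transfer filling packets near a 1500-byte Ethernet MTU:
    print(f"file transfer:   {header_overhead(1460):.1%}")  # ~2.7%
    # A remote terminal sending one keystroke per packet:
    print(f"remote terminal: {header_overhead(1):.1%}")     # ~97.6%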
* What is its relevance today, or what future work does it suggest?
The fact that today's Internet has a more commercial bent suggests
that the decision to rely on packet switching instead of virtual
circuits should be revisited. Likewise, because resource
accountability was a low-priority goal in the original design,
Denial-of-Service attacks have become a major problem in the current
Internet. This raises questions that need further exploration: is
there any good way to fix the problems caused by the unmet goals
within the current architecture, is a new interconnection network
built from scratch with a different ordering of goals the only
solution, or does the answer lie somewhere in between?