From: Alan L. Liu (aliu@cs.washington.edu)
Date: Tue Oct 05 2004 - 21:50:58 PDT
# What is the main result of the paper?
The paper describes IP, the protocol for interconnecting separate
packet-switched networks, and TCP, a proposed Transmission Control
Program that sends packets over IP to create a byte-stream between two
processes that may reside on different hosts.
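As a concrete illustration of that byte-stream abstraction, here is a
minimal sketch using the Berkeley sockets API (which postdates the
paper); the loopback address and port are invented for the example:

    import socket, threading

    # One process plays both endpoints here purely for illustration.
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("127.0.0.1", 5610))        # hypothetical loopback port
    srv.listen(1)

    def echo_once():
        conn, _ = srv.accept()
        conn.sendall(conn.recv(1024))    # TCP delivers the bytes in order, intact
        conn.close()

    threading.Thread(target=echo_once).start()

    cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    cli.connect(("127.0.0.1", 5610))
    cli.sendall(b"hello over a byte-stream")
    print(cli.recv(1024))                # b'hello over a byte-stream'
    cli.close()
    srv.close()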
# What strengths do you see in this paper?
The paper is cognizant of the factors crucial to the adoption of any
internetworking protocol. The authors design IP to reduce the
complexity of the gateway interface, making gateways more economically
feasible. Having an agreed-upon addressing scheme that tells a gateway
where to forward packets seems obvious, but it is what makes routing
easy. The key principle is to reduce the amount of global knowledge
needed to operate an internet. Because global knowledge is very hard to
acquire, this aspect of IP's design is what ultimately allows the
Internet to scale remarkably well, far beyond the authors' imagination.
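To make that principle concrete, here is a toy sketch of a gateway that
forwards using only a small local table keyed by destination network,
with no global topology; the prefixes and next-hop names are invented:

    import ipaddress

    # Toy forwarding table: destination network -> next hop (all invented).
    TABLE = {
        ipaddress.ip_network("10.1.0.0/16"): "gateway-a",
        ipaddress.ip_network("10.2.0.0/16"): "gateway-b",
    }

    def next_hop(dst: str) -> str:
        addr = ipaddress.ip_address(dst)
        for net, hop in TABLE.items():
            if addr in net:          # match on the network part of the address
                return hop
        return "default-gateway"     # everything else goes upstream

    print(next_hop("10.2.3.4"))      # gateway-b, with no global knowledge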
The strength of the TCP description is that it specifies how to provide
a reliable byte-stream, something many networking applications require.
The sliding-window protocol lets TCP handle many different network
bandwidth situations through flow control, and it also solves the
problem of needing to reuse sequence numbers.
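A rough sketch of both points, much simplified (real TCP also tracks
retransmission timers and per-segment state): the receiver's advertised
window bounds how many unacknowledged bytes may be in flight, and
sequence numbers wrap modulo 2**32 so they can be reused:

    SEQ_MOD = 2 ** 32   # sequence numbers wrap, allowing reuse

    def in_flight(next_seq: int, acked: int) -> int:
        return (next_seq - acked) % SEQ_MOD

    def can_send(next_seq: int, acked: int, window: int, size: int) -> bool:
        # The receiver's advertised window throttles a fast sender.
        return in_flight(next_seq, acked) + size <= window

    next_seq, acked = 100, 100
    print(can_send(next_seq, acked, window=4096, size=1000))  # True
    next_seq = (next_seq + 4000) % SEQ_MOD                    # 4000 bytes sent
    print(can_send(next_seq, acked, window=4096, size=1000))  # False: window full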
# What are some key limitations, unproven assumptions, or methodological
problems with the work?
The number of addresses is like a certain MSFT chairman's "640K should
be enough for anybody" prediction, i.e., an arbitrary and horrible
underestimate. One problem with this is that address usage on the
Internet demands an almost all-or-nothing approach: the address format
is one of the few things that all parties must agree on if they hope to
participate with one another.
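The arithmetic is stark. The paper proposes an 8-bit network field (if
I recall it correctly), and even IPv4's eventual 32-bit addresses have
proven too few:

    # Back-of-the-envelope address counts.
    print(2 ** 8)    # 256 networks under an 8-bit network field
    print(2 ** 32)   # 4294967296 total IPv4 addresses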
Another piece missing from TCP/IP is a datagram model. The byte-stream
model has no mechanism for latency-sensitive packets. As was later
realized, supporting UDP required an uncomfortable separation between
the TCP and IP layers.
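For contrast, a datagram in the style of what later became UDP,
sketched with the modern sockets API (the loopback port is invented):

    import socket

    # One datagram: no connection, no ordering, no retransmission;
    # suited to latency-sensitive traffic where a stale packet is useless.
    rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    rx.bind(("127.0.0.1", 5611))        # hypothetical port

    tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    tx.sendto(b"audio frame 42", ("127.0.0.1", 5611))

    data, sender = rx.recvfrom(1024)    # arrives as one unit, or not at all
    print(data)                         # b'audio frame 42'
    tx.close(); rx.close()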
One assumption they made was that a process header is necessary in a
TCP packet. This seems unnecessary: processes can get exclusive access
to particular ports, so knowing the source address, source port, and
destination port is enough to determine which process should receive
the packet.
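A sketch of that demultiplexing argument (the addresses, ports, and
process names are invented): the transport layer can route a packet to
the right process from addressing information alone:

    # The connection tuple (src address, src port, dst port) already
    # identifies the receiving process, because a process holds
    # exclusive use of its local port.
    connections = {
        ("128.95.1.4", 40321, 80): "web server worker",
        ("128.95.1.4", 40322, 22): "ssh daemon",
    }

    def deliver(src_addr, src_port, dst_port):
        return connections.get((src_addr, src_port, dst_port), "no such process")

    print(deliver("128.95.1.4", 40321, 80))  # web server worker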
# How could the work be improved?
An upgrade path should have been considered, not just for routing
information but for all the global information necessary for IP to
work. The authors took the right approach in trying to keep gateways
simple and avoid relying on global information, but the global
information they did end up relying on becomes a weak point of the
protocol.
# What is its relevance today, or what future work does it suggest?
Of course TCP/IP has enabled this message to be sent over my DSL
connection in Lake City, through a bunch of routers, out the Qwest
gateway, and then, because of some strange peering policy, down to San
Jose, then back up to Seattle, onto campus, and safely (without any
message corruption, hopefully!) onto the 561 review site. Sometimes it
still seems like magic. :)
On the more serious side, some of the issues they glossed over include
accountability between different networks and ever-changing
associations between two processes as a means of security. The former
is an interesting issue because commercial interests play a big role in
Internet policy these days. The latter sounds almost like
"port-knocking," except performed continuously rather than only at
connection initiation. I wonder whether that is worth doing instead of
relying on a higher-level end-to-end secure protocol.
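A toy sketch of what such a continuously changing association might
look like (entirely hypothetical, not from the paper): both endpoints
derive the current rendezvous port from a shared secret and a coarse
clock, so the association keeps moving:

    import hmac, hashlib, time

    SECRET = b"shared-secret"            # invented for the sketch

    def current_port(secret: bytes, period: int = 30) -> int:
        # Both sides compute the same port for the current time window.
        epoch = int(time.time()) // period
        digest = hmac.new(secret, str(epoch).encode(), hashlib.sha256).digest()
        return 1024 + int.from_bytes(digest[:2], "big") % 64000

    print(current_port(SECRET))          # the port both sides would use now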