From: Lillie Kittredge (kittredl@u.washington.edu)
Date: Sun Oct 17 2004 - 23:02:43 PDT
Congestion avoidance
This paper represents the Internet development community's first steps toward
codifying, controlling, and avoiding congestion.
The authors offer a number of insights into the nature of network congestion,
and some ways to deal with it. I found it interesting that they talk about
the network as "self-clocking": a sender only puts a new packet into the
network when an ack comes back for one it already sent. I suspect that this
is no longer strictly the case, particularly when a sender is talking to
multiple other nodes at the same time (though it is not entirely clear
whether the paper addresses this).
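As a toy illustration of the self-clocking rule (the class and all the names
here are mine, not code from the paper), each returning ack releases exactly
one new packet into the network:

    # Toy sketch of ack clocking: after an initial window is in flight,
    # a new packet goes out only when an ack comes back, so the sending
    # rate settles to the rate at which packets (and thus their acks)
    # get through the slowest link.
    class Sender:
        def __init__(self, window, total):
            self.total = total           # packets we want to deliver
            self.sent = 0
            for _ in range(window):      # initial burst fills the pipe
                self.send_packet()

        def send_packet(self):
            if self.sent < self.total:
                self.sent += 1
                print(f"sent packet {self.sent}")

        def on_ack(self):
            # An ack means a packet left the network, so there is room
            # for exactly one more: this is the self-clocking.
            self.send_packet()

    s = Sender(window=4, total=10)
    for _ in range(10):
        s.on_ack()                       # each ack clocks out one packet
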
The first technique they offer is one for getting the flow of packets to an
equilibrium: slow-start. In this, a sender first sends just one packet, then
a few, then more and more each round trip until it reaches a maximum.
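A rough sketch of that growth as I understand it (the variable names and the
ceiling of 16 are my own choices for illustration, not values from the
paper):

    # Toy sketch of slow-start: the congestion window starts at one
    # packet and grows by one packet per ack received, which doubles
    # it every round trip until it hits a ceiling.
    SSTHRESH = 16                  # assumed ceiling on exponential growth

    cwnd = 1                       # congestion window, in packets
    rtt = 0
    while cwnd < SSTHRESH:
        rtt += 1
        cwnd += cwnd               # one ack per packet in flight
        print(f"after round trip {rtt}: cwnd = {cwnd} packets")

The window therefore reaches its ceiling in logarithmically many round trips
rather than being dumped on the network all at once.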
There are a number of ways in which this reflects an earlier state of the
Internet. I notice that they make rather trusting assumptions about node
behavior. In section 2 they write, "Assuming that the protocol
implementation is correct," nodes won't add packets when the network is too
congested for them. As the Internet has matured, we've had to consider more
and more the possibility that nodes are simply up to no good and may not be
obeying the protocols at all.
It's also interesting to see how assumptions have changed; for instance, the
paper uses packet loss as a signal of congestion, on the assumption that
packets are almost never damaged in transit, so a lost packet nearly always
means a router dropped it from a full queue. Though this is reasonable in
wired networks, on newer and noisier media like wireless this assumption is
really no longer valid.
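As a rough illustration of why that matters (the halving here is the usual
multiplicative-decrease reaction, and all the names are mine): the sender
cannot tell a queue drop from a bit-error drop, so a noisy wireless link
triggers the same back-off as genuine congestion:

    # Toy sketch: any loss is treated as congestion and the window is
    # halved, whether the packet was dropped by a full queue or
    # corrupted on a noisy link.
    def on_packet_loss(cwnd, min_window=1):
        return max(cwnd // 2, min_window)

    cwnd = 32
    for loss in range(3):          # three losses in a row, any cause
        cwnd = on_packet_loss(cwnd)
        print(f"after loss {loss + 1}: cwnd = {cwnd} packets")
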
The last section, about putting congestion control in routers, makes me
curious where the balance now lies: who does more of the congestion control
today, routers or hosts?