review of paper 9

From: Shobhit Raj Mathur (shobhit@cs.washington.edu)
Date: Sun Oct 17 2004 - 13:04:08 PDT

  • Next message: Jonas Lindberg: "Review of "Congestion Avoidance and Control""

    Congestion Avoidance and Control
    ================================

    This paper introduces congestion control to TCP. The original TCP/IP
    protocol did not deal with congestion at all, which resulted in a series
    of 'congestion collapses' during 1986 in which usable bandwidth dropped
    by a factor of a thousand. The paper proposes seven new algorithms to be
    added to the original TCP/IP design to deal with congestion.

    In the original TCP/IP protocol, hosts would send packets into the
    network as fast as the receiver's advertised window allowed. The window
    controlled the flow between the end hosts, but this often resulted in
    congestion at some intermediate router and in packets being dropped. The
    hosts would then retransmit the dropped packets, causing more congestion
    and very low throughput. This analysis motivated the author to come up
    with new algorithms for congestion control. These algorithms are still
    used in present-day TCP/IP and are well known as "Additive
    Increase/Multiplicative Decrease" and "Slow Start".
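
    The two window-update rules named above can be sketched as follows. This
    is a minimal illustration, not the paper's code; the variable names
    (cwnd, ssthresh) follow later TCP usage and are assumptions on my part.

```python
SEGMENT = 1  # window measured in segments, for simplicity

def on_ack(cwnd, ssthresh):
    """Grow the window on each ACK: exponentially while below the
    threshold (slow start), linearly above it (additive increase)."""
    if cwnd < ssthresh:
        return cwnd + SEGMENT                    # slow start: doubles per RTT
    return cwnd + SEGMENT * SEGMENT / cwnd       # congestion avoidance: +1 per RTT

def on_loss(cwnd):
    """On a loss signal, halve the threshold (multiplicative decrease,
    d = 0.5) and fall back to slow start with a one-segment window."""
    ssthresh = max(cwnd / 2, 2 * SEGMENT)
    return SEGMENT, ssthresh                     # (new cwnd, new ssthresh)
```

    The key asymmetry is visible here: the window climbs slowly (additively)
    but collapses quickly (multiplicatively) when the network signals a loss.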

    I will not describe the algorithms here, as they are well-known aspects
    of TCP congestion control; instead I would like to discuss some of the
    interesting results and ideas. I liked the analogy the author draws
    between the physics of flow and network packets; it is interesting that
    the motivation came from the well-studied phenomenon of 'equilibrium' in
    physics. The author suggests using ACKs to pace the transmission of
    packets. This makes TCP 'self clocking' and also ensures that data will
    be injected into the network at no more than twice the maximum rate
    available on the path. The idea of an adaptive beta (for RTT variation)
    is a good one, as it improves performance over varied load conditions.
    The intuitive values for the constants d (0.5) and u (1) have proved
    long lasting and are still in use today. This is all the more impressive
    because internet applications have changed completely over these 15
    years, yet these values still do the trick.
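
    The adaptive retransmit timer mentioned above can be sketched like this:
    track a smoothed RTT together with its mean deviation, and derive the
    timeout from both so that it widens under varied load. The gains (1/8
    and 1/4) and the factor of 4 are the values commonly used in
    implementations; treat them as assumptions here, not a quote from the
    paper.

```python
class RttEstimator:
    """Sketch of an adaptive retransmit-timer estimator in the spirit
    of the paper: a timeout that scales with measured RTT variation."""

    def __init__(self, first_sample):
        self.srtt = first_sample          # smoothed round-trip time
        self.rttvar = first_sample / 2    # mean deviation of the RTT

    def update(self, sample):
        err = sample - self.srtt
        self.srtt += err / 8                          # gain 1/8 (assumed)
        self.rttvar += (abs(err) - self.rttvar) / 4   # gain 1/4 (assumed)

    def rto(self):
        # A fixed-multiple timer (e.g. 2 * srtt) misfires under load;
        # adding the deviation term lets the timeout adapt instead.
        return self.srtt + 4 * self.rttvar
```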

    The author bases most of his arguments (the values of d and u,
    exponential backoff, etc.) on intuition or on analogies from other
    fields such as physics and linear system theory. Though this is a nice
    approach, more rigorous justifications would be welcome. A packet loss
    forces the system back into the slow start phase, which may cause
    unnecessary delays when the packet was dropped even though the network
    is not congested. The Fast Recovery algorithm proposed later takes care
    of this. Another algorithm proposed later is Fast Retransmit: the source
    does not wait for a packet to time out to realize it has been lost, but
    instead uses duplicate ACKs as the retransmission signal, which makes
    retransmissions faster. The paper also does not address the issue of
    fair distribution of resources. A malicious user may blast the network
    with his own packets; the other users will then detect congestion and
    reduce their windows. The malicious user can then consume the freed
    bandwidth for his own use and may ultimately drive out the other users.
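
    The duplicate-ACK idea behind Fast Retransmit can be sketched as below:
    when the receiver keeps acknowledging the same sequence number, the
    sender infers a loss without waiting for a timeout. The threshold of
    three duplicates is the value commonly used in practice, an assumption
    here rather than something stated in this review.

```python
DUP_ACK_THRESHOLD = 3  # commonly used trigger; an assumption here

def fast_retransmit_check(acks):
    """Scan a stream of cumulative ACK numbers and return the sequence
    number to retransmit once the same ACK repeats three extra times;
    return None if no retransmission is triggered."""
    last, dups = None, 0
    for ack in acks:
        if ack == last:
            dups += 1
            if dups == DUP_ACK_THRESHOLD:
                return ack   # retransmit the segment starting at this number
        else:
            last, dups = ack, 0
    return None
```

    For example, the ACK stream 1, 2, 3, 3, 3, 3 triggers a retransmission
    of segment 3 well before any timer would have expired.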

    By making minimal changes to the original TCP, the author incorporates
    the congestion control algorithms into it. Though these algorithms are
    not sufficient for a good congestion control mechanism, and more
    algorithms such as Fast Recovery and Fast Retransmit were added later,
    this paper addresses one of the most important components of TCP/IP.
    The illustrations and graphs make the paper very readable. I found its
    structure a bit unusual: there is no abstract, there are lots of
    footnotes, and there are 6 pages of appendices. The paper uses the term
    "congestion avoidance" in the title and many times in the text, and I
    don't entirely agree with this. The algorithms use a packet loss as the
    signal of network congestion: they increase the load on the network
    until there is a loss, which then indicates a congested network. This
    is the only means of detecting the available bandwidth. So it is ironic
    that some of these algorithms are called congestion avoidance when they
    are designed to push the network into congestion. Hence in my review I
    have used the term congestion control rather than congestion avoidance.

    On the whole it is a wonderful paper, which lays the foundation for the
    congestion control mechanism in the TCP/IP protocol. The TCP congestion
    control mechanism has since been modified several times, but its
    essence is still the one described in this paper.



    This archive was generated by hypermail 2.1.6 : Sun Oct 17 2004 - 13:04:09 PDT