Review of Congestion Avoidance and Control

From: Andrew Putnam (aputnam@cs.washington.edu)
Date: Mon Oct 18 2004 - 01:32:58 PDT

    Congestion Avoidance and Control
    Van Jacobson

    Summary: The author presents congestion control mechanisms for TCP
    that seek to prevent collapse of the network under congestion. The
    primary mechanisms are slow start and exponential backoff.

    The congestion collapses of the Internet were caused by poor congestion
    control in the implementation of TCP, the dominant transport protocol
    on the Internet. From a control-theory standpoint this is expected:
    load on the network can grow exponentially, while senders were not
    reducing their input at a comparable rate.

    One key to preventing congestion collapse is to ensure that senders do
    not over-utilize links at the beginning of a transmission. To address
    this, the author implemented a slow-start mechanism that requires the
    initial transmissions to use very little bandwidth and then gradually
    increases utilization until the connection reaches capacity. The
    drawback of this policy is that Internet traffic is bursty, so the
    time required to find the available bandwidth may be comparable to the
    length of the transfer, meaning a significant amount of bandwidth is
    wasted. The author argues that this cost is small because the window
    opens super-linearly, doubling each round trip.
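
    To make the mechanism concrete, below is a minimal simulation of
    slow-start window growth in C. The variable names, the initial window
    of one segment, and the fixed advertised window are illustrative
    assumptions, not details taken from the paper's implementation.

        /* Sketch of slow start: the congestion window starts at one
         * segment and, because each ACK opens it by one segment, it
         * doubles every round trip until it reaches the advertised
         * window (or a loss occurs, which is not modeled here). */
        #include <stdio.h>

        int main(void) {
            int cwnd = 1;    /* congestion window, in segments */
            int rwnd = 64;   /* receiver-advertised window     */
            int rtt  = 0;    /* round-trip counter             */

            while (cwnd < rwnd) {
                printf("RTT %d: cwnd = %d segments\n", rtt, cwnd);
                cwnd *= 2;   /* one full window of ACKs doubles cwnd */
                rtt++;
            }
            printf("RTT %d: cwnd capped at rwnd = %d segments\n", rtt, rwnd);
            return 0;
        }

    Starting from a single segment, the window reaches a 64-segment
    advertised window in six round trips, which is why the author treats
    the startup cost as negligible for all but very short transfers.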

    The next important change was a dynamic update to the RTT estimate,
    which determines the retransmission timeout. Because the round-trip
    time and its variation grow rapidly as the network approaches
    saturation, a timeout that does not track them will often be too
    short, causing packets to be retransmitted when they are actually
    still in transit in the network. This further increases network
    congestion. Dynamically estimating both the mean RTT and its
    deviation helps prevent these spurious retransmissions.
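
    The sketch below shows the kind of estimator the paper argues for: a
    smoothed RTT plus a smoothed mean deviation, with the retransmission
    timeout derived from both so that it widens as the variance grows.
    The gains of 1/8 and 1/4 and the factor of four on the deviation are
    the values that became standard practice; the function and variable
    names here are only illustrative.

        #include <stdio.h>

        static double srtt   = 0.0;  /* smoothed round-trip time (seconds) */
        static double rttvar = 0.0;  /* smoothed mean deviation (seconds)  */

        /* Feed in one RTT measurement, return the new retransmit timeout. */
        static double update_rto(double sample) {
            if (srtt == 0.0) {                       /* first measurement  */
                srtt   = sample;
                rttvar = sample / 2.0;
            } else {
                double err = sample - srtt;          /* error vs. old mean */
                srtt   += 0.125 * err;               /* gain 1/8           */
                rttvar += 0.25 * ((err < 0 ? -err : err) - rttvar); /* 1/4 */
            }
            return srtt + 4.0 * rttvar;  /* timeout tracks mean and spread */
        }

        int main(void) {
            double samples[] = { 0.10, 0.12, 0.30, 0.28, 0.11 };
            for (int i = 0; i < 5; i++)
                printf("sample %.2fs -> rto %.3fs\n",
                       samples[i], update_rto(samples[i]));
            return 0;
        }

    Because the deviation term grows when samples start to scatter, the
    timeout backs away from packets that are merely delayed instead of
    firing early and retransmitting them needlessly.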

    Another critical change is the exponential backoff mechanism. When the
    network becomes congested, senders slow their transmission rate
    exponentially until the network has recovered. After that, the
    senders slowly increase their utilization again to take advantage of
    the available bandwidth.
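
    A toy version of that increase/decrease policy is sketched below,
    assuming the sender halves its window on each loss signal and
    otherwise adds one segment per round trip; the loss pattern is
    fabricated purely for illustration, and the retransmit-timer backoff
    the paper also describes is not shown.

        #include <stdio.h>

        int main(void) {
            int cwnd = 32;                      /* segments in flight     */
            int loss[] = { 0, 0, 1, 0, 0, 0, 1, 0, 0, 0 };

            for (int rtt = 0; rtt < 10; rtt++) {
                if (loss[rtt])
                    cwnd = cwnd > 1 ? cwnd / 2 : 1; /* multiplicative cut */
                else
                    cwnd += 1;                      /* additive increase  */
                printf("RTT %2d: %s cwnd = %d\n",
                       rtt, loss[rtt] ? "loss" : "ok  ", cwnd);
            }
            return 0;
        }

    Repeated loss signals shrink the window exponentially, while recovery
    is deliberately gradual; this asymmetry is what keeps the network from
    oscillating back into congestion.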

    There are several questionable aspects of this paper. First and most
    troublesome is the assumption that network congestion is essentially a
    boolean condition: the network is either congested or it is not. This
    seems to be a gross oversimplification, and it leads to erratic
    behavior when the network is operating at high load. A better approach
    would be some kind of variable (multi-bit) value indicating the
    congestion level. This would allow senders to adapt in less drastic
    ways than exponential backoff, keeping network utilization higher.
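
    To be clear about what a multi-bit signal could buy, here is a purely
    hypothetical sketch (nothing like it appears in the paper): a gateway
    reports a congestion level between 0 and 1, and the sender scales its
    window cut to that level instead of always halving. The scaling rule
    and all names are invented for illustration.

        #include <stdio.h>

        /* Hypothetical graded response: level 0 means no congestion, so
         * probe upward; level 1 means severe congestion, so halve the
         * window; intermediate levels make proportionally smaller cuts. */
        static int adjust(int cwnd, double level) {
            if (level <= 0.0)
                return cwnd + 1;
            int next = (int)(cwnd * (1.0 - 0.5 * level));
            return next > 1 ? next : 1;
        }

        int main(void) {
            int cwnd = 32;
            double levels[] = { 0.0, 0.0, 0.25, 0.0, 1.0, 0.0 };
            for (int i = 0; i < 6; i++) {
                cwnd = adjust(cwnd, levels[i]);
                printf("signal %.2f -> cwnd = %d\n", levels[i], cwnd);
            }
            return 0;
        }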

    Another problem is the blurry line between bandwidth utilization
    problems and fairness problems. While fairness is certainly related,
    ensuring fairness is not part of congestion control itself. The two
    goals should be handled separately so that there are no false
    dependencies between congestion control and fairness regulation.

    Finally, the assumption that packet loss is almost always due to
    congestion is probably too strong, particularly in the presence of
    wireless links and faulty hardware. Using dropped packets as the
    congestion indicator also does not provide a particularly clear
    threshold for when congestion begins, so it is somewhat difficult to
    predict how the congestion control will behave. However, the choice is
    understandable, since any other signal would require software changes
    in every gateway on the Internet.

