Congestion Avoidance and Control

From: Michael J Cafarella (mjc@cs.washington.edu)
Date: Mon Oct 18 2004 - 01:32:19 PDT

    Congestion Avoidance and Control
    By V. Jacobson

    Review by Michael Cafarella
    CSE561
    October 18, 2004

    Main result:
    The author describes how the then-current TCP/IP stack could
    result in congestive failure. When traffic loads became too great,
    TCP/IP could generate some perverse behavior and end up carrying
    much less than the link capacity. The author has a series of
    pretty interesting tricks for dealing with the problem.

    Slow-start is the technique the author suggests hosts use to
    find the correct transmission rate: a connection starts with a
    window of one packet and opens the window by one packet for
    each ACK received, so the sending rate ramps up over a few
    round trips instead of in one burst. It often happens that a
    multihop path has one very slow link, where congestion easily
    accumulates. Hosts may not think much of transmitting a handful
    of packets, but the accumulation of all of these at a congestion
    point can be catastrophic. Better for each host to proceed
    extremely cautiously, to avoid starting congestion at all.
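    To make the ramp-up concrete, here is a minimal sketch of
    per-ACK window growth in the spirit of the paper; the names
    cwnd and ssthresh and the use of whole-segment units are my
    assumptions, not anything the review itself specifies.

        def on_ack(cwnd, ssthresh):
            """Grow the congestion window on each ACK."""
            if cwnd < ssthresh:
                # Slow-start: one extra segment per ACK, so cwnd
                # roughly doubles every round trip.
                return cwnd + 1
            # Congestion avoidance: about one extra segment per RTT.
            return cwnd + 1.0 / cwnd

    Starting from cwnd = 1, the window reaches the path's capacity
    within a handful of round trips without dumping a large burst
    onto a slow intermediate link.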

    Hosts should also observe "packet conservation," whereby a new
    packet is not injected into the network until an old one has
    left it; in particular, hosts should not retransmit packets
    that were never actually lost. Doing this efficiently requires
    a very good estimate of the round-trip time: too short, and the
    host creates duplicates; too long, and bandwidth sits idle. The
    author goes to some lengths to show how to track the variance
    in the RTT estimate, and how to space retransmitted packets
    with exponential backoff.
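    The retransmit-timer machinery the review alludes to can be
    sketched as a smoothed mean plus a mean-deviation term; the
    gains (1/8 and 1/4) and the rto = srtt + 4*rttvar formula
    below are the values standardized later (RFC 6298), so treat
    them as assumptions rather than quotes from the paper.

        class RttEstimator:
            """Mean-plus-deviation RTT estimator with backoff."""
            def __init__(self):
                self.srtt = None     # smoothed round-trip time
                self.rttvar = None   # smoothed mean deviation
                self.rto = 3.0       # initial timeout, seconds

            def on_measurement(self, rtt):
                if self.srtt is None:
                    self.srtt = rtt
                    self.rttvar = rtt / 2.0
                else:
                    err = rtt - self.srtt
                    self.srtt += err / 8.0
                    self.rttvar += (abs(err) - self.rttvar) / 4.0
                self.rto = self.srtt + 4.0 * self.rttvar

            def on_retransmit(self):
                # Exponential backoff spaces successive
                # retransmissions of the same packet apart.
                self.rto *= 2.0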

    Finally, hosts should avoid congestion by watching for lost
    packets. The author asserts that a good TCP/IP implementation
    should lose packets to corruption only rarely; far more often,
    a loss is a sign of congestion somewhere downstream. That
    signal, too, can be used to govern the transmission rate: on a
    loss, the sender should cut its window sharply rather than keep
    pushing into a full queue.
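    Treating loss as a congestion signal leads to the
    multiplicative-decrease step; the halving constant and the
    reset to one segment below follow the usual slow-start scheme,
    but the function is only an illustrative sketch that pairs
    with on_ack above.

        def on_timeout(cwnd):
            """React to a lost packet as a congestion signal."""
            ssthresh = max(cwnd / 2.0, 2)  # multiplicative decrease
            return 1, ssthresh             # restart from one segment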

    Some of the problems in this paper arise from its age. It
    doesn't consider wireless networks, where non-congestive
    packet loss is common. It doesn't consider the impact of
    retransmissions on stream-like connections, since streaming
    applications were rare at the time. And a long-haul link with
    its bottleneck in the core is hard to find these days.

    But I think the paper could have been improved on its own
    merits by analyzing the ideas in a more integrated way. It
    presents a set of algorithms that are clearly similar (e.g.,
    using packet loss as a signal) and that interrelate (e.g.,
    inter-packet spacing under slow-start), yet they come across
    largely as a bunch of interesting tricks. Perhaps this could
    have been fixed by beefing up the queueing and modeling theory
    at the start of the paper.

    The paper's techniques were widely adopted, and some of them
    might even seem problematic or old-fashioned today (especially
    on wireless networks). Still, it's useful to read this material
    to understand how what's in most TCP/IP stacks came about.

