Review of "Congestion Avoidance and Control"

From: Ethan Katz-Bassett (ethan@cs.washington.edu)
Date: Sat Oct 16 2004 - 23:32:04 PDT

    In this paper, the author presents a series of algorithms (tweaks to
    TCP) to deal with the “congestion collapses” the Internet had recently
    experienced. The TCP congestion control methods Van Jacobson suggests,
    including additive increase/multiplicative decrease, slow start, and
    fast retransmit (mentioned but not described in the paper), are still
    in use today. Like Clark’s paper on the motivations behind the
    Internet protocols, this paper helps us understand how and why TCP is
    the way it is. Since TCP is used now more than ever, we should
    understand the challenges that produced its various mechanisms; we can
    then better judge what is still relevant and what should be
    reexamined.
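
    To keep the mechanisms straight for myself, here is a rough sketch in
    Python (my own, not the paper’s 4.3BSD code; the initial threshold and
    the unit of one segment are my assumptions) of how slow start and
    additive increase/multiplicative decrease fit together:

        class CongestionWindow:
            """Slow start up to a threshold, then additive increase;
            multiplicative decrease (and a new slow start) on loss."""

            def __init__(self, mss=1.0):
                self.mss = mss
                self.cwnd = 1.0 * mss       # open with a single segment
                self.ssthresh = 64.0 * mss  # assumed initial threshold

            def on_ack(self):
                if self.cwnd < self.ssthresh:
                    self.cwnd += self.mss   # slow start: doubles per RTT
                else:
                    # congestion avoidance: roughly +1 segment per RTT
                    self.cwnd += self.mss * self.mss / self.cwnd

            def on_timeout(self):
                # a loss halves the target window, then slow start resumes
                self.ssthresh = max(self.cwnd / 2.0, 2.0 * self.mss)
                self.cwnd = 1.0 * self.mss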

    The author structures the paper around the three ways packet
    conservation can fail, rather than around the algorithms. I enjoyed
    this structure; it foregrounds the motivation. Slow start seems an
    appropriate mechanism to help connections reach equilibrium. The next
    section, on conserving packets once at equilibrium, is more
    problematic: claiming that “only one scheme [exponential backoff] will
    work,” without being able to prove it, seems cheap. In the third
    section, I liked the explanation of why a lost packet serves as a
    “network is congested” signal; a scheme that uses mechanisms already
    built into the network (and hence requires less reengineering) is much
    easier to adopt. The end-host mechanism requires no modifications to
    routers, and the gateway mechanism (mentioned in section 4) requires
    no modifications to hosts.
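
    For what it is worth, the backoff scheme itself is easy to state even
    if its uniqueness is not proved: each retransmission of the same
    segment doubles the retransmit timeout. A sketch (the initial value
    and the cap are my guesses, not the paper’s):

        def backed_off_rto(base_rto, n_timeouts, max_rto=64.0):
            """Exponential backoff: double the timeout per retry, clamped."""
            return min(base_rto * (2 ** n_timeouts), max_rto)

        for n in range(5):
            print(f"after {n} timeouts: rto = {backed_off_rto(1.0, n)}s")
        # prints 1.0s, 2.0s, 4.0s, 8.0s, 16.0s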

    The fact that ½ and 1 are still used as the decrease and increase
    terms seems to point to the power of inertia in system design. The
    author admits that 1 is “almost certainly too large.” And, if I
    interpret it all correctly, ½ seems too conservative in a system in
    which links routinely carry more than two simultaneous flows. On the
    other hand, being too conservative is certainly better than being
    overly aggressive under serious congestion (hence additive
    increase/multiplicative decrease in the first place).
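
    The back-of-envelope behind my worry (my arithmetic, not the
    paper’s): when one of N equal flows halves its window, only 1/(2N) of
    the link’s load is shed, and the halved flow then needs cwnd/2 round
    trips of +1-per-RTT additive increase to win its share back.

        def freed_fraction(n_flows):
            # fraction of link load shed when one of n equal flows
            # halves its window
            return 1.0 / (2 * n_flows)

        for n in (2, 10, 100):
            print(f"{n} flows: one halving sheds {freed_fraction(n):.1%}")
        # 2 flows: 25.0%; 10 flows: 5.0%; 100 flows: 0.5%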

    I am curious: the article mentions (without describing) fast
    retransmit, but the textbook claims it was added later, in response to
    a different problem (dead time spent waiting for a retransmit timer).
    It seems Van Jacobson had already thought of it, though since the
    paper offers no explanation, it is impossible to know whether he
    envisioned it in the form the book describes.
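
    For completeness, the textbook’s version is simple enough to sketch
    (the threshold of three duplicate ACKs comes from the textbook, and
    the names here are mine, not Van Jacobson’s):

        DUP_ACK_THRESHOLD = 3

        def on_ack(ack_seq, state, retransmit):
            """Resend on three duplicate ACKs instead of waiting for
            the retransmit timer to expire."""
            if ack_seq == state.get("last_ack"):
                state["dup_acks"] = state.get("dup_acks", 0) + 1
                if state["dup_acks"] == DUP_ACK_THRESHOLD:
                    retransmit(ack_seq)  # the missing segment starts here
            else:
                state["last_ack"] = ack_seq
                state["dup_acks"] = 0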
