Review-5

From: Pravin Bhat (pravinb@u.washington.edu)
Date: Mon Oct 18 2004 - 07:18:10 PDT


    The paper presents a survey of algorithms developed in response to the
    'congestion collapses' that plagued the internet during the late 1980s.

    This paper is an important milestone in the evolution of the internet, which
    has had to continually adapt to its exponential rate of growth since its
    inception. During the 1980s the internet began to see two phenomena that the
    original TCP/IP wasn't designed to handle:
    1) A large number of simultaneous users
    2) Routing bottlenecks, as the internet topology could no longer be
        maintained as an optimally connected graph.

    The paper recognized congestion collapse, the unforeseen problem caused by
    these two phenomena, as an essential problem that had to be dealt with, and
    presented several algorithms that had proven successful at controlling
    congestion on real-world networks.

    The key contribution of this paper is the proposal that TCP should follow
    the principle of 'conservation of packets': a sender should never flood the
    network with more packets than its capacity allows, even if the receiver
    could, in theory, handle the load. The algorithms the paper proposes are
    simply host-centric mechanisms that measure the ever-changing network
    capacity and respond accordingly. A sketch of the idea follows.
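
    As a minimal sketch (my own, hypothetical; not code from the paper), the
    principle amounts to self-clocking: a sender in equilibrium injects a new
    packet only when an ACK signals that an old one has left the network.

        # Hypothetical sketch of a self-clocked sender obeying
        # 'conservation of packets': a new packet enters the network
        # only when an ACK reports that an old one has left it.
        class SelfClockedSender:
            def __init__(self, cwnd: int):
                self.cwnd = cwnd    # estimated network capacity (packets)
                self.in_flight = 0  # packets currently in the network

            def can_send(self) -> bool:
                # Never exceed the estimated capacity, even if the
                # receiver advertises a larger window.
                return self.in_flight < self.cwnd

            def on_send(self) -> None:
                self.in_flight += 1

            def on_ack(self) -> None:
                # An ACK means a packet has left the network, so one
                # new packet may now enter: the 'ACK clock'.
                self.in_flight -= 1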

    The paper is very well written. It presents the general motivation and a
    high-level overview of the algorithms in the main body, making for an easy
    read, while the appendices and footnotes provide the interested reader with
    implementation details and mathematical analysis of the algorithms.

    The paper presents algorithms that were actually tried on real-world
    networks, which is probably why so many of its results are still in use
    today. Some of these algorithms would be counter-intuitive to someone
    approaching the problem fresh, without having read the paper; for example,
    coupling multiplicative decrease with additive increase, rather than
    multiplicative increase (see the sketch below).
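
    As a rough illustration (my sketch; the constants are illustrative, roughly
    matching the paper's increase of about one packet per round trip and a
    decrease factor of one half), the AIMD rule adjusts the congestion window
    like this:

        # Hypothetical AIMD sketch: cwnd is the congestion window in
        # packets; intended to be called once per round trip.
        def aimd_update(cwnd: float, loss_detected: bool,
                        alpha: float = 1.0, beta: float = 0.5) -> float:
            if loss_detected:
                # Multiplicative decrease: back off sharply, since a
                # loss signals the network is at or near congestion.
                return max(1.0, cwnd * beta)
            # Additive increase: probe gently for spare capacity.
            return cwnd + alpha

    The asymmetry is the point: multiplicative increase would overshoot the
    congestion point faster than the sender could back away from it.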

    Limitations and Areas for improvement:

    The paper refers to its algorithms as congestion-avoidance mechanisms when
    in fact they are congestion-control mechanisms. The techniques presented in
    the paper, with the exception of those in the future-work section, estimate
    the network load by periodically inducing some congestion.

    The paper presents techniques that succeed only as long as all TCP
    implementations adopt them faithfully. These improvements do not guard the
    internet against malicious users, obsolete TCP/IP stacks, and, most
    importantly, networking applications that do not use TCP.

    Some of the algorithms presented in the paper rest on the key assumption
    that packet drops due to transmission errors occur with a probability of
    less than 1/window-size. This assumption obviously fails in the face of
    increasing window sizes and the addition of error-prone links (e.g.
    wireless).
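
    To see why this matters, here is a hypothetical back-of-the-envelope
    simulation (my own, not from the paper): once random, non-congestion
    losses occur with probability near 1/window-size, the window is halved
    almost every round trip and throughput stalls.

        import random

        # Hypothetical simulation: average AIMD window under random
        # (non-congestion) per-packet loss probability p.
        def average_window(p: float, rtts: int = 10000, seed: int = 0) -> float:
            rng = random.Random(seed)
            cwnd, total = 1.0, 0.0
            for _ in range(rtts):
                # Probability that at least one of cwnd packets is
                # lost this round trip.
                if rng.random() < 1.0 - (1.0 - p) ** int(cwnd):
                    cwnd = max(1.0, cwnd / 2)  # multiplicative decrease
                else:
                    cwnd += 1.0                # additive increase
                total += cwnd
            return total / rtts

        # average_window(0.001) is far larger than average_window(0.05):
        # random losses, misread as congestion, cap the window.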

    Slow start and multiplicative decrease can often be too conservative. The
    authors were clearly aware of this, as they mention the fast-retransmit
    algorithm. I wish the authors had briefly summarized fast retransmit or
    explained the motivation behind it, to give the reader an incentive to
    follow up on the subsequent publication.
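
    For the curious reader, the gist, as I understand it (this sketch is my
    own, not the authors' summary): fast retransmit treats a run of duplicate
    ACKs, conventionally three, as an early loss signal and resends the
    missing segment without waiting for the retransmission timer.

        # Hypothetical fast-retransmit sketch: three duplicate ACKs
        # are taken as evidence of a lost segment.
        DUPACK_THRESHOLD = 3

        class FastRetransmit:
            def __init__(self):
                self.last_ack = -1
                self.dup_count = 0

            def on_ack(self, ack_no, retransmit):
                if ack_no == self.last_ack:
                    self.dup_count += 1
                    if self.dup_count == DUPACK_THRESHOLD:
                        # Resend the segment the receiver is waiting
                        # for, well before the timeout would fire.
                        retransmit(ack_no)
                else:
                    self.last_ack = ack_no
                    self.dup_count = 0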

    The paper suggests that the gateway side of congestion control should be
    implemented by dropping packets to implicitly signal possible congestion.
    This strategy keeps the network from reaching its optimal capacity by
    dropping packets unnecessarily. An alternative would be to use explicit
    signals; the use of force, i.e. dropping packets, should be a last resort.
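
    As an illustration of the alternative (an ECN-style scheme, which postdates
    this paper; the names and threshold here are hypothetical), a gateway could
    mark packets instead of dropping them once its queue passes a threshold:

        from collections import deque
        from dataclasses import dataclass

        QUEUE_THRESHOLD = 20  # illustrative queue length

        @dataclass
        class Packet:
            payload: bytes
            congestion_mark: bool = False

        def forward(packet: Packet, queue: deque) -> None:
            # Signal congestion by marking rather than dropping; the
            # receiver echoes the mark back, and the sender reacts as
            # it would to a loss, but no data is wasted.
            if len(queue) >= QUEUE_THRESHOLD:
                packet.congestion_mark = True
            queue.append(packet)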

    Future work:

    Router-centric congestion control: A router-centric approach to this
    problem would ensure fair resource allocation in addition to congestion
    control, and would protect against uncooperative hosts.

    Congestion avoidance: It is quite likely that a network that continuously
    runs just below the congestion threshold would outperform one that
    oscillates around it. More research needs to be put into mechanisms that
    avoid congestion instead of simply controlling it.

    High packet-error networks: The assumption that timeouts signal packet
    drops due to congestion and not due to transmission errors does not hold for
    wireless networks. A future area of research would be to develop mechanisms
    that can differentiate between the two kinds of errors.

