Review of "Congestion Control for High Bandwidth-Delay Product Networks"

From: Ethan Katz-Bassett (ethan@cs.washington.edu)
Date: Wed Oct 20 2004 - 00:52:03 PDT

  • Next message: Daniel Lowd: "XCP paper"

    In this paper, the authors write that TCP performs badly as delay and
    bandwidth increase, and they present their XCP protocol as an alternative.
    Rather than trying to adapt TCP to improve performance under these
    conditions, they start from scratch and design a new congestion control
    architecture. They use control theory to guide their design. One
    interesting aspect of XCP is its decoupling of bandwidth utilization control
    from fairness control. As we read in the Van Jacobson paper and the book,
    TCP addresses these problems with the same mechanisms. By decoupling them,
    XCP allows one policy to change independently of the other. In their
    simulations, XCP performed admirably. They also describe ways in which XCP
    could be deployed gradually.
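    The decoupling described above can be sketched roughly as two separate
    controllers. This is an illustrative simplification, not the paper's exact
    implementation; the constants alpha and beta are the stability parameters
    the paper derives, but the function names and the flow-table representation
    are assumptions of mine.

```python
# Hedged sketch of XCP's two decoupled controllers.
ALPHA, BETA = 0.4, 0.226  # stability constants derived in the paper

def efficiency_feedback(capacity, input_traffic, queue, avg_rtt):
    """Efficiency controller: aggregate feedback (bytes per control
    interval) depends only on spare bandwidth and persistent queue,
    not on how many flows there are."""
    spare = capacity - input_traffic
    return ALPHA * avg_rtt * spare - BETA * queue

def fairness_allocation(phi, flows):
    """Fairness controller (simplified AIMD policy): positive aggregate
    feedback is split equally among flows; negative feedback is taken
    from each flow in proportion to its current throughput."""
    if phi >= 0:
        return {f: phi / len(flows) for f in flows}
    total = sum(flows.values())
    return {f: phi * (tput / total) for f, tput in flows.items()}
```

    Because fairness only decides how to divide the aggregate feedback, the
    division policy could be swapped (e.g., for weighted shares) without
    touching the efficiency controller.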



    XCP does not attempt an end-to-end solution. Just looking at the relative
    size of the sections on XCP senders, receivers, and routers makes it clear
    that most of the work falls on the routers. They perform calculations and
    provide feedback to senders. The end-to-end approach might argue that XCP
    cannot provide a guaranteed solution, and, certainly, it appears that a
    misbehaving router could wreak havoc. Furthermore, it seems that, if the
    protocol is ever provoked into a state in which packets do start to drop, it
    might be in trouble, since the control information must travel from the
    bottleneck router to the receiver and then back through the network to the
    sender before the sender can adjust its window.
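    The sender-side reaction at the end of that loop is simple, per the paper
    (a minimal sketch; the MSS value and function name are my assumptions): the
    window only moves once the router-computed feedback survives the full
    round trip in the congestion header.

```python
MSS = 1500  # bytes; assumed segment size for this sketch

def on_ack(cwnd, h_feedback):
    """Sender window update on each ACK, simplified from the paper:
    cwnd grows or shrinks by the feedback the bottleneck router wrote
    into the congestion header, but never falls below one segment."""
    return max(cwnd + h_feedback, MSS)
```

    If feedback-carrying packets are themselves dropped, this update simply
    never fires, which is the stall the reviewer is worried about.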


    One thing that was not clear to me after reading the paper and looking at
    their example topographies is what happens when many flows share a link that
    is not the bottleneck, and some of these flows share a subsequent link that
    is a bottleneck. I may be reading it incorrectly, but it seems like the
    protocol would try to share the excess capacity on the non-bottleneck link
    evenly between all flows. The flows that later traverse the bottleneck
    would not increase their rate over the non-bottleneck link, yet the other
    flows would not be able to pick up this excess capacity beyond their "fair"
    share, so the non-bottleneck link would carry less traffic than it could.
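    A toy arithmetic version of this concern (the link capacities and flow
    counts are invented for illustration, and this shows the reviewer's
    reading, not XCP's verified behavior):

```python
# Link L1 (100 Mbps) carries four flows; two of them continue over a
# downstream bottleneck L2 (20 Mbps).
l1_capacity, l2_capacity = 100.0, 20.0

bottlenecked = l2_capacity / 2    # each L2-bound flow is capped at 10
fair_share_l1 = l1_capacity / 4   # equal sharing on L1 targets 25 each
unconstrained = fair_share_l1     # the other flows stop at their "fair" share

used = 2 * bottlenecked + 2 * unconstrained
idle = l1_capacity - used         # capacity left unclaimed on L1
```

    Under this reading, 30 Mbps of L1 sits idle even though two flows could
    use it, unless the bandwidth-shuffling mechanism reclaims it.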


    Their argument that agents at the edges could monitor for misbehaving
    sources seems incomplete. The monitoring relies on the header values for
    RTT and congestion window, both of which the source sets and could
    presumably fake.
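    Concretely, the per-packet congestion header the paper describes carries
    these fields (field names follow the paper; the update function is a
    simplified sketch of mine). The first two are written by the sender, and
    an edge monitor has no independent way to verify them:

```python
from dataclasses import dataclass

@dataclass
class XCPHeader:
    h_cwnd: float      # sender's current congestion window -- sender-set
    h_rtt: float       # sender's RTT estimate -- sender-set
    h_feedback: float  # initialized by the sender, overwritten by routers

def router_update(hdr, computed_feedback):
    """Each router along the path reduces H_feedback to its own computed
    value if that is more restrictive, so the sender receives the
    feedback of the tightest bottleneck."""
    hdr.h_feedback = min(hdr.h_feedback, computed_feedback)
    return hdr
```

    Since the routers' per-flow calculations take H_cwnd and H_rtt as input, a
    source that inflates its reported RTT could bias the feedback in its favor.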


    The paper presents an interesting alternative to TCP. It takes a different
    approach to congestion control, seems to perform well, and could circumvent
    problems that will hit TCP as bandwidths increase.




    This archive was generated by hypermail 2.1.6 : Wed Oct 20 2004 - 00:52:08 PDT