Reading Review 10-25-2004

From: Craig M Prince (cmprince@cs.washington.edu)
Date: Mon Oct 25 2004 - 05:47:30 PDT

  • Next message: T Scott Saponas: "Review of Analysis and Simulation of a Fair Queuing Algorithm"

    Reading Review 10-25-2004
    -------------------------
    Craig Prince

    The paper titled "Analysis and Simulation of a Fair Queueing Algorithm"
    addresses the issue of fairness in gateway routers in the presence of
    malicious adversaries. The paper correctly recognizes that current
    congestion control mechanisms depend entirely on the end hosts of the
    network; misbehaving end hosts can therefore cause congestion control
    to fail, harming the well-behaved hosts. The paper cites three main
    motivations for changing the queueing behavior in gateways -- "fair
    allocation of bandwidth, lower delay for sources using less than their
    full share of bandwidth, and protection from ill-behaved sources". The
    authors propose a new round-robin style flow-servicing gateway
    algorithm to attain these three goals.

    Their algorithm achieves the first goal by tracking flows separately
    and servicing packets from each flow in round-robin order (with some
    book-keeping to ensure that flows sending large packets do not get
    more bandwidth). Since all flows are serviced in turn, no single flow
    can capture more than its share. The second goal is achieved by giving
    a slight preference in the round-robin to flows with smaller total
    bandwidth. The reason for this is to give good latency to interactive
    applications (e.g. telnet sessions) while giving good overall
    throughput to bandwidth-intensive applications (e.g. ftp sessions).
    The final goal is achieved because the algorithm tracks flows
    separately and drops packets first from flows consuming more than
    their share of the bandwidth, so trying to overwhelm a gateway only
    gets your own packets dropped.
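    The book-keeping idea above can be sketched in a few lines. This is my
    own simplified illustration, not the paper's algorithm: each packet is
    stamped with a per-flow "finish time" (previous finish time plus packet
    size, approximating bit-by-bit round robin), and the gateway transmits
    in finish-time order. It assumes all packets are already queued, with
    no interleaved arrivals or the paper's promptness term.

    ```python
    # Simplified sketch of finish-time-based fair queueing (illustrative only).
    import heapq
    from collections import defaultdict

    def fair_queue(arrivals):
        """arrivals: list of (flow_id, packet_size) in arrival order.
        Returns packets in the order a gateway would transmit them,
        assuming everything is queued up front."""
        last_finish = defaultdict(int)  # finish time of each flow's previous packet
        heap = []
        for seq, (flow, size) in enumerate(arrivals):
            # A packet "finishes" one virtual transmission after its
            # flow's previous packet finished.
            finish = last_finish[flow] + size
            last_finish[flow] = finish
            heapq.heappush(heap, (finish, seq, flow, size))
        # Transmit in increasing finish-time order.
        return [(flow, size) for _, _, flow, size in
                (heapq.heappop(heap) for _ in range(len(heap)))]

    # A flow sending large packets does not crowd out a flow sending
    # small ones -- bandwidth, not packet count, is equalized:
    print(fair_queue([("ftp", 1000), ("ftp", 1000),
                      ("telnet", 100), ("telnet", 100)]))
    # → [('telnet', 100), ('telnet', 100), ('ftp', 1000), ('ftp', 1000)]
    ```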

    I liked that this article broke the problem of fairness down into its
    constituent components and addressed each of these in turn. I also
    liked that the fair queueing algorithm implicitly provides a degree of
    quality of service by lowering delay for low-bandwidth connections --
    since most applications are either latency-sensitive and low-bandwidth
    or latency-tolerant and high-bandwidth. This approach was clever.

    I had several issues with this paper. First, the authors claim that
    their algorithm provides protection from malicious end hosts; however,
    they admit that it is only a partial solution. Since different
    source-destination pairs count as different flows, it is easy for a
    malicious user to open many connections to different hosts
    simultaneously -- starving the legitimate connections. If they wanted
    to provide a solution for misbehaving hosts, then they should have
    given a complete one.
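    The loophole is easy to make concrete with a little arithmetic (the
    numbers and function here are my own hypothetical illustration, not
    from the paper): under per-(source, destination) fairness, a host that
    opens N flows claims N equal shares of the link.

    ```python
    # Hypothetical illustration of the many-flows loophole in per-pair
    # fair queueing: each flow gets an equal share of the link.
    def attacker_share(attacker_flows, honest_flows):
        """Fraction of link bandwidth captured by the attacker when
        every (source, destination) pair counts as its own flow."""
        return attacker_flows / (attacker_flows + honest_flows)

    # An attacker opening 50 flows against one well-behaved flow:
    print(attacker_share(50, 1))  # → roughly 0.98 of the link
    ```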

    The second issue I had with this paper is that in all of their
    simulations they allow the system to "stabilize" before taking
    measurements. This seems unreasonable, since real networks are always
    in flux and such systems can exhibit strange transient dynamics; this
    makes the new protocol/algorithm difficult to evaluate. I would be
    more convinced if some of the experiments had been driven by real
    network traces.

    I think this paper provides insight into the levels of evidence we
    need to give when proposing new algorithms/protocols. First, there
    needs to be some mathematical basis behind the algorithm, which we see
    in this paper (although it is not rigorous). Second, there are
    simulations of the protocol. Simulations generally come in two forms:
    toy-network and real-network simulations. Each of these has its uses
    and is necessary. The "toy-network" simulations are designed to test
    the limits and corner cases of the algorithm; these are needed to show
    that the algorithm is robust, and this paper focuses on them.
    Real-network simulations (which this paper didn't do) show that the
    algorithm behaves as expected even under the dynamics of a real
    network. All three forms of evidence seem necessary for a good paper.


    This archive was generated by hypermail 2.1.6 : Mon Oct 25 2004 - 05:47:31 PDT