Review of "Analysis and Simulation of a Fair Queueing Algorithm

From: Alan L. Liu (aliu@cs.washington.edu)
Date: Mon Oct 25 2004 - 02:03:13 PDT

            The paper presents a "fair" queuing algorithm for routers that
    approximates bit-by-bit round robin scheduling. The goal is to maximize
    useful bandwidth while not starving any "user," where a user can be
    defined variably (e.g., as source-destination pairs).
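        To make the mechanism concrete, here is a minimal sketch, in
    Python, of the finish-time bookkeeping FQ uses to approximate
    bit-by-bit round robin. The names are my own, and the round-number
    update R(t) from the paper is elided:

        import heapq

        class FairQueue:
            def __init__(self):
                self.round = 0.0       # R(t): bit-by-bit rounds so far (update elided)
                self.last_finish = {}  # per-flow finish time of the previous packet
                self.heap = []         # queued packets ordered by virtual finish time
                self.seq = 0           # tie-breaker so the heap never compares packets

            def enqueue(self, flow_id, length, packet):
                # F_i = max(F_{i-1}, R(t)) + P_i: the packet's virtual finish time
                finish = max(self.last_finish.get(flow_id, 0.0), self.round) + length
                self.last_finish[flow_id] = finish
                heapq.heappush(self.heap, (finish, self.seq, packet))
                self.seq += 1

            def dequeue(self):
                # Transmit whichever packet would finish first under bit-by-bit RR
                return heapq.heappop(self.heap)[2] if self.heap else None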
        The paper argues that depending on flow control at end hosts is
    insufficient for fairness: misbehavior is rewarded, while a compliant
    host, or one that sends few packets, is hindered by first-come,
    first-served scheduling. One of the paper's strengths is in pointing
    out FCFS's vulnerabilities to abuse and showing how FQ overcomes
    them. For instance, flooding the router with packets actually ends up
    penalizing a host with a broken flow control scheme, because even
    when the router drops a packet, the algorithm still counts it toward
    the scheduling of that flow's next packet. Another problem the
    algorithm deals with is the need to service different packet-sending
    profiles, such as continuous file transfer and occasional telnet
    usage. Under FCFS, telnet sessions face starvation in the face of
    bandwidth hogs; under FQ, they get prompt service.
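        One plausible way to implement that penalty, extending the
    FairQueue sketch above (this is my guess at the bookkeeping, not the
    paper's exact policy), is to advance the flow's finish-time state
    even when the packet is dropped, so a flooder mostly delays its own
    later packets:

        def enqueue_or_drop(fq, flow_id, length, packet, buffer_full):
            # Charge the flow for the packet whether or not it is queued
            finish = max(fq.last_finish.get(flow_id, 0.0), fq.round) + length
            fq.last_finish[flow_id] = finish
            if not buffer_full:
                # Only packets that fit actually occupy buffer space; a
                # dropped packet was still counted against the flow above
                heapq.heappush(fq.heap, (finish, fq.seq, packet))
                fq.seq += 1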
        I like how the paper presents several edge cases and shows how
    the different queuing algorithms handle them. The idea is to
    demonstrate FQ's flexibility across many different situations rather
    than guess at what a typical one would be; if FQ can be shown to
    perform well over a wide range of scenarios, it is more likely a
    robust algorithm. Unfortunately, using only edge cases has a
    downside: the scenarios can seem too artificial. The paper would have
    been more convincing if actual traces had been used alongside the
    synthetic cases. Relying on edge cases also raises the question of
    whether all of them are actually accounted for, which is not always
    apparent; overlooking a case that occurs in practice would undermine
    the robustness claim.
        One problem with FQ is that it relies heavily on the assumption
    that a "user" can be defined. The authors themselves admit that no
    single definition of a user is appropriate in all cases. A
    source-only definition might penalize bigger sources that serve more
    people, while a source-destination pair definition is susceptible to
    a malicious host inventing destinations in order to degrade the
    service other users receive. The problem is fundamental -- can
    network administrators really hope to define a "user," tell their
    routers to use that definition, and achieve fairness under all
    circumstances? I suspect not. A further problem is how to define and
    identify users across administrative domains. It seems that no matter
    how smart a router is, what it needs to know about who is entitled to
    what level of service quickly gets out of hand. Therefore, it is hard
    to see how FQ can scale while still providing any guarantees.
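        To illustrate the trade-off, here is a hypothetical
    flow-classification function; the policy names and packet fields are
    my own, not the paper's:

        def flow_key(pkt, policy):
            if policy == "source":
                # Simple, but penalizes a busy server that legitimately
                # serves many clients: all its traffic shares one queue.
                return (pkt.src,)
            if policy == "source-destination":
                # Finer-grained, but a malicious source can invent
                # destinations to claim many queues' worth of bandwidth.
                return (pkt.src, pkt.dst)
            raise ValueError("unknown policy: " + policy)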
        Despite these problems, I feel that the paper was good because it
    brought up many flaws in depending on end hosts to correctly
    implement flow control. The possibility of exploiting these
    weaknesses is still a problem today. Again, the question comes up of
    whether a datagram model, as opposed to a virtual circuit Internet,
    is the right approach given our needs and uses. Which is more
    important: utilizing as much bandwidth as possible at all times, or
    being able to provide performance guarantees?

