From: Chuck Reeves (creeves@windows.microsoft.com)
Date: Sun Oct 24 2004 - 22:39:14 PDT
The paper, "Analysis and Simulation of a Fair Queueing Algorithm" was
written in 1989 by a number of researchers from Xerox PARC. Extending
the previous of work of Nagle (first-come-first-serve FCFS), this paper
presents a fair queueing (FQ) algorithm that provides fair allocation
and is resistant to ill behaved sources. The algorithm
The paper's strengths include its detailed description of the problems
with existing approaches and its consideration of a wide variety of
source behaviors. Specifically, Section 2.1 clearly identifies
weaknesses in FCFS (its failure to account for packet lengths and for
ill-behaved sources), and the simulations in all scenarios of Section 4
use a variety of flow control algorithms. Additionally, I thought the
authors were successful in showing, through their algorithm and testing,
that this approach effectively acts as a virtual firewall against
ill-behaved sources. The graph in Figure 1 provides a distinct contrast
between FCFS and FQ.
While the mathematical equations provided in Section 2.2 of the paper
were interesting, I thought they did a poor job of describing the
behavior of the implemented algorithm. The textual summaries seemed more
concrete and effective, e.g.:
"Our packet-by-packet transmission algorithm is simply defined by the
rule that whenever a packet finishes transmission, the next packet sent
is the one with the smallest value of F(i,alpha)"
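To make that rule concrete, here is a small Python sketch of how I read
the packet-by-packet algorithm. The FairQueue class, the flow names, and
the way the round number R(t) is advanced are my own illustrative
assumptions, not the authors' implementation: the paper drives R(t) with
a bit-by-bit round-robin emulation, whereas I simply set it to the
finish tag of the packet in service.

    import heapq
    from itertools import count

    class FairQueue:
        def __init__(self):
            self._heap = []          # pending packets, ordered by finish tag
            self._tie = count()      # stable tiebreak for equal finish tags
            self._last_finish = {}   # F(i-1, alpha) for each flow alpha
            self._round = 0.0        # R(t), the virtual round number

        def enqueue(self, flow, length):
            # F(i, alpha) = max(F(i-1, alpha), R(t)) + P(i, alpha),
            # with P(i, alpha) taken as the packet length.
            start = max(self._last_finish.get(flow, 0.0), self._round)
            finish = start + length
            self._last_finish[flow] = finish
            heapq.heappush(self._heap, (finish, next(self._tie), flow, length))

        def dequeue(self):
            # The quoted rule: send the packet with the smallest F(i, alpha).
            finish, _, flow, length = heapq.heappop(self._heap)
            # Simplified R(t) update (my assumption, not the paper's scheme).
            self._round = finish
            return flow, length

    q = FairQueue()
    for _ in range(3):
        q.enqueue("a", 1000)   # flow "a" floods with large packets
    q.enqueue("b", 100)        # flow "b" sends one small packet
    print([q.dequeue()[0] for _ in range(4)])   # ['b', 'a', 'a', 'a']

Running this with one flooding flow and one well-behaved flow shows the
firewall behavior discussed above: flow "b"'s single small packet earns
the smallest finish tag and is transmitted ahead of flow "a"'s backlog.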
The simulations certainly did not establish this approach as a perfect
solution: Scenario 4 shows an uneven distribution of throughput in the
presence of some older flow control protocols.
This technology remains pertinent today in that, if implemented, it
would provide a measure of resistance to many denial-of-service attacks,
especially those originating from a specific branch of the network.