From: Kevin Wampler (wampler@cs.washington.edu)
Date: Sun Oct 24 2004 - 22:43:02 PDT
In "Analysis and Simulation of a Fair Queuing Algorithm" the authors
define and describe the motivations for a fair queuing algorithm in
Internet routers, as opposed to the simpler first-come-first-served
queuing algorithm. They also present benchmark simulations of the
performance of these queuing algorithms in conjunction with various
congestion control mechanisms.
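To make the contrast concrete, here is a minimal sketch (not the paper's
exact algorithm) of the bit-by-bit round-robin idea behind fair queuing:
each packet gets a virtual finish tag, and packets are sent in increasing
tag order, so one busy flow cannot starve the others. For simplicity it
assumes all packets are already queued, so the round number R(t) is zero.

```python
from collections import defaultdict

def fair_queue_order(packets):
    """Order packets by bit-by-bit round-robin finish tags.

    packets: list of (flow_id, size) in arrival order (all assumed
    backlogged before transmission starts, for simplicity).
    Returns the flow_ids in transmission order.
    """
    finish = defaultdict(float)  # last finish tag per flow
    tagged = []
    for i, (flow, size) in enumerate(packets):
        # Finish tag: this flow's previous finish tag plus packet size.
        finish[flow] += size
        tagged.append((finish[flow], i, flow))
    # Transmit in increasing finish-tag order (arrival index breaks ties).
    return [flow for _, _, flow in sorted(tagged)]

# FCFS would send A,A,A,A,B; fair queuing lets B's packet go second:
order = fair_queue_order([("A", 1), ("A", 1), ("A", 1), ("A", 1), ("B", 1)])
# order == ["A", "B", "A", "A", "A"]
```

The tie-breaking on arrival index keeps the ordering deterministic; a real
router would of course compute tags incrementally as packets arrive.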
As the authors describe, the primary motivation to use a fair queuing
algorithm is to prevent certain sources from hogging the available
bandwidth at a router. This is certainly a desirable property for a
router to have, although the actual details can be a bit tricky. For
instance, as the authors mention, a user can be defined in various ways:
as a source, a source-destination pair, a process, or a pair of
source/target routers. The paper's choice to define a user as a
source-destination pair seems reasonable, although it does allow
malicious groups of users to consume the lion's share of the bandwidth
(as in DDoS attacks). It is unlikely, however, that any local queuing
algorithm could fully prevent such attacks without also diminishing
useful, well-intentioned functionality.
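The different "user" definitions the authors consider amount to different
ways of keying a packet to a queue. A small sketch, with hypothetical
header field names (the paper does not specify a packet format):

```python
def flow_key(pkt, granularity="src-dst"):
    """Map a packet to the 'user' whose queue it joins.

    Each granularity corresponds to one of the user definitions the
    authors discuss; "src-dst" is the paper's choice.
    """
    if granularity == "src":
        return pkt["src"]
    if granularity == "src-dst":
        return (pkt["src"], pkt["dst"])
    if granularity == "process":
        # Approximated here by the port pair on top of the address pair.
        return (pkt["src"], pkt["dst"], pkt["src_port"], pkt["dst_port"])
    raise ValueError(granularity)

pkt = {"src": "h1", "dst": "h2", "src_port": 1000, "dst_port": 80}
key = flow_key(pkt, "src-dst")  # ("h1", "h2")
```

Note how the src-dst choice creates the loophole mentioned above: a
source that sprays traffic at many destinations (or a colluding group of
sources) obtains many queues, and hence many fair shares.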
The analysis of the queuing algorithms in this paper is notable in that
it covers not just the queuing algorithm itself, but also its
interaction with several congestion control mechanisms, which is
potentially useful information. It is interesting to see that FQ alone
is not enough to provide good congestion control. This somewhat
diminishes its usefulness as protection from misbehaving sources, but
congestion control is probably not the job of a queuing algorithm
anyway.
This archive was generated by hypermail 2.1.6 : Sun Oct 24 2004 - 22:43:02 PDT