From: Pravin Bhat (pravinb@u.washington.edu)
Date: Sun Oct 24 2004 - 21:32:49 PDT
Paper Summary: The paper proposes enhancements to an existing queuing algorithm
originally proposed by Nagle. The enhancements are targeted at providing
a fairer distribution of bandwidth among competing flows at a router, while
rewarding low-frequency flows with lower latency and protecting well-behaved
flows from malicious users.
The Fair Queuing algorithm was designed to fix the major issues with FCFS,
the queuing algorithm of choice at the time. FCFS combines bandwidth, buffer
and latency allocation into a single policy. Though easy to implement, FCFS
places too much faith in the goodwill of sources to self-regulate their
respective flows. It also maintains no "soft" flow state, which keeps it from
giving preferential treatment to deserving flows - e.g. lower delays for
telnet-like applications.
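For concreteness, here is a minimal Python sketch of the core packet-selection
rule as I understand it. The class name and the crude virtual-clock update are
my own simplifications, not the paper's pseudocode; the paper derives the
round number R(t) much more carefully.

    import heapq

    class FairQueue:
        """Sketch of Fair Queuing's packet selection.

        Each arriving packet is stamped with a finish number
            F = max(last_finish[flow], R) + packet_size
        approximating when an ideal bit-by-bit round-robin server
        would finish sending it. Packets are transmitted in
        increasing finish-number order.
        """

        def __init__(self):
            self.heap = []           # (finish_number, seq, flow_id, size)
            self.last_finish = {}    # per-flow "soft" state
            self.round_number = 0.0  # crude stand-in for the paper's R(t)
            self.seq = 0             # FIFO tie-breaker

        def enqueue(self, flow_id, size):
            start = max(self.last_finish.get(flow_id, 0.0), self.round_number)
            finish = start + size
            self.last_finish[flow_id] = finish
            heapq.heappush(self.heap, (finish, self.seq, flow_id, size))
            self.seq += 1

        def dequeue(self):
            # Send the packet the bit-by-bit server would finish first.
            finish, _, flow_id, size = heapq.heappop(self.heap)
            self.round_number = finish  # simplistic virtual-clock advance
            return flow_id, size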
Paper Strengths:
Possibly the greatest advantage of Fair Queuing is its ease of deployment.
Routers can implement this queuing algorithm independently of other routers,
increasing the probability that every router will eventually get around to
adopting it. In fact, a router that implements Fair Queuing effectively blocks
malicious flows from propagating any further into the network.
The paper moves the responsibility for congestion avoidance and fair use of
resources to the routers, which are not only more trustworthy but also have
the advantage of a full view of all competing flows, and can hence do a
better job of regulating resources than a source operating with only a
partial view of the network.
The results presented in the paper are impressive. Fair Queuing disentangles
bandwidth, buffer and delay allocation from one another. Fairness is achieved
independent of packet sizes, arrival times and misbehaving flows. Over time
the algorithm approaches the fairness of an ideal bit-by-bit round-robin
server. It is also fairly easy to implement and computationally inexpensive.
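As a toy run of the sketch above (the traffic numbers are made up), a flow
sending small packets is not starved behind a flow flooding large ones, as it
would be under FCFS:

    fq = FairQueue()
    for _ in range(4):
        fq.enqueue("A", 1000)   # flow A floods with large packets first
    fq.enqueue("B", 100)        # flow B then sends two small packets
    fq.enqueue("B", 100)

    print([fq.dequeue()[0] for _ in range(6)])
    # -> ['B', 'B', 'A', 'A', 'A', 'A']
    # Under FCFS, B's packets would wait behind all of A's backlog.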
The paper is well written and supported by a detailed comparison with
other queuing approaches. The authors also provide an adequate theoretical
analysis of the algorithm.
Limitations and Areas for improvement:
The paper dodges an important question - when does one remove an inactive
flow from a router's internal state? This is a hard problem. Consider telnet
applications - these flows can stay alive for days. Fair Queuing tries to
provide lower delays for such flows by using their bandwidth-usage history.
If FQ evicts a flow from its memory after a fixed interval of inactivity,
any flow with a low transmission frequency will not receive the low delay
it deserves, as the sketch below illustrates.
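A hypothetical fixed-timeout eviction policy (the names and the constant are
mine, not the paper's) discards exactly the history a long-lived, low-rate
flow depends on:

    IDLE_TIMEOUT = 60.0  # seconds; any fixed cutoff has this problem

    def evict_idle_flows(last_finish, last_seen, now):
        """Drop per-flow state after a fixed idle interval.

        A telnet session that pauses longer than IDLE_TIMEOUT loses
        its history, and with it the low-delay treatment FQ would
        otherwise have given its next packet.
        """
        for flow_id in list(last_seen):
            if now - last_seen[flow_id] > IDLE_TIMEOUT:
                del last_seen[flow_id]
                last_finish.pop(flow_id, None)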
Like most queuing algorithms, Fair Queuing assumes that once a connection is
established, its packets are more or less routed along static routes. If
routing algorithms that use variable paths become dominant in the future,
each router could see as many flows as there are packets in its queue. At
that point an algorithm that works by maintaining "soft" per-flow state
will be ineffective.
Through no fault of its own, FQ suffers from the inertia that accompanies the
deployment of any technology over the distributed internet architecture.
The paper was written in 1989; fifteen years later, the queuing algorithm of
choice still remains FCFS. I suspect most operators either don't want to
spend money on more expensive routers, or ISPs are simply not willing to take
the slight performance hit that would come from FQ's book-keeping.
Future work and relevance:
It would be interesting to see whether the results from the simulation
actually hold in real-world networks. Obviously a lot of work remains in
actually getting everyone to adopt the algorithm.
Despite its age, the paper remains relevant as the internet continues to
scale at its current rate. Considering the number of DoS attacks prevalent
today, and the possibility that technological advances will place more
bandwidth at the network edge, giving malicious users greater leverage, the
techniques proposed in this paper might be crucial to ensuring the stability
of the internet.