Review of Analysis and Simulation of a Fair Queuing Algorithm

From: T Scott Saponas (ssaponas@cs.washington.edu)
Date: Mon Oct 25 2004 - 07:21:37 PDT

  • Next message: Lillie Kittredge: "fair queueing"

    Review by T. Scott Saponas

     

    "Analysis and Simulation of a Fair Queuing Algorithm" presents a gateway queuing strategy partially based on Nagle's algorithm that is fairer than FCFS in the presence of ill-behaved hosts and more effective in overloaded networks. The algorithm presented is a packet-by-packet transmission scheme that effectively simulates a bit-by-bit round robin approach to servicing flows. The authors show their method allocates bandwidth and controls delay fairly on a source-destination pair basis.
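    The packet-by-packet emulation of bit-by-bit round robin can be sketched as follows. This is a simplified illustration, not the paper's exact algorithm: real fair queueing tracks the round number R(t) against the set of active conversations, whereas here R is approximated by the finish tag of the last packet transmitted, and all class/field names are invented for the sketch.

```python
import heapq

class FairQueue:
    """Sketch of packet-by-packet fair queueing (simplified illustration).

    Each arriving packet is stamped with a finish tag
        F = max(last_finish[flow], R) + size,
    approximating when its last bit would leave under bit-by-bit round
    robin; packets are then transmitted in increasing finish-tag order.
    """

    def __init__(self):
        self.heap = []          # (finish_tag, seq, flow, size)
        self.last_finish = {}   # last finish tag per conversation
        self.round = 0.0        # crude stand-in for the round number R(t)
        self.seq = 0            # tie-breaker so ordering is stable

    def enqueue(self, flow, size):
        start = max(self.last_finish.get(flow, 0.0), self.round)
        finish = start + size
        self.last_finish[flow] = finish
        heapq.heappush(self.heap, (finish, self.seq, flow, size))
        self.seq += 1

    def dequeue(self):
        # Transmit the packet with the smallest finish tag.
        if not self.heap:
            return None
        finish, _, flow, size = heapq.heappop(self.heap)
        self.round = finish     # advance the round-number approximation
        return flow, size
```

    Even this crude version shows the fairness property: if flow A queues three packets before flow B queues one, B's packet is sent second rather than last, because B's finish tag reflects only one packet's worth of service.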

    One of the particularly interesting ideas put forth in this paper is the use of simulation to understand the interplay between different queuing and flow control algorithms. Several simulation results are presented showing that their algorithm does better than FCFS no matter which flow/congestion control methods are in use. But their results also show that queuing algorithms alone cannot adequately deal with congestion: good flow and congestion control algorithms are still essential to making the network run well, where running well is measured by effective use of the available bandwidth by all hosts wishing to communicate.

    One of the drawbacks of the described algorithm is that allocation is based on source-destination pairs (conversations). This is limiting because a different kind of malicious host than the one used in their simulations could send many small bursts of packets to many hosts and clog things up. Similarly, a host could send a large amount of data to a single host (or multiple hosts) while constantly changing the source address in its packets to random fake addresses. However, as the authors mention, such malicious behavior is unlikely to actually benefit the malicious host in any productive way, so it is less important than other considerations.
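    The spoofing concern can be illustrated with a small sketch (the packet fields and host names here are hypothetical): because queues are keyed on the (source, destination) pair, a host that forges source addresses acquires many "conversations" and therefore many fair shares.

```python
from collections import defaultdict

def conversation_key(packet):
    # Fair queueing as described allocates per source-destination pair.
    return (packet["src"], packet["dst"])

queues = defaultdict(list)

# One honest host versus one host forging four different source addresses.
honest = [{"src": "H", "dst": "D"} for _ in range(4)]
spoofer = [{"src": f"S{i}", "dst": "D"} for i in range(4)]  # forged sources

for pkt in honest + spoofer:
    queues[conversation_key(pkt)].append(pkt)

# The honest host holds 1 of 5 conversations while the spoofer holds 4,
# so round-robin service over conversations gives the spoofer 4x the share.
print(len(queues))
```

    As the review notes, this mostly wastes the spoofer's own bandwidth on the paths involved, which is why the authors treat it as a secondary concern.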

    In the early years of the Internet, all hosts were trusted and presumed to be running stable, well-tested transmission control code. The reality of today's Internet is that there exist malicious hosts that want to use an unfair share of bandwidth, as well as malicious users who want to create havoc in the network. Nor can one assume that the network code running on all hosts is the latest and greatest, or even correct. This reality makes papers such as this one, which provide alternative network solutions that work in the face of ill-behaved hosts, very relevant.


    This archive was generated by hypermail 2.1.6 : Mon Oct 25 2004 - 07:21:43 PDT