Review #5: Random Early Detection Gateways for Congestion Avoidance

From: Rosalia Tungaraza (rltungar@u.washington.edu)
Date: Mon Oct 18 2004 - 00:00:28 PDT

    This paper is about RED (Random Early Detection), an algorithm for
    congestion avoidance performed by routers in an internetwork. Unlike
    end-node congestion control, in which the transport protocol (TCP) at
    the hosts detects and reacts to congestion, RED gives routers the
    ability to head off congestion before it occurs. However, this is
    mainly effective in internetworks where the transport protocols respond
    to "advice" from the routers. In short, each router keeps track of the
    average length of the packet queue in its buffer. When that average
    exceeds a pre-set minimum threshold, arriving packets are marked (or
    dropped) with a probability that grows with the average queue size;
    once the average exceeds a maximum threshold, every arriving packet is
    marked. A drop serves as a signal to the TCP sender responsible for
    that packet to reduce the number of packets it is sending across that
    link. In turn, this can prevent congestion within the router, since the
    number of packets it receives is reduced.
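
    To make the mechanism concrete, below is a minimal sketch of that
    marking decision, written in Python purely for illustration. It is not
    the authors' code; the parameter names (min_th, max_th, max_p, w_q)
    follow common RED usage, but the specific default values are my own
    assumptions.

        import random

        class REDQueue:
            def __init__(self, min_th=5.0, max_th=15.0, max_p=0.02, w_q=0.002):
                self.min_th = min_th  # minimum average-queue threshold (packets)
                self.max_th = max_th  # maximum average-queue threshold (packets)
                self.max_p = max_p    # marking probability reached at max_th
                self.w_q = w_q        # weight for the moving average
                self.avg = 0.0        # average queue length
                self.count = 0        # packets accepted since the last mark

            def on_arrival(self, queue_len):
                """Return True if the arriving packet should be marked/dropped."""
                # Exponentially weighted moving average of the instantaneous
                # queue length, so short bursts do not trigger marking.
                self.avg = (1 - self.w_q) * self.avg + self.w_q * queue_len
                if self.avg < self.min_th:
                    self.count = 0
                    return False      # queue is short: accept the packet
                if self.avg >= self.max_th:
                    self.count = 0
                    return True       # persistent congestion: mark every packet
                # Between the thresholds, the base probability grows
                # linearly with the average queue size.
                p_b = self.max_p * (self.avg - self.min_th) / (self.max_th - self.min_th)
                # Scale by the packets accepted since the last mark, which
                # spreads marks out rather than letting them cluster.
                self.count += 1
                p_a = p_b / max(1.0 - self.count * p_b, 1e-9)
                if random.random() < min(p_a, 1.0):
                    self.count = 0
                    return True
                return False

    A router would call on_arrival() once per incoming packet, dropping the
    packet (or setting an explicit congestion bit on it) whenever the call
    returns True.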

    One of the strengths of this paper is the algorithm it presents. I am
    in favor of this paper's algorithm as opposed to mechanisms where
    congestion is controlled solely by the transport layer. I think the
    latter mechanism has the potential to waste far more resources, and in
    the process be more detrimental to the applications relying on the lost
    packets, than the former. I say this because TCP's congestion control
    relies on losing packets in order to detect congestion in the network.
    Since it has no way of telling which packets will meet a congested
    router, any packets (or flows of packets), in any quantity, could be
    lost at any time. The RED algorithm, on the other hand, gently drops a
    few packets (hopefully from different sources) before congestion has
    actually set in. This gives the sources of those packets time to reduce
    their subsequent sending rates and consequently minimizes the total
    packet loss; the toy simulation below illustrates why the drops tend to
    be spread across sources.
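
    The following toy simulation (my own, not from the paper) makes the
    "different sources" point concrete: because RED marks arriving packets
    at random, each flow is hit roughly in proportion to its share of the
    traffic, rather than one unlucky flow absorbing a burst of losses. The
    flow names, rates, and marking probability are made-up assumptions.

        import random

        rates = {"A": 5, "B": 3, "C": 2}      # hypothetical packets per tick
        marks = {flow: 0 for flow in rates}

        # Interleave arrivals in proportion to each flow's sending rate.
        arrivals = [flow for flow, r in rates.items() for _ in range(r)] * 10_000

        for flow in arrivals:
            if random.random() < 0.01:        # assumed small marking probability
                marks[flow] += 1

        total = sum(marks.values()) or 1
        for flow in rates:
            # Expected shares of the marks: A ~0.5, B ~0.3, C ~0.2,
            # matching each flow's share of the offered traffic.
            print(flow, marks[flow] / total)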

    A minor drawback of this paper is that the algorithm seems to have been
    targeted specifically at TCP/IP networks (e.g., the desired response to
    a dropped packet is a reduced window size, and the design works to
    avoid TCP's global synchronization). This means that some features of
    the algorithm may not carry over to other systems. One way this could
    be improved is by adapting the algorithm to make it equally effective
    for congestion prevention in other systems.

    As the authors suggest, future research in this area could focus on
    determining the optimal average queue size for maximizing throughput
    and minimizing delay across various network configurations.

