congested paper...

From: Scott Schremmer (scotths@cs.washington.edu)
Date: Mon Oct 18 2004 - 02:34:12 PDT


            The original Internet did not have the ability to deal with
    congestion. It became apparent in the mid-1980s that under certain
    conditions Internet links could be greatly slowed by congestion.
    This paper attempts to explain and quantify this problem and to present
    several algorithms for dealing with it.
            The paper does a good job of describing the "slow-start"
    algorithm. Using received ACKs to determine the rate at which to send
    packets generally worked, but left open the question of how to choose
    an appropriate rate at which to begin sending. The slow-start algorithm
    deals with this by increasing the maximum number of outstanding packets
    each time an ACK is received. This allows a connection to gradually get
    up to speed without putting too much strain on any part of the network.
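    The growth described above can be sketched in a few lines. This is a
    minimal illustrative simulation, not the paper's actual implementation:
    the congestion window (cwnd) grows by one segment per ACK, which roughly
    doubles it each round trip until it reaches a threshold (the name
    ssthresh and the values below are assumptions for illustration).

```python
def slow_start(cwnd=1, ssthresh=16, rtts=6):
    """Simulate slow start; return cwnd after each round trip.

    All parameter names and values are illustrative, not from the paper.
    """
    history = [cwnd]
    for _ in range(rtts):
        acks = cwnd                  # one ACK arrives per segment sent this RTT
        for _ in range(acks):
            if cwnd < ssthresh:
                cwnd += 1            # exponential-growth phase: +1 per ACK
            else:
                break                # would hand off to congestion avoidance
        history.append(cwnd)
    return history

print(slow_start())                  # [1, 2, 4, 8, 16, 16, 16]
```

    The per-ACK increment is what makes the window double every round trip:
    each segment sent generates an ACK, and each ACK grows the window by one.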
            The paper is hindered by an assumption that many seemed to make
    at the time: that end nodes will behave correctly and as expected. As
    denial-of-service attacks illustrate, this assumption turned out to be
    substantially false. Perhaps this initial assumption put us in the
    position we are in today, in which the issue is difficult to deal with.
            The Internet has become a much more complicated network. This
    paper is still relevant: it marked the beginning of congestion control
    and has probably had significant influence on the solutions to the
    congestion control problem in use today.



    This archive was generated by hypermail 2.1.6 : Mon Oct 18 2004 - 02:34:13 PDT