Review #8: Explicit allocation of best-effort packet delivery service

From: Rosalia Tungaraza (rltungar@u.washington.edu)
Date: Wed Oct 27 2004 - 00:12:01 PDT

    This paper focuses on modifying the "best effort" packet delivery service
    that the Internet offers so as to better assure users (who are willing to
    pay for it) that their packets will be delivered. The method proposed by
    the authors is based on marking packets as either "in" or "out" depending
    on whether they fall within a service profile the user has paid for.
    Packets that conform to the profile are marked "in" and have a very high
    chance of being delivered to their destinations in times of congestion.
    A packet marked "out", on the other hand, has a high probability of being
    dropped during congestion. I say "high chance" rather than "guarantee"
    for packets marked "in" because in cases where the routers are flooded
    with "in" packets and there is congestion, some of those packets will
    have to be dropped despite the fact that they are "in" packets.
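
    To make the marking step concrete, below is a minimal sketch of an edge
    profile meter, assuming the service profile is a simple token bucket;
    the rate and depth parameters are illustrative values I chose, not
    numbers from the paper. Packets that fit within the bucket are marked
    "in"; the rest are marked "out" but still forwarded.

        import time

        class ProfileMeter:
            def __init__(self, rate_bps, bucket_bytes):
                self.rate = rate_bps        # contracted rate, bytes/second
                self.depth = bucket_bytes   # burst allowance, bytes
                self.tokens = bucket_bytes
                self.last = time.monotonic()

            def mark(self, pkt_len):
                now = time.monotonic()
                # refill tokens for the elapsed interval, capped at the depth
                self.tokens = min(self.depth,
                                  self.tokens + (now - self.last) * self.rate)
                self.last = now
                if self.tokens >= pkt_len:
                    self.tokens -= pkt_len
                    return "in"    # conforms to the paid profile
                return "out"       # exceeds the profile; dropped first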

    Profile meters placed at the edges of the network tag each packet as in
    or out of its profile; routers inside the network then use these tags
    when deciding which packets to drop or forward. The location of these
    meters depends on whether the network has adopted a receiver-based
    mechanism, a sender-based one, or a combination of the two. Moreover,
    the authors propose the use of the RIO algorithm to determine which
    packets to drop. RIO is a modification of the RED algorithm in which
    "in" and "out" packets are given different probabilities of being
    dropped, with "out" packets facing a much higher drop probability than
    "in" packets.
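
    As a rough sketch of that drop decision, RIO runs two RED curves: "in"
    packets are dropped based on the average queue of "in" packets alone,
    while "out" packets are dropped based on the average total queue. The
    threshold and probability values below are illustrative assumptions of
    mine, not the paper's.

        import random

        def red_drop_prob(avg, min_th, max_th, max_p):
            # standard RED curve: no drops below min_th, certain drop at
            # max_th, and a linearly increasing probability in between
            if avg < min_th:
                return 0.0
            if avg >= max_th:
                return 1.0
            return max_p * (avg - min_th) / (max_th - min_th)

        def rio_should_drop(tag, avg_in, avg_total):
            if tag == "in":
                # gentle curve: high thresholds, small max drop probability
                p = red_drop_prob(avg_in, min_th=40, max_th=70, max_p=0.02)
            else:
                # aggressive curve: "out" packets are dropped earlier and
                # far more often
                p = red_drop_prob(avg_total, min_th=10, max_th=30, max_p=0.5)
            return random.random() < p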

    One success of this paper, and thus of the algorithm, is that it allows
    users to choose the priority with which their packets are sent, with a
    relatively high assurance that their high-priority packets will reach
    their destinations. This is not the case under the plain "best effort"
    scheme, where the Internet tries its best to transport a user's packets
    but can never provide any assurance that a packet will actually be
    delivered to its destination. Moreover, unlike the current Internet,
    this algorithm provides a means of dealing with users who do not respond
    to warnings to reduce the number of packets they are sending. Such
    users' packets are marked "out" and are thus continually
    (preferentially) dropped during congestion.

    One idea I would have liked them to expand on is how cost-effective they
    think it would be to change the current network to support their
    proposed algorithm. From my understanding of the paper, the major
    changes are confined to routers within ISPs and to the profile meters at
    the network edges. However, I wonder about the various software/hardware
    changes that would need to take place among hosts/users. Would the
    present Internet be likely to adopt this algorithm?

    As the authors suggest, one possible piece of future work would be to
    implement and evaluate this algorithm on a real testbed.

