Review of "A Revised ARPANET Routing Metric"

From: Seth Cooper (scooper@cs.washington.edu)
Date: Mon Nov 01 2004 - 00:49:23 PST

  • Next message: Scott Schremmer: "Khanna et al."

            The purpose of this paper is to present a new link cost metric to use
    when calculating routing tables. Within the Shortest Path First (SPF)
    algorithm, it replaces delay-based SPF (D-SPF) with Hop-Normalized SPF
    (HN-SPF). D-SPF calculates link costs by averaging the delay on a link
    over an interval, subject to a lower bound. This performs well when a
    network is lightly loaded, but when it is heavily loaded, it can lead
    to oscillation between paths and poor use of network bandwidth.
    Because routing tables are calculated from delay measured in a
    previous interval, D-SPF relies on that delay staying close to the
    delay in the current interval once all routing tables have been
    updated. In a loaded network, this is not the case.
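            To make the oscillation concrete, here is a small Python sketch (my
    own illustration, not from the paper; the two-path topology, the delay
    floor, and the load penalty are invented numbers): because every
    router computes routes from the previous interval's averaged delay,
    all traffic piles onto whichever path looked idle, which makes that
    path the loaded one in the next interval.

        # Hypothetical two-path example of D-SPF oscillation. All traffic
        # follows the path whose measured delay in the PREVIOUS interval
        # was lower, and that path then becomes the loaded, high-delay
        # path reported for the next interval.
        BASE_DELAY = 10      # idle-path delay (the metric's lower bound), ms
        LOAD_PENALTY = 100   # extra delay on whichever path carries the load

        delay = {"A": BASE_DELAY, "B": BASE_DELAY}
        for interval in range(6):
            chosen = min(delay, key=delay.get)   # route on stale delay
            delay = {p: BASE_DELAY + (LOAD_PENALTY if p == chosen else 0)
                     for p in delay}             # chosen path becomes loaded
            print(interval, "all traffic ->", chosen, delay)
        # Output alternates A, B, A, B, ...: the paths flip every interval.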
            One strength of HN-SPF is that it helps to alleviate some of the
    greediness of the SPF algorithm. On a loaded network, instead of
    trying to give every route the best possible path, it attempts to give
    the average route a good path. Some routes have to take slightly worse
    paths, because if they did not, every route would end up on the same
    bad, heavily loaded path. Routes that can be diverted to slightly
    worse paths move off the main path first. In this way, HN-SPF moves
    toward a system in which what is best for each route is also what is
    best for all routes as a whole. This is a very desirable property of a
    system; it seems that some kind of Nash equilibrium may be appearing
    here.
            One weakness of HN-SPF is the designation of the minimum and maximum
    link costs reported. For instance, the paper mentions that a 56 kb/s
    link would have a minimum cost of 30 units and a maximum cost of 90
    units, and in general satellite links have higher bounds than
    terrestrial links, so terrestrial links will be preferred. But the
    exact costs are not well justified; they appear to be heuristics
    rather than values backed by solid testing.
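            For a sense of how such bounds act on the reported cost, here is a
    hedged Python sketch (the 30- and 90-unit bounds come from the figures
    quoted above; the function name, the smoothing weight, and the
    per-update change limit are my own illustrative assumptions, not the
    paper's exact mechanism or constants):

        def reported_link_cost(raw_delay_cost, prev_cost,
                               min_cost=30, max_cost=90,
                               smoothing=0.5, max_change=5):
            # Smooth the raw delay-based cost against the previously
            # reported cost, limit how far it may move in one update,
            # then clamp it into the link type's [min_cost, max_cost]
            # range so a congested link never looks worse than max_cost.
            smoothed = smoothing * raw_delay_cost + (1 - smoothing) * prev_cost
            limited = max(prev_cost - max_change,
                          min(prev_cost + max_change, smoothed))
            return max(min_cost, min(max_cost, limited))

    Clamping the top end is what keeps one congested link from dominating
    the shortest-path calculation, but it comes at the price of the
    coarse, heuristic bounds the review criticizes.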
            This paper is relevant because routing makes up the backbone of the
    Internet. Without routing, and automatic routing algorithms in
    particular, it would be very difficult for packets to get where they
    are going. It is important to know how and why routing tables are
    updated, what trade-offs have been made, and what could be improved.



    This archive was generated by hypermail 2.1.6 : Mon Nov 01 2004 - 00:49:28 PST