review of paper 15

From: Shobhit Raj Mathur (shobhit@cs.washington.edu)
Date: Sun Oct 31 2004 - 11:02:03 PST

  • Next message: Erika Rice: "Review 12-1"

    The Revised ARPANET Routing Metric
    ==================================

    This paper proposes a new metric for calculating link costs for routing in the ARPANET. The previous link metric was average packet
    delay, which performed well under low to moderate load but led to routing instabilities and wasted link and processor bandwidth in
    heavily loaded networks. The revised metric gives up the guarantee of shortest-delay paths under light traffic conditions in exchange
    for vastly improved performance under heavy traffic conditions.

    The paper starts by showing how the original routing algorithm and D-SPF proved insufficient to solve the ARPANET's routing problems,
    and then proposes a new algorithm called Hop-Normalized SPF (HN-SPF). HN-SPF addresses the shortcomings of D-SPF, which uses average
    delay as the link metric and tries to route all traffic through the shortest path, giving it several disadvantages under heavy
    traffic. The revised link metric has one goal in mind for heavy load conditions: "to give the average route a good path instead of
    attempting to give all the routes the best path". With this goal in mind, HN-SPF does not use delay directly as the link metric but
    rather a function of delay. The new metric imposes upper and lower bounds on the values a line can report, and the reported cost stays
    constant until the line's utilization rises above a threshold that depends on the line type. This makes HN-SPF routing reasonably
    sensitive to the queueing, transmission, and propagation delays of links at low utilizations, and insensitive to propagation and
    queueing delays at high utilizations. The modifications also include mechanisms that limit how much the reported cost of a link can
    change between updates and that suppress unnecessary updates when the cost changes too little.
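    The shape of this revised metric can be illustrated with a minimal sketch. The parameter values and the linear ramp below are
    illustrative assumptions, not the ones the paper derives per line type (the paper maps measured delay to utilization and tunes the
    bounds and threshold for each line type separately):

```python
# A hedged sketch of a bounded, hop-normalized link cost update.
# All constants here are hypothetical; the paper chooses them per line type.

def revised_cost(utilization, prev_cost,
                 base_cost=1.0,        # lower bound: one "hop" at low load
                 max_cost=3.0,         # upper bound: at most ~2 extra hops
                 util_threshold=0.5,   # load below which the cost stays flat
                 max_step=0.5):        # limit on change per routing update
    """Map a link's measured utilization to a bounded reported cost."""
    if utilization <= util_threshold:
        # Below the threshold the metric ignores queueing delay and
        # reports a constant, delay-independent cost.
        target = base_cost
    else:
        # Above the threshold the cost grows with load but is clamped,
        # so a congested link never looks worse than a few extra hops.
        overload = (utilization - util_threshold) / (1.0 - util_threshold)
        target = min(max_cost, base_cost + overload * (max_cost - base_cost))
    # The reported value may move only a bounded amount per update,
    # damping the oscillations seen with raw delay under D-SPF.
    step = max(-max_step, min(max_step, target - prev_cost))
    return prev_cost + step
```

    Note how a fully loaded link starting at the minimum cost takes several update cycles to reach the ceiling, which is exactly the
    gradual traffic shedding the review describes below.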

    Though HN-SPF successfully addresses the shortcomings of D-SPF, it is more of a hack than a complete solution. It worked in the
    ARPANET because of the network's small size and limited number of link types (only two are addressed in this paper); choosing
    normalization parameters for every link type would be impossible today. HN-SPF limits the cost of a link to be no greater than two
    additional hops in a homogeneous network, but no justification is given for these numbers: the values were chosen to fix the problems
    D-SPF posed and have no theoretical basis, so there is no reason to believe HN-SPF would not under-perform in some other network
    setup. I also believe HN-SPF would be slow to recover from router failures, since it would assign the failed link a cost three times
    the minimum cost and routes would then be shed only gradually over time. A more theoretical or mathematical justification is
    required, especially in the equilibrium section.

    Overall, HN-SPF was a viable alternative to D-SPF in the ARPANET. It satisfactorily addressed the problems of routing instability and
    wasted link and processor bandwidth at the time, but it is nowhere close to solving the routing problems of today's widely
    heterogeneous and dynamic Internet.


    This archive was generated by hypermail 2.1.6 : Sun Oct 31 2004 - 11:02:03 PST