Review of The Revised ARPANET Routing Metric

From: ssaponas (ssaponas@u.washington.edu)
Date: Mon Nov 01 2004 - 07:56:55 PST

    Review by T. Scott Saponas

     "The Revised ARPANET Routing Metric" presents a new link cost metric to fix the oscillation prone previous approach to routing.

    This paper's revised link metric is no longer raw delay but a function of delay. Specifically, it is a function that better reflects what the delay through a link would look like if more than one packet switched to that route. The effect is to shave traffic off crowded routes onto a new route in small increments, which lets the distribution of traffic stabilize slowly until load is balanced across the available routes and some equilibrium is reached. There are a variety of possible ways to do this normalization; the method chosen imposes upper and lower bounds on the cost under lighter loads and then relaxes those bounds slowly, so that heavily loaded routes can gradually shed traffic. The authors evaluated this approach by comparing round-trip delay in heavily loaded networks under the new metric against previous methods, and the comparison showed a significant improvement. They also checked the correctness of their algorithm by examining its expected equilibrium behavior and found that it makes sense in heavily loaded environments.
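
    To make the shedding-in-small-increments idea concrete, here is a small Python sketch of one way such a metric could work; the scaling, bounds, and step size are illustrative assumptions on my part, not the paper's actual formula. The point is only that the advertised cost is clamped between a floor and a ceiling and can only move a limited amount per reporting period, so a congested link sheds traffic a little at a time rather than all at once.

    def revised_link_cost(measured_delay_ms, prev_cost,
                          min_cost=1.0,    # floor: never cheaper than one "hop"
                          max_cost=3.0,    # ceiling: never worse than a few hops
                          max_step=0.5):   # largest allowed change per update
        """Map a raw delay measurement to a bounded, slowly moving link cost."""
        # Naive normalization: cost proportional to delay (placeholder scaling).
        target = measured_delay_ms / 100.0

        # Clamp the target into the allowed cost range.
        target = max(min_cost, min(max_cost, target))

        # Limit how far the advertised cost may move in one update, so a
        # heavily loaded link sheds traffic in small increments.
        step = max(-max_step, min(max_step, target - prev_cost))
        return prev_cost + step

    # A congested link whose raw delay would imply a huge cost only creeps
    # upward half a "hop" per update, giving other routers time to rebalance.
    cost = 1.0
    for delay_ms in [500, 500, 500, 120, 120]:
        cost = revised_link_cost(delay_ms, cost)
        print(round(cost, 2))   # 1.5, 2.0, 2.5, 2.0, 1.5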

    One problem with this approach is that, in a heavily loaded environment where a lot of traffic has very similar characteristics (say, a bunch of people at the same ISP downloading a new news video from CNN), it seems you will still get wild oscillations. If a mass of flows sees the same routing delay on its previous hops and is headed to the same server, then as soon as another link's advertised cost drops below the switching threshold for one flow, it drops below it for all of them. So in the case where there are two routes and neither has the capacity for all of that traffic, the whole mass will oscillate between the two paths, because the new metric cannot shed just a few of these flows at a time. Another downside is that the revised metric gives up some guarantees about shortest paths and minimum cost under medium loads, but the paper shows this cost is well offset by the benefits under high loads.
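
    A toy simulation makes this concern concrete. All of the numbers below are made up for illustration, and this is not code from the paper; it abstracts away the metric's details and just shows that when many indistinguishable flows react to the same advertised costs, they move as one block, so two paths that each lack capacity for the whole bundle keep trading the entire load back and forth.

    FLOWS = 10        # identical flows, each of size 1
    CAPACITY = 6      # neither path can carry all 10 flows

    load_a, load_b = FLOWS, 0    # everyone starts on path A

    for period in range(6):
        # Each path advertises a cost that grows once it is over capacity.
        cost_a = 1 + max(0, load_a - CAPACITY)
        cost_b = 1 + max(0, load_b - CAPACITY)

        # The flows are indistinguishable, so they all pick the same
        # (currently cheaper) path instead of shedding a few at a time.
        if cost_a > cost_b:
            load_a, load_b = 0, FLOWS
        elif cost_b > cost_a:
            load_a, load_b = FLOWS, 0

        print(f"period {period}: path A carries {load_a}, path B carries {load_b}")
        # Output alternates 0/10, 10/0, 0/10, ... -- the oscillation described above.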

    This paper makes an interesting and relevant observation: routing algorithms based simply on greedy shortest-path assignment won't work well under heavy load if routers report link cost in terms of what the first packet to switch to that path will experience. Instead, we have to come up with clever ways to report link cost closer to what the link would cost if some large number of flows switched to it, and then lower that number until all paths are absorbing a similar share of the traffic.


