Review of RLM

From: Tyler Robison (trobison@cs.washington.edu)
Date: Wed Nov 10 2004 - 00:27:38 PST

            This paper describes RLM, a multicast protocol that tries to adapt
    to the traffic in the networks along which it is sent; if the multicast
    goes through a congested router, only the lower layers will get through
    (if anything gets through), resulting in lower quality data for receivers
    on the other end. This is, in effect, a multicast version of expanding to
    fill the available bandwidth and backing off from congestion, but the
    problem is greatly complicated by the fact that the multicast is being sent
    to numerous receivers. Essentially, the multicast is sent in layers; each
    layer refines the data a bit, so the more layers received, the better the
    quality of the data. The receivers perform join-experiments to try to
    increase their layer subscription, and if an experiment fails (they notice
    packets being dropped while subscribed to the additional layer), they drop
    back to the previous subscription level and wait before trying again (the
    wait time grows exponentially). This allows different levels of quality to
    be sent out
    over different links, and allows for these levels to change over time as
    the traffic on the network changes.
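            To make the mechanism concrete, here is a rough Python sketch of
    the receiver-side loop as I picture it; the layer count, timer values, and
    loss check below are simplified placeholders of my own, not the actual
    parameters or loss-measurement machinery from the paper.

        import random
        import time

        class RLMReceiver:
            """Illustrative sketch of RLM's receiver-driven join-experiments.
            All constants here are assumptions, not the paper's values."""

            def __init__(self, max_layers=5, base_timer=0.5):
                self.subscription = 1          # always keep the base layer
                self.max_layers = max_layers
                # per-layer join-timer: a failed experiment at layer k backs
                # off only that layer's timer
                self.join_timer = {k: base_timer for k in range(2, max_layers + 1)}

            def loss_detected(self):
                # stand-in for the receiver's real packet-loss measurement
                return random.random() < 0.4

            def join_experiment(self):
                """Subscribe to one more layer; back off exponentially on loss."""
                if self.subscription >= self.max_layers:
                    return
                candidate = self.subscription + 1
                self.subscription = candidate      # start the experiment
                time.sleep(0.05)                   # observe the network briefly
                if self.loss_detected():
                    # failure: drop the layer and double its join-timer
                    self.subscription = candidate - 1
                    self.join_timer[candidate] *= 2
                # on success, simply keep the new layer

            def run(self, rounds=10):
                for _ in range(rounds):
                    nxt = min(self.subscription + 1, self.max_layers)
                    # wait out the back-off (capped so the demo finishes quickly)
                    time.sleep(min(self.join_timer[nxt], 1.0))
                    self.join_experiment()
                    print("subscribed layers:", self.subscription)

        if __name__ == "__main__":
            RLMReceiver().run()
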
            The overall concept, adapting to the current traffic, is a good
    idea; it should help avoid congestion while still providing quality data
    to those capable of receiving it. The scheme sounds decent, and apparently
    works out in simulation, but there seem to be a
    number of problems nonetheless. First of all, dropping layers restricts
    the data to media such as video and audio; one could not send an
    executable if some of the receivers end up missing portions of it.
    Certainly, video and audio are the more common uses, but this would close
    off other potential applications.
    Second, no real alternatives are presented to compare their adaptation
    algorithm against; exponential back-off is intuitively a good idea, and
    the shared-learning aspect shows that they considered the consequences of
    many receivers running experiments at once, but the method is pretty
    simple, and there's nothing to indicate that it's the best choice.
            Third, what about security? This is more of a general multicast
    issue than one specific to this paper, but what happens if some malicious
    entity receives a multicast packet, forges packets of its own, and passes
    them off as the actual data, causing them to be propagated to all receivers
    beyond a nearby router? I didn't see any security measures in the book or
    paper, and it seems like it could be easy for someone to corrupt the data
    being sent (receivers would end up with two versions of the data) and also
    to cause additional congestion via the multicast.
            Finally, the receiver is left with no control over the situation;
    even if they're willing to wait longer, buffering the full, high-quality
    data and watching or listening to it later, that's not an
    option. Once again, this is an issue with multicast itself, not RLM, but
    it seems relevant here given that layers of the data may be intentionally
    discarded along the way.

