From: Andrew Putnam (aputnam@cs.washington.edu)
Date: Wed Nov 10 2004 - 01:32:46 PST
Review of Receiver-Driven Layered Multicast
Steven McCanne, Van Jacobson, and Martin Vetterli
Summary: Receiver-driven Layered Multicast (RLM) is proposed as an
efficient way to enable multicast transmission at different rates for
different receivers: the source transmits the signal in cumulative
layers, and each receiver adds or drops layers in response to observed
congestion. The technique relies on a learning algorithm that probes
the network with join experiments and adapts toward the best operating
point each receiver can sustain.
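To make the mechanism under review concrete, here is a minimal sketch
of the receiver loop as I understand it. The class name, constants,
and method signatures are my own invention, not the paper's, and real
RLM also shares experiment results across the group; this only shows
the add/drop reaction to loss and the per-layer timer backoff that
constitutes the learning.

    import random

    class RLMReceiver:
        """Illustrative RLM-style receiver (hypothetical names/constants)."""

        def __init__(self, num_layers, init_timer=1.0, backoff=2.0):
            self.num_layers = num_layers
            self.level = 0            # highest layer currently subscribed
            self.backoff = backoff
            # One join timer per layer; a failed experiment at a layer
            # backs off that layer's timer multiplicatively (the learning).
            self.join_timer = [init_timer] * num_layers

        def on_loss(self):
            # Sustained packet loss is the congestion signal: shed the
            # top layer.
            if self.level > 0:
                self.level -= 1

        def next_probe_delay(self):
            # Randomize the timer to avoid synchronized probes from
            # many receivers.
            target = self.level + 1
            if target >= self.num_layers:
                return None           # already subscribed to every layer
            return self.join_timer[target] * random.uniform(0.5, 1.5)

        def run_join_experiment(self, saw_loss_in_window):
            # Probe for spare capacity: subscribe to the next layer and
            # watch for loss over a short detection window.
            target = self.level + 1
            if target >= self.num_layers:
                return
            self.level = target
            if saw_loss_in_window:
                # Failure: retreat, and probe this layer less often in
                # the future.
                self.level -= 1
                self.join_timer[target] *= self.backoff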
The paper provides several good ideas that augment previously proposed
layered multicast delivery mechanisms. First, the authors use a
learning algorithm that allows the protocol to adapt to different
network conditions over time. Second, their scheme works without
changing any of the underlying Internet infrastructure. This may be
the single most important property for any networking innovation that
aims to actually benefit the Internet.
The paper struck me as one that was published before some of the
critical data required to validate and further define the protocol was
available. The ideas in the paper are certainly novel, but the
experimental methodology was weak.
The time scales associated with the experiments seem to indicate that
the algorithm is not particularly effective. For example, even in the
extremely simple network configurations examined, it took 30 seconds
for a receiver to reach optimal bandwidth usage. If the optimal level
is not particularly high, users joining the group face up to 30
seconds of unacceptably low-quality video. That is an inordinately
long time from a usability standpoint, and it will likely lead to
users giving up before a solid connection is established.
Also, the join experiments took on the order of one second. While this
gives at least a moderately good indication of current traffic
conditions, the congestion induced by the experiments is also likely
to be perceptible to users when numerous receivers are all testing the
network.
Most frightening is the authors' admission of, and lack of discussion
about, the protocol's inability to handle bursty network traffic. The
authors simply cite a source showing that even streaming network video
is bursty, state that they cannot handle bursty traffic, and leave it
at that. Handling such traffic seems to be a minimum requirement for
any system that hopes to be deployed in the real world.
The theoretical idea of RLM itself is not without its flaws. One of
the key potential problems with RLM is its dependence on the
cooperation of all participants, especially in periods of high
congestion. I argue that periods of high congestion are exactly when
you can expect someone to cheat and try to obtain excess bandwidth.
With this version of RLM, tricking other users into relinquishing
bandwidth is trivial: the shared learning mechanism calls for
subscribers to trust the experiment results announced by other
subscribers, even though there is no way to validate the
trustworthiness of that data.
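As a rough illustration of the problem (the update rule below is my
guess at how a shared-learning receiver might process a peer's
announcement, not the paper's wire protocol), note that nothing
authenticates the sender:

    def apply_shared_result(join_timer, layer, failed, backoff=2.0):
        """Update per-layer join timers from another receiver's
        announced experiment result (hypothetical update rule)."""
        # A cheater can announce fake failures at low layers, inflating
        # honest receivers' timers so they probe less and stay subscribed
        # below their fair share while the cheater takes the freed capacity.
        if failed and 0 <= layer < len(join_timer):
            join_timer[layer] *= backoff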
I am also not sure that join experiments are the best way to test for
the availability of additional bandwidth. The time scale for these
experiments seems too short for any meaningful conclusions to be drawn
about the state of the network. I understand that the join experiments
should not consume excess bandwidth, but there should be a better way
to test the congestion status of the network than periodically pushing
it beyond its limits. Perhaps using the timing information already
present in the multicast flow, in conjunction with the join
experiments, would provide a more meaningful measurement over such a
short time span.
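A passive estimator along those lines might look like the sketch
below. This is my own construction, not anything from the paper; it
assumes sender timestamps are carried in the stream (RTP-style) so a
receiver can watch the relative one-way delay trend of the layers it
already receives and defer join experiments when a queue appears to be
building.

    def delay_trend(send_times, recv_times):
        """Least-squares slope of relative one-way delay over a window
        of recent packets. The sender/receiver clock offset is constant
        over the window, so it drops out of the slope; a positive slope
        suggests a queue is building somewhere on the path."""
        delays = [r - s for s, r in zip(send_times, recv_times)]
        n = len(delays)
        if n < 2:
            return 0.0
        mean_x = (n - 1) / 2.0
        mean_d = sum(delays) / n
        num = sum((i - mean_x) * (d - mean_d) for i, d in enumerate(delays))
        den = sum((i - mean_x) ** 2 for i in range(n))
        return num / den

    def safe_to_probe(send_times, recv_times, slope_threshold=1e-3):
        # Only run a join experiment when the delay trend is flat, i.e.
        # the existing layers show no sign of incipient congestion.
        # The threshold is an arbitrary placeholder.
        return delay_trend(send_times, recv_times) < slope_threshold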
This was not the first paper to propose a layered multicast delivery
scheme. As such, I think the quality of the results should have been
higher before it was published.