From: Michael J Cafarella (mjc@cs.washington.edu)
Date: Wed Nov 10 2004 - 01:08:13 PST
Receiver-driven Layered Multicast
By McCanne, Jacobson, and Vetterli
Review by Michael Cafarella
CSE561
November 10, 2004
Main result:
The authors describe a problem in delivering multicast streaming media
over congested links. In most network applications, the source node uses
congestion feedback to adjust its transmission rate. In media multicast,
we would like to avoid doing this because different receivers will experience
different congestion levels. If the transmitter adjusts its bandwidth down
to suit the most-congested receiver, then every other receiver is stuck with
an unnecessarily low-bitrate stream.
The solution is for the transmitter to encode the media as an ordered set of
layered streams; subscribing to additional streams raises the media quality.
Each receiver subscribes to as many streams as it can, adding streams until
doing so induces congestion. In this way, the RLM protocol works somewhat like
TCP's probe-and-backoff technique for finding the correct transmission speed,
except that in RLM it is the receivers, not the transmitter, who search for
the right data rate.
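To make the mechanism concrete, here is a rough Python sketch of the kind of
probe-and-backoff loop a single receiver might run. The names and structure
are my own invention for illustration, not the paper's code; the real RLM
timer and loss-detection machinery is considerably more elaborate.

import random

class LayeredReceiver:
    def __init__(self, num_layers):
        self.num_layers = num_layers
        self.subscribed = 1                   # always keep the base layer
        self.join_timer = [1.0] * num_layers  # per-layer backoff, in seconds

    def on_probe_timer(self):
        # Run a "join experiment": subscribe to one more layer and watch for loss.
        if self.subscribed < self.num_layers:
            self.subscribed += 1              # join the next multicast group

    def on_congestion(self):
        # Loss detected: drop the top layer and back off its timer so the
        # failed experiment is not retried too soon (TCP-like backoff).
        if self.subscribed > 1:
            self.subscribed -= 1
            self.join_timer[self.subscribed] *= 2

    def next_probe_delay(self):
        # Wait longer before re-probing a layer that has already failed.
        nxt = min(self.subscribed, self.num_layers - 1)
        return self.join_timer[nxt] * random.uniform(0.5, 1.5)

A receiver that called on_congestion() after a failed probe would wait roughly
twice as long before re-trying that layer, which is exactly the TCP-flavored
behavior the paper is after.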
Through a combination of careful timer choices and join experiments, a single
receiver can find a good data rate. However, when multiple receivers probe
independently, the result can be strange congestion behavior on the links
between transmitter and receivers. So, groups of receivers communicate with
each other to perform "shared learning." Nearby receivers are likely to
experience similar congestion on the path from the transmitter, so it's
reasonable for them to share the results of their congestion experiments.
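Again as a rough illustration (with invented names and a simplified
suppression rule, not the paper's actual message format), shared learning
might look something like this in Python:

class SharedLearner:
    def __init__(self, num_layers):
        self.num_layers = num_layers
        self.join_timer = [1.0] * num_layers  # per-layer backoff, in seconds
        self.peer_probing = None              # layer a nearby receiver is probing

    def on_peer_experiment(self, layer):
        # A neighbor announced a join experiment for 'layer'; remember it so
        # any resulting loss can be attributed to that single experiment.
        self.peer_probing = layer

    def on_peer_failure(self, layer):
        # The neighbor's experiment caused loss. Back off our own timer for
        # that layer as if we had run (and failed) the experiment ourselves.
        if 0 <= layer < self.num_layers:
            self.join_timer[layer] = min(self.join_timer[layer] * 2, 600.0)
        self.peer_probing = None

    def may_probe(self, layer):
        # Suppress our own probe while a neighbor is experimenting at the same
        # or a higher layer (a simplification of the paper's overlap rule).
        return self.peer_probing is None or layer < self.peer_probing

The key point is that one receiver's failed experiment teaches its neighbors,
so the group as a whole pays for far fewer congestion episodes.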
The start of this paper seems to posit a series of routers in the network that
can carry or drop these "adjustable traffic" streams depending on network load.
Of course, this does not exist; routers simply drop incoming packets when the
buffer is full. But shared learning among nearby receivers seems to create
the same effect. The nearby receivers form something of a virtual router that
either passes the media streams along to more distant points in the network or
blocks them. I thought this was very clever, and one of the most impressive
parts of the paper.
It's a shame that there wasn't more study of the networking primitives that
the RLM design suggests. I bet there are a solid number of them, and they are
left largely unexplored.
This paper is still relevant today for two reasons, neither of which has much
to do with multimedia. First, the idea that receivers can dictate transmission
speed just as much as transmitters do is a helpful one when building modern
solutions to problems TCP used to solve. Second, the shared learning and the
resulting "overlay" network of media recipients might suggest a way around
many of the deployment problems that native multicast still faces.