From: Craig M Prince (cmprince@cs.washington.edu)
Date: Mon Oct 11 2004 - 03:05:40 PDT
Reading Review 10-11-2004
-------------------------
Craig Prince
The paper titled "A Digital Fountain Approach to Reliable Distribution of
Bulk Data" presented an interesting proposal for a multicast protocol for
content delivery. What set this particular proposal apart were its
assumptions: that the content is large static files, that people start
downloading the content at different times, and that there can be a large
amount of packet loss on the multicast network at any time.
The basic idea of the proposal was that encoded packets would be sent out
continuously on the multicast network and receivers could start collecting
them at any time. By using an erasure code (in this case Tornado codes),
once slightly more packets than the length of the original file have been
collected, the entire file can be reconstructed. This is a pretty cool
idea, since it doesn't matter which particular packets are collected, so
long as they are distinct and enough of them arrive.
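To make the idea concrete for myself, here is a small Python sketch. This
is not the Tornado construction from the paper (which uses sparse random
graphs and a much faster belief-propagation-style decoder); it is just a
toy random linear erasure code over GF(2), with uniform random subsets and
one-byte blocks, showing that any sufficiently large set of independent
packets is enough to rebuild the file:

    import random

    def encode_packet(blocks, rng):
        # One encoded packet: the XOR of a random nonempty subset of
        # the source blocks, sent along with the subset that was used.
        # (A real Tornado code draws subsets from a carefully designed
        # degree distribution; uniform choice here is just a toy.)
        n = len(blocks)
        subset = [i for i in range(n) if rng.random() < 0.5]
        if not subset:
            subset = [rng.randrange(n)]
        payload = 0
        for i in subset:
            payload ^= blocks[i]
        return subset, payload

    def decode(packets, n):
        # Gaussian elimination over GF(2); each packet is the equation
        # "XOR of the chosen blocks = payload". Masks are Python ints
        # used as bit vectors. Returns the n source blocks, or None if
        # the packets collected so far don't yet have full rank.
        pivots = {}  # lowest set bit of a row -> (mask, payload)
        for subset, payload in packets:
            mask = 0
            for i in subset:
                mask |= 1 << i
            while mask and (mask & -mask) in pivots:
                pmask, ppayload = pivots[mask & -mask]
                mask ^= pmask
                payload ^= ppayload
            if mask:
                pivots[mask & -mask] = (mask, payload)
        if len(pivots) < n:
            return None
        blocks = [0] * n
        for b in reversed(range(n)):  # back-substitute, high to low
            mask, payload = pivots[1 << b]
            for j in range(b + 1, n):
                if mask >> j & 1:
                    payload ^= blocks[j]
            blocks[b] = payload
        return blocks

    rng = random.Random(0)
    source = [rng.randrange(256) for _ in range(8)]  # eight 1-byte blocks
    collected = []
    while True:
        collected.append(encode_packet(source, rng))
        recovered = decode(collected, len(source))
        if recovered == source:
            break
    print("recovered %d blocks from %d packets"
          % (len(source), len(collected)))

With uniform random subsets the receiver typically needs only a couple of
packets beyond the minimum, regardless of which packets were lost along
the way, which is exactly the property the paper exploits.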
The one thing I was skeptical about in the proposal was its reliance on
multicast networks. The paper drew on previous work on layered multicast,
which makes multicast at least somewhat feasible, but this kind of
deployment is very difficult to build in practice since much of the
Internet does not support multicast. Even so, almost half of the paper
focuses on deploying the "digital fountain" system on a multicast network.
Perhaps the best application of such a system would be on a private
network where multicast could be enabled and where many individuals want
the same large file (perhaps some sort of internal business network used
for patch distribution).
Another concern I have with this paper is that its usage model seemed too
limited. How often is there a large file that many people want over a
prolonged period? Continually broadcasting a file on a multicast network
when no one is listening seems very inefficient. At what point is it
better to serve files individually? And how do we know when everyone who
wants a file has gotten it?
Overall, I was fascinated by some of the ideas presented in the first part
of the paper on Tornado codes; however, I remain skeptical about how
practical the proposed system is in reality.