From: Chuck Reeves (creeves@cs.washington.edu)
Date: Mon Oct 11 2004 - 07:40:22 PDT
The paper, "A Digital Fountain Approach to Reliable Distribution of Bulk
Data" was written by a number of researchers at University of
California-Berkeley. It discusses the design of an algorithm for reliable
multicast that approximates the digital fountain. The idea is to continually
broadcast a stream of packets each containing a portion of the data that is
to be distributed. This stream is continually multicast or broadcast while
any listeners have registered interest (means not discussed here). Each
packet also contains some measure of redundant data (called erasure codes)
which allows them to reconstruct missing portions of the broadcast data. In
the ideal, should any k distinct packets be received the data can be
reconstructed in its entirety using a simple system of simultaneous
equations. The bulk of the text is dedicated to introducing a new method for
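To make the "any k distinct packets" idea concrete, here is a toy sketch in
Python. This is not the paper's construction; every name in it (encode,
decode, the GF(2) random-combination scheme) is my own illustration. Each
packet carries the XOR of a random subset of the source blocks plus a bit
vector saying which blocks went in, and the receiver solves the resulting
system by Gaussian elimination over GF(2):

    import random

    def encode(blocks, n_packets, seed=0):
        """Emit coded packets; each is the XOR of a random subset of blocks.

        Returns (coeffs, payload) pairs, where coeffs is a 0/1 list naming
        the source blocks that were XORed into the payload.
        """
        rng = random.Random(seed)
        k = len(blocks)
        packets = []
        while len(packets) < n_packets:
            coeffs = [rng.randint(0, 1) for _ in range(k)]
            if not any(coeffs):
                continue  # skip the useless all-zero combination
            payload = 0
            for c, b in zip(coeffs, blocks):
                if c:
                    payload ^= b
            packets.append((coeffs, payload))
        return packets

    def decode(received, k):
        """Recover the k source blocks by Gaussian elimination over GF(2)."""
        rows = [(list(c), p) for c, p in received]
        for col in range(k):
            pivot = next((r for r in range(col, len(rows))
                          if rows[r][0][col]), None)
            if pivot is None:
                raise ValueError("packets so far do not span all blocks")
            rows[col], rows[pivot] = rows[pivot], rows[col]
            for r in range(len(rows)):
                if r != col and rows[r][0][col]:
                    rows[r] = ([a ^ b for a, b in zip(rows[r][0], rows[col][0])],
                               rows[r][1] ^ rows[col][1])
        return [rows[col][1] for col in range(k)]

    # A receiver tunes in at any point and keeps collecting packets, in any
    # order and with arbitrary losses, until the system becomes solvable.
    blocks = [0x11, 0x22, 0x33, 0x44]      # k = 4 source blocks
    stream = encode(blocks, 20)
    random.shuffle(stream)                 # order and losses do not matter
    received = []
    for pkt in stream:
        received.append(pkt)
        try:
            print("decoded:", decode(received, k=4))
            break
        except ValueError:
            pass

The receiver simply drinks from the stream until it has enough independent
packets, which is exactly the fountain metaphor.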
The bulk of the text is dedicated to introducing a new method for encoding
and decoding, Tornado codes, and to showing that their performance is
dramatically faster than that of traditional Reed-Solomon erasure codes. The
cost is that slightly more than k packets (approximately 1.054k in the
paper's measurements) are required to support this enhanced behavior; the
XOR-based substitution step behind this speed is sketched below.
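To give a feel for why the decoding is fast, here is a minimal sketch of the
substitution step that Tornado-style decoding relies on. The packet layout
and names here are hypothetical, and a real Tornado code uses a carefully
designed cascade of sparse random bipartite graphs rather than the ad-hoc
check packets shown: any packet whose named blocks are all known except one
recovers that block by XORing the known ones out, and the process repeats.

    def peel_decode(packets, k):
        """Tornado-style substitution decoding: XOR only, no matrix algebra.

        packets is a list of (indices, value) pairs, where value is the XOR
        of the source blocks named by indices. Any packet with exactly one
        unknown neighbor recovers it; repeat until done or stuck.
        """
        blocks = [None] * k
        pending = [(set(idxs), val) for idxs, val in packets]
        progress = True
        while progress:
            progress = False
            for idxs, val in pending:
                unknown = [i for i in idxs if blocks[i] is None]
                if len(unknown) == 1:
                    target = unknown[0]
                    for j in idxs:
                        if j != target:
                            val ^= blocks[j]  # substitute known blocks out
                    blocks[target] = val
                    progress = True
        if any(b is None for b in blocks):
            raise ValueError("decoding stalled; more packets needed")
        return blocks

    # Hypothetical mini-run: four source blocks are sent along with three
    # XOR check packets, and the network drops two of the source packets.
    source = [0x11, 0x22, 0x33, 0x44]
    sent = [({i}, source[i]) for i in range(4)]
    sent += [({0, 1}, source[0] ^ source[1]),
             ({1, 2, 3}, source[1] ^ source[2] ^ source[3]),
             ({0, 2}, source[0] ^ source[2])]
    received = [pkt for n, pkt in enumerate(sent) if n not in (1, 2)]
    print(peel_decode(received, k=4))      # recovers all four blocks

Each recovery is a handful of XORs over a sparse equation, which is the
intuition for the speedup over Reed-Solomon's dense finite-field arithmetic.
A production decoder would index packets by their unknown blocks so the whole
pass runs in time linear in the number of edges; this loop favors clarity.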
I thought the paper was effective in providing a general description of the
algorithms and a solid analysis of the value proposition of this approach. It
did assume quite a bit of background in error-correction algorithms on the
reader's part, which I found somewhat frustrating. I could see this approach
being used to distribute data to geographically remote clusters, and in
other applications where data needs to be reliably mirrored over asymmetric
networks.