From: Kevin Wampler (wampler@cs.washington.edu)
Date: Wed Oct 13 2004 - 00:34:59 PDT
The paper "A Digital Fountain Approach to Reliable Distribution of Bulk
Data" has two key ideas that were more noticeable upon its reading. The
first of these was the introduction of the concept of a digital fountain
for data broadcast -- a stream of packets such that the reception of any
subset of size k of this stream is sufficient to reconstruct a file of
size k. The second concept presented is the use of Tornado codes as a
tool to help to approximate this ideal in practice.
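To make the fountain idea concrete, here is a small Python sketch of a
rateless stream: each packet is the XOR of a random subset of the k
source blocks, and the receiver decodes with Gaussian elimination over
GF(2) once it has gathered enough independent packets. This is only an
illustration of the ideal, not the paper's Tornado construction, and the
block count, block size, and function names are my own choices.

import random

K = 16          # number of source blocks (my choice, for illustration)
BLOCK_LEN = 4   # bytes per block (my choice)

def make_packet(blocks, rng):
    """One stream packet: the XOR of a random nonempty subset of the blocks."""
    mask = 0
    while mask == 0:
        mask = rng.getrandbits(len(blocks))
    payload = bytes(BLOCK_LEN)
    for i, block in enumerate(blocks):
        if mask >> i & 1:
            payload = bytes(x ^ y for x, y in zip(payload, block))
    return mask, payload

def decode(packets, k):
    """Gaussian elimination over GF(2); returns the source blocks or None."""
    basis = {}                      # highest set bit of mask -> (mask, payload)
    for mask, payload in packets:
        payload = bytearray(payload)
        while mask:
            pivot = mask.bit_length() - 1
            if pivot not in basis:
                basis[pivot] = (mask, payload)
                break
            pm, pp = basis[pivot]
            mask ^= pm
            payload = bytearray(x ^ y for x, y in zip(payload, pp))
    if len(basis) < k:
        return None                 # not enough independent packets yet
    # Back-substitute so each row isolates exactly one source block.
    for pivot in sorted(basis):
        mask, payload = basis[pivot]
        for lower in range(pivot):
            if mask >> lower & 1:
                lm, lp = basis[lower]
                mask ^= lm
                payload = bytearray(x ^ y for x, y in zip(payload, lp))
        basis[pivot] = (mask, payload)
    return [bytes(basis[i][1]) for i in range(k)]

rng = random.Random(0)
source = [bytes(rng.randrange(256) for _ in range(BLOCK_LEN)) for _ in range(K)]
received = []
while True:
    received.append(make_packet(source, rng))   # tune in and collect packets
    blocks = decode(received, K)
    if blocks is not None:
        break
assert blocks == source
print(f"decoded all {K} blocks after receiving {len(received)} packets")

In practice the receiver here usually needs a packet or two beyond k
before the random combinations span every block, which is exactly the
kind of reception overhead the ideal fountain would eliminate.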
The ideal of a digital fountain promises many conveniences in file
transmission. In particular, it allows users to "tune in" to a stream of
data at any time and receive a file while minimizing the reception of
redundant data, even in the face of high packet loss. The use of Tornado
codes to approximate it also seems to be a practical improvement over
Reed-Solomon codes or interleaved approaches, as they can be decoded in
very reasonable time with only a small sacrifice in data redundancy.
The concept behind Tornado codes is simple and elegant, and seems to go a
good way toward achieving a digital fountain type system. I, of course,
wonder if there may be methods which can achieve the quick encoding and
decoding times of Tornado codes without incurring the small data
redundancy they require, though in practice a decoding inefficiency of
1.05 seems close enough to 1 for practical purposes. More striking to me
is the requirement of a graph with 222,516 edges in the computation of
the Tornado-Z code. On most computers this would not pose a significant
problem, but perhaps on embedded systems this could be a very real
impediment to actually supporting the Tornado-Z code.
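To see why the edge count matters, here is a toy single-layer sketch in
the spirit of a Tornado code: check packets are XORs of a few data
packets chosen by a sparse random bipartite graph, and decoding "peels"
any check that is missing exactly one neighbor. The real construction
cascades several layers with carefully designed degree sequences, so the
graph here, its degree, and all the parameter values are purely my own
illustrative choices.

import random

def build_graph(k, n_checks, degree, rng):
    """Random sparse bipartite graph: check j -> indices of its data packets."""
    return [rng.sample(range(k), degree) for _ in range(n_checks)]

def encode(data, graph):
    """Each check packet is the XOR of its neighboring data packets."""
    checks = []
    for neighbors in graph:
        acc = bytes(len(data[0]))
        for i in neighbors:
            acc = bytes(x ^ y for x, y in zip(acc, data[i]))
        checks.append(acc)
    return checks

def peel(recovered, checks, graph):
    """Repeatedly use any check whose neighbors are all known except one."""
    progress = True
    while progress:
        progress = False
        for check, neighbors in zip(checks, graph):
            missing = [i for i in neighbors if recovered[i] is None]
            if len(missing) == 1:
                acc = check
                for i in neighbors:
                    if recovered[i] is not None:
                        acc = bytes(x ^ y for x, y in zip(acc, recovered[i]))
                recovered[missing[0]] = acc
                progress = True

rng = random.Random(1)
K, N_CHECKS, DEGREE, PACKET_LEN = 1000, 500, 8, 16   # toy parameters
data = [bytes(rng.randrange(256) for _ in range(PACKET_LEN)) for _ in range(K)]
graph = build_graph(K, N_CHECKS, DEGREE, rng)
checks = encode(data, graph)

# Simulate loss: 15% of the data packets are dropped; for simplicity all
# of the check packets are assumed to arrive.
recovered = [d if rng.random() > 0.15 else None for d in data]
peel(recovered, checks, graph)

edges = sum(len(n) for n in graph)
ok = sum(r == d for r, d in zip(recovered, data))
print(f"graph has {edges} edges; recovered {ok} of {K} data packets")

Both the encoder and the peeling decoder touch each edge only a small
number of times, which is where the fast running times come from, but the
graph itself has to be stored (or regenerated from a seed), and that is
where the memory cost on an embedded device would show up.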