From: Katie Everitt (everitt@eecs.berkeley.edu)
Date: Mon Oct 11 2004 - 08:00:16 PDT
“A Digital Fountain Approach to Reliable Distribution of Bulk Data” Review
Katherine Everitt
The problem this paper addresses is how to reliably multicast large
amounts of data to many autonomous clients. The authors do a thorough job of
describing why previous solutions don't work: unicast scales poorly because
client acknowledgments cause ACK implosion at the server; a data carousel
makes a client that misses a packet wait up to a full cycle for it to come
around again; and Reed-Solomon codes require too much processing time.
The main idea of the paper is to add redundancy to the transmitted data so
that clients can quickly reconstruct the packets they have missed. The
improvement over Reed-Solomon codes is the use of a sparser system of linear
equations: with fewer terms in each equation, fewer XOR calculations are
needed and the processing is faster.
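To make the sparse-equation idea concrete, here is a minimal Python sketch of
the general fountain-style approach. It is not the paper's actual Tornado
construction (which fixes a carefully designed irregular bipartite graph in
advance, in cascaded layers, rather than choosing random neighbors per
packet); all names here are illustrative. Each encoded packet is the XOR of a
few source packets, so a receiver recovers missing data with a handful of
XORs per packet rather than the dense arithmetic Reed-Solomon decoding needs.

import os
import random

def xor(a, b):
    # XOR two equal-length byte strings.
    return bytes(x ^ y for x, y in zip(a, b))

def encode(source, rng):
    # Emit one encoded packet: the XOR of a small random subset of
    # source packets. The combined indices travel with the payload in
    # this sketch; Tornado codes instead fix the sparse graph of
    # equations in advance, so no per-packet index list is needed.
    degree = rng.randint(1, 3)
    idxs = rng.sample(range(len(source)), degree)
    pkt = source[idxs[0]]
    for i in idxs[1:]:
        pkt = xor(pkt, source[i])
    return set(idxs), pkt

def decode(n, received):
    # Peeling decoder: repeatedly find an equation with exactly one
    # unknown, solve it with a few XORs, and substitute the result
    # into the remaining equations. No Gaussian elimination needed.
    recovered = {}
    pending = list(received)
    progress = True
    while progress and len(recovered) < n:
        progress = False
        for idxs, pkt in pending:
            unknown = idxs - recovered.keys()
            if len(unknown) == 1:
                i = unknown.pop()
                for j in idxs - {i}:
                    pkt = xor(pkt, recovered[j])
                recovered[i] = pkt
                progress = True
        pending = [(s, p) for s, p in pending
                   if not s <= recovered.keys()]
    return recovered

rng = random.Random(0)
n, size = 8, 16
source = [os.urandom(size) for _ in range(n)]
stream, decoded = [], {}
while len(decoded) < n:  # keep "spraying" packets until fully decoded
    stream.append(encode(source, rng))
    decoded = decode(n, stream)
assert all(decoded[i] == source[i] for i in range(n))

Because each equation touches only a few packets, both encoding and the
peeling decoder do work proportional to the data length, which is the
advantage claimed over Reed-Solomon's quadratic-time decoding.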
I was curious how the random graph structure would be used for packets of
varying size. Would the graph always have to be pre-calculated and tested
to ensure it was efficient? How much time would that take?
I felt the analysis of Tornado codes was very thorough, using both an
idealized analysis and simulated traffic. I would have liked to see more
discussion of real-time multicast, which does not seem well served by this
scheme. Perhaps a short delay of the entire multicast, or distributed
processing, would help Tornado codes handle this case.