Paper summary: The paper formalizes an "ideal" multicast protocol, a "digital fountain," that serves bulk data of any size to an arbitrary number of autonomous clients while maintaining near-optimal efficiency in the presence of high packet-loss rates. The paper also presents a new state-of-the-art approximation of this ideal protocol built on Tornado codes and provides a rigorous comparison against an existing approximation based on Reed-Solomon codes. (A minimal sketch of what the client side of such a fountain might look like is included at the end of this review.)

Strengths: The paper formally defines an "ideal" multicast protocol for bulk data, which serves as a yardstick for comparing real-world approximations of this ideal. The paper gives a detailed description of a new approximation implemented with Tornado codes. Compared to its then state-of-the-art counterpart, this approximation scales with respect to file size and client count; it is resilient to high loss rates and burst losses; and it is more efficient both in encoding/decoding speed and in keeping the number of packets a client must receive close to the minimum needed to reconstruct the file. The paper succeeds in providing a near-exhaustive comparison of its proposed protocol against the Reed-Solomon-based protocol. The authors report the efficiency of the two protocols under varying file sizes, client counts, packet-loss rates, and so on. They further strengthen their analysis by using trace data to simulate packet losses in real-world networks. Their analysis makes a strong case for Tornado codes as the superior choice for real-world scenarios.

Limitations and Areas for improvement:
- Since Tornado codes are randomly generated and depend on the number of packets (i.e., the file size), the client must obtain the Tornado graph topology for every distinct file or file size. In contrast, Reed-Solomon codes can be constructed deterministically.
- The authors do not report the computational cost of constructing an efficient Tornado code for a given file size. Also relevant is whether this construction is fully automated, which has major implications for digital fountains that host large repositories.
- Decoding efficiency depends in part on the packet-loss pattern and is therefore not a fixed quantity. The authors do not provide a mathematical analysis of this dependency, which makes the overall performance analysis considerably harder.

Relevance: The work remains quite relevant today, as it offers a practical way to replace many redundant point-to-point connections with a single multicast session. As the Internet increasingly becomes the preferred distribution medium for media providers, work like this will be essential for managing limited network resources.

Future Work:
- Dispersity routing: Network capacity is usually estimated as the bandwidth-delay product (latency * bandwidth) of the "best" route from source to destination. Since digital fountains for bulk data are unaffected by packet ordering and tend to be robust under high drop rates, a network could be better utilized by flooding the k best routes from source to destination with Tornado-encoded packets, raising the usable capacity to roughly number-of-routes * average-latency * average-bandwidth.
- Mirrored data sources: The protocol could be optimized to increase transfer speeds when duplicate or mirrored multicasts are available. P2P file-sharing applications could similarly speed up downloads when several participating machines host the requested file.
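
Appendix (reviewer's sketch): To make the summary above concrete, here is a minimal, illustrative sketch of the client side of a digital-fountain receive loop: the client joins a multicast group, collects distinct encoded packets in any order until a decode threshold is met, then reconstructs the file. The multicast group, the packet format, the (1 + eps) * k threshold, and the `decoder` object are assumptions of mine for illustration only; they are not the paper's implementation.

```python
# Sketch of a digital-fountain client receive loop (assumed details, not the
# paper's actual protocol). The key property shown: loss pattern and arrival
# order do not matter, only the count of distinct encoded packets received.
import socket
import struct

K = 1000                       # number of source packets in the file (assumed)
EPS = 0.05                     # reception overhead of the erasure code (assumed)
THRESHOLD = int((1 + EPS) * K) # packets needed before decoding is attempted

def receive_file(group: str, port: int, decoder) -> bytes:
    """Collect distinct encoded packets from a multicast group until the
    decode threshold is met, then hand them to a (hypothetical) decoder."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("", port))
    # Standard IPv4 multicast join.
    mreq = struct.pack("4sl", socket.inet_aton(group), socket.INADDR_ANY)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

    received = {}  # encoded-packet index -> payload
    while len(received) < THRESHOLD:
        data, _ = sock.recvfrom(2048)
        idx = struct.unpack("!I", data[:4])[0]  # assumed 4-byte packet index
        received.setdefault(idx, data[4:])      # duplicates are ignored

    # A real Tornado decoder is probabilistic and may occasionally need a few
    # more packets than the fixed threshold; this is a simplification.
    return decoder.decode(received)
```

In a full protocol the `decoder` would be a Tornado (or other erasure) decoder seeded with the same graph the sender used, which is exactly the per-file topology the client must obtain, as noted under the limitations above.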