CSE-561 Notes (10/30/02)


On the Scale and Performance of Cooperative Web Proxy Caching

1. Results of the paper:

2. Whether or not to install a proxy cache is a decision for the ISP.

3. How did they collect the necessary data?

Basically, they did passive monitoring of routers. (Neil pointed out that this was quite difficult: e.g., with fragmented IP packets you have to collect and reassemble the fragments before you can parse the headers, and sometimes one request is spread over multiple flows, etc.)
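To make that bookkeeping concrete, here is a minimal sketch of the two chores mentioned above: buffering IP fragments until a datagram is complete, and keying traffic by flow. All field and function names here are assumptions for illustration, not the monitoring tools actually used in the study.

    from dataclasses import dataclass

    @dataclass
    class Packet:
        src: str
        dst: str
        sport: int
        dport: int
        frag_id: int       # IP identification field
        frag_offset: int   # offset within the datagram (bytes here for
                           # simplicity; real IP uses 8-byte units)
        more_frags: bool   # IP "more fragments" flag
        payload: bytes

    # Buffer fragments until a datagram is complete, keyed by (src, dst, frag_id).
    fragment_buffers: dict[tuple, list[Packet]] = {}

    def reassemble(pkt: Packet) -> bytes | None:
        """Return the full payload once every fragment of a datagram has arrived."""
        key = (pkt.src, pkt.dst, pkt.frag_id)
        frags = fragment_buffers.setdefault(key, [])
        frags.append(pkt)
        frags.sort(key=lambda p: p.frag_offset)
        if frags[-1].more_frags:
            return None            # the last fragment has not arrived yet
        expected = 0
        for f in frags:
            if f.frag_offset != expected:
                return None        # a gap remains; keep waiting
            expected += len(f.payload)
        del fragment_buffers[key]
        return b"".join(p.payload for p in frags)

    # Group reassembled traffic by the usual 5-tuple; note that one HTTP
    # request can still span several flows, as pointed out above.
    def flow_key(pkt: Packet) -> tuple:
        return (pkt.src, pkt.sport, pkt.dst, pkt.dport, "tcp")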

4. Using the URL to count possible cache hits (i.e., to count how many times a page was requested) can be inaccurate, e.g. because mirrors serve the same content under different URLs => they used these numbers only to calculate an upper bound on the fraction of requests that could have been served from a cache.
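As a rough illustration of that upper-bound idea, the sketch below counts every repeat request for a URL as a potential hit, ignoring cacheability headers, expirations, and mirrored copies. The one-URL-per-request trace format is an assumption, not the paper's actual trace format.

    def ideal_hit_rate(urls):
        """Fraction of requests whose URL was requested at least once before."""
        seen = set()
        hits = total = 0
        for url in urls:
            total += 1
            if url in seen:
                hits += 1
            else:
                seen.add(url)
        return hits / total if total else 0.0

    trace = [
        "http://a.example/index.html",
        "http://b.example/logo.gif",
        "http://a.example/index.html",  # repeat -> counted as a potential hit
        "http://b.example/logo.gif",    # repeat -> counted as a potential hit
    ]
    print(ideal_hit_rate(trace))  # 0.5: at best half these requests hit a cache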

5. How can you do peer-to-peer web caching?

You can just intercept peer-to-peer traffic using a proxy. This could also serve other goals, for instance finding a better source for the requested data.
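A minimal sketch of that interception idea follows, assuming hypothetical helpers fetch_from_peer() and fetch_from_origin(); it is not a description of any real peer-to-peer caching system discussed in class.

    local_cache: dict[str, bytes] = {}
    peers = ["peer-a.example", "peer-b.example"]  # hypothetical peer proxies

    def fetch_from_peer(peer: str, url: str) -> bytes | None:
        """Hypothetical: ask a neighboring proxy; None means a miss there."""
        return None  # a real proxy would issue a query to the peer here

    def fetch_from_origin(url: str) -> bytes:
        """Hypothetical: fall back to the origin server."""
        return b"<response body>"  # stands in for an actual HTTP fetch

    def handle_request(url: str) -> bytes:
        # 1. Serve from the local cache when possible.
        if url in local_cache:
            return local_cache[url]
        # 2. Otherwise try peers; this is also where a smarter proxy could
        #    pick a "better source" (closer or less loaded) for the data.
        for peer in peers:
            body = fetch_from_peer(peer, url)
            if body is not None:
                local_cache[url] = body
                return body
        # 3. Last resort: the origin server.
        body = fetch_from_origin(url)
        local_cache[url] = body
        return body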

6. After these discussions we turned to the graphs of the paper, asking what we can conclude from them.

7. Conclusion: probably the best way to sum this up is to consider the effect this paper had as a very strong negative result: it essentially shut down a whole area of research.

Revealing ISP Topologies Using Rocketfuel

Why don't ISPs reveal their topologies?

Concerns while mapping the ISP topologies:

Issues with validation:

Discussions about figures in the paper:
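Rocketfuel stitches an ISP map together from many traceroutes. As a toy sketch of just that stitching step (leaving out directed probing and alias resolution, which the real system needs), assume each traceroute is a list of router IPs and that a hypothetical is_in_isp() test identifies hops inside the ISP being mapped:

    from collections import defaultdict

    def is_in_isp(ip: str) -> bool:
        """Hypothetical membership test, e.g. from the ISP's address prefixes."""
        return ip.startswith("10.1.")  # placeholder prefix

    def build_topology(traceroutes: list[list[str]]) -> dict[str, set[str]]:
        """Adjacency map over the ISP's routers, from consecutive trace hops."""
        graph: dict[str, set[str]] = defaultdict(set)
        for path in traceroutes:
            for a, b in zip(path, path[1:]):
                if is_in_isp(a) and is_in_isp(b):
                    graph[a].add(b)
                    graph[b].add(a)
        return graph

    traces = [
        ["192.0.2.1", "10.1.0.1", "10.1.0.2", "10.1.0.5", "198.51.100.7"],
        ["192.0.2.9", "10.1.0.1", "10.1.0.3", "10.1.0.5", "203.0.113.2"],
    ]
    topo = build_topology(traces)
    print(sorted(topo["10.1.0.1"]))  # neighbors observed for this router

A real mapper also has to resolve aliases, since one router answers from several interface IPs; that is part of why validating the measured topology is hard.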