From: Kate Everitt (kteveritt@yahoo.com)
Date: Sun Nov 07 2004 - 11:21:36 PST
This paper characterizes content delivery systems by
monitoring traffic between UW and the Internet at large.
The systems under consideration are ordinary HTTP web
traffic, the Akamai CDN, and the Kazaa and Gnutella
peer-to-peer networks. The authors thoroughly recorded and
classified traffic over a period of nine days.
Recording trace data and comparing it to earlier studies
provides insight into how traffic has changed. However,
the authors did not speculate about where the trends in
the data might lead, or how they would affect network
resources in the future. The most valuable discussion
point in this paper was the recognition that a reverse
cache would be more useful to the University of Washington
than a conventional forward cache.
I also found the discussion of traffic cycles interesting,
with web traffic peaking during the day and peer-to-peer
traffic peaking at night. This suggests there may be good
resource allocation schemes for sharing bandwidth between
the two, along the lines of the rough sketch below.
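To make that concrete for myself, here is a rough Python
sketch (my own, not anything from the paper) of how the
complementary day/night peaks could be exploited by
shifting shares of a fixed gateway link between the two
traffic classes; the capacity figure and hour boundaries
are invented:

    # Hypothetical split of a gateway link between web and
    # peer-to-peer traffic by hour of day; all numbers are
    # illustrative only.
    LINK_CAPACITY_MBPS = 1000.0

    def allocate(hour):
        """Return (web_mbps, p2p_mbps) for a given hour (0-23)."""
        # Web gets the larger share during the day,
        # peer-to-peer gets the larger share at night.
        web_fraction = 0.7 if 8 <= hour < 20 else 0.3
        web = LINK_CAPACITY_MBPS * web_fraction
        return web, LINK_CAPACITY_MBPS - web

    for hour in (3, 12, 22):
        web, p2p = allocate(hour)
        print("%02d:00  web=%4.0f Mbps  p2p=%4.0f Mbps" % (hour, web, p2p))

A real scheme would of course measure demand rather than
hard-code hours, but even this crude split captures the
idea that the two workloads can time-share the same link.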
From this paper, it seems the authors found the results
much more surprising than I did. At the outset, they
assume that in peer-to-peer systems, peers typically
behave as servers as well as clients. However, in the
Kazaa network and others, it is quite common for users to
act as clients only and share few if any files. Also,
download decisions are based on bandwidth, which makes it
more likely that a client will choose a peer at the
University of Washington, with its high-bandwidth
connectivity, over the typical file sharer, who is a home
user with a slower connection (the sketch at the end of
this paragraph illustrates this skew).
Peers in the Kazaa network have very different connection
speeds, so it is common for the faster nodes to serve more
content, and there are also sociological factors that
discourage users from sharing files. The authors presented
some very valuable work in examining web traffic, but I
feel their vantage point, at the gateway of a major
university, made it difficult to get an accurate picture
of how peer-to-peer networks in general would scale. If
the university's network actually were overloaded, others
would be less likely to download from it, and I predict
the traffic would spread out more evenly among the peers.
This does suggest an interesting
perspective, though: from the University's point of view,
it may be necessary to limit peer-to-peer traffic so that
it does not interfere with HTTP requests, rather than
waiting for the peer-to-peer load to slack off on its own.
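To illustrate the bandwidth-driven skew I describe above,
here is a rough Python sketch (my own, not Kazaa's real
peer-selection protocol) in which clients pick a download
peer with probability proportional to its advertised
bandwidth; the peer names and speeds are invented:

    # Bandwidth-proportional peer selection; a single
    # well-connected campus host ends up serving most
    # requests. All values are made up.
    import random
    from collections import Counter

    peers = {
        "uw-campus-host": 100.0,   # Mbps, hypothetical university peer
        "dsl-home-1":       1.5,
        "dsl-home-2":       1.5,
        "dialup-home":      0.056,
    }

    def pick_peer():
        names = list(peers)
        weights = [peers[n] for n in names]
        return random.choices(names, weights=weights, k=1)[0]

    counts = Counter(pick_peer() for _ in range(10000))
    for name, n in counts.most_common():
        print("%-16s %5.1f%% of requests" % (name, 100.0 * n / 10000))

Under this toy model the campus host serves nearly all of
the requests, which is exactly the kind of load
concentration that I suspect makes the gateway vantage
point a poor basis for predicting how peer-to-peer systems
would scale elsewhere.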
This type of paper is very useful when looking back at
Internet trends, but it is really only one data point, in
that the results may become stale quite quickly. (This may
be a hazard of networking research in particular.) I would
have liked to see more discussion of how to prepare a
network for predicted future use.