From: Ethan Katz-Bassett (ethan@cs.washington.edu)
Date: Wed Dec 01 2004 - 01:31:02 PST
In this paper, the authors present mechanisms for limiting the service
degradation that can occur on the Internet under high bandwidth aggregates.
They define an aggregate to be a subset of traffic sharing some property.
The two main types of aggregates they discuss are DoS attacks and flash
crowds. Each causes heavy traffic and can result in high loss rates. In
both cases, the traffic is not due to a single heavy flow nor to overall
high levels, so traditional congestion control mechanisms are not the ideal
solution. Other DoS work I have seen focuses exclusively on DoS and usually
looks at stopping the sender. I liked that this paper fit DoS into a larger
problem and looked at managing the offending traffic.
The paper proposes a two-part solution. In the first part, local
aggregate-based congestion control (Local ACC) identifies high bandwidth
aggregates at a given router and limits their throughput to leave bandwidth
for other flows. I may have missed the details, but it looks like the
algorithm limits each identified aggregate to the same level; it is not
clear to me that this is the correct choice (versus, say, limiting each in
proportion to its arrival rate).
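To make the mechanism concrete for myself, here is a rough sketch (in
Python) of how I picture the identification and limiting step; the
clustering of drops by destination prefix, the thresholds, and every name
below are my own guesses rather than the paper's actual algorithm:

    import time
    from collections import defaultdict, namedtuple

    Drop = namedtuple("Drop", ["timestamp", "dst_prefix", "size_bytes"])

    DROP_WINDOW_SECS = 2.0     # only consider drops from the last few seconds
    SHARE_THRESHOLD = 0.10     # a prefix causing >10% of drops is an aggregate
    COMMON_LIMIT_BPS = 1000000 # the same ceiling for every identified aggregate

    def identify_aggregates(drop_log, now=None):
        """Cluster recent drops by destination prefix; flag heavy prefixes."""
        now = time.time() if now is None else now
        recent = [d for d in drop_log if now - d.timestamp <= DROP_WINDOW_SECS]
        bytes_by_prefix = defaultdict(int)
        for d in recent:
            bytes_by_prefix[d.dst_prefix] += d.size_bytes
        total = sum(bytes_by_prefix.values()) or 1
        return {prefix: COMMON_LIMIT_BPS
                for prefix, dropped in bytes_by_prefix.items()
                if dropped / total >= SHARE_THRESHOLD}

A router would then enforce each returned limit on traffic matching that
prefix.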
The second part, pushback, lets a router tell its upstream neighbors to
rate-limit the aggregates it has identified. This should free up bandwidth
for better use, since packets from the aggregate are dropped earlier than
they would be with just Local ACC. It should also reduce the number of
"innocent" packets labeled as being part of the aggregate.
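To check my understanding of the recursion, here is a similarly rough
sketch; the Router class, the proportional split of the limit among
upstream neighbors, and all names are my assumptions, not the paper's
protocol:

    class Router:
        """Toy router with just enough state to walk through pushback."""
        def __init__(self, name):
            self.name = name
            self.rate_limits = {}     # aggregate prefix -> bits per second
            self.upstream_rates = {}  # prefix -> {upstream Router: arrival bps}

        def install_rate_limit(self, prefix, limit_bps):
            self.rate_limits[prefix] = limit_bps
            print("%s: limiting %s to %.0f bps" % (self.name, prefix, limit_bps))

        def pushback(self, prefix, limit_bps):
            # Enforce the limit locally, then ask the upstream neighbors
            # carrying this aggregate to enforce shares of it, so packets
            # are dropped before they reach the congested link.
            self.install_rate_limit(prefix, limit_bps)
            contrib = self.upstream_rates.get(prefix, {})
            total = sum(contrib.values()) or 1
            for neighbor, arrival_bps in contrib.items():
                # proportional split; the paper considers other policies
                neighbor.pushback(prefix, limit_bps * arrival_bps / total)

Calling pushback() at the congested router then propagates the limit
toward the sources, which is where both the earlier dropping and the
narrower labeling of "innocent" traffic should come from.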
As the authors mention, the paper represents only the start of a solution.
It seems like an interesting problem and a reasonable approach. I am
curious how much of a problem DoS attacks and flash crowds currently
represent on the Internet. It remains to be shown how these mechanisms
would perform
in more realistic simulations.
On a side note, I like that they included a link to their simulation
scripts. I am unsure how common this practice is, but it is a nice touch
that makes results easier to verify and reproduce. I previously worked with
botanists to develop a standard way to encode their procedures so that
experiments could be easily reproduced. The lack of such a system kept
researchers from being able to verify
others' work. Specifically, the raw data was often manipulated in vague
ways before the experiment could be performed, and these manipulations were
not standardized.