From: Alan L. Liu (aliu@cs.washington.edu)
Date: Tue Oct 19 2004 - 22:23:06 PDT
First off, I want to say that the graph captions in Figure 7 make no
sense. Are we talking about electronic networks for moving bits or
underground networks for scurrying rodents? Am I the only one who ever
pays attention to figures in papers?
This paper presents XCP, or what the authors claim TCP would have been
if congestion control had been a principal design goal. XCP differs from
TCP in that the routers are smarter: instead of signaling at most one
bit (congested/not congested), they compute and return precise,
multi-bit congestion feedback to the end-hosts.
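As I understand it, each packet carries a small congestion header that
the sender fills in and routers along the path can overwrite. Here is my
own sketch of the idea (the field names and types are my assumptions,
not the paper's wire format):

    # My sketch of the idea behind XCP's per-packet congestion header
    # (field names and types are assumptions, not the paper's format).
    from dataclasses import dataclass

    @dataclass
    class CongestionHeader:
        cwnd: float      # sender's current congestion window
        rtt: float       # sender's round-trip-time estimate
        feedback: float  # requested window change; routers may reduce it

    def router_update(hdr: CongestionHeader, my_feedback: float) -> None:
        # Each router keeps only the most conservative feedback, so the
        # bottleneck link ends up dictating the sender's rate. The
        # receiver echoes the final value back to the sender in an ACK.
        hdr.feedback = min(hdr.feedback, my_feedback)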
The authors argue that TCP has fundamental flaws that prevent
congestion control from being dealt with correctly. For instance, using
packet loss as a measure of congestion is too indirect. The bursty
nature of communication causes TCP to oscillate between overutilization
and underutilization of bandwidth. Using control theory as a guide, XCP
seeks to give end-hosts feedback in a way that stays stable across many
conditions, including the ones TCP has trouble with (high-bandwidth
and/or high-latency links); I sketch my reading of the core control law
below. Another consideration I find interesting is that, since XCP uses
explicit congestion feedback, policing monitors have more information
they can use to identify uncooperative hosts. One final aspect of this
paper is that the authors also propose two possible upgrade paths for
adopting XCP. The one I understood (i.e., the one not rife with
equations) does not even require adding extra headers to packets, which
is very good from an efficiency standpoint.
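Here is my reading of that control-theoretic core, reduced to a sketch.
The constants are the stable choices the paper derives; everything
around them is my simplification:

    # Once per control interval (about one average RTT), an XCP router
    # computes an aggregate feedback phi for the link: add bandwidth in
    # proportion to the spare capacity S, and drain traffic in
    # proportion to the persistent queue Q. ALPHA and BETA are the
    # stability constants from the paper; the plumbing is my own.
    ALPHA, BETA = 0.4, 0.226

    def aggregate_feedback(avg_rtt, spare_bandwidth, persistent_queue):
        # phi = alpha * d * S - beta * Q
        return (ALPHA * avg_rtt * spare_bandwidth
                - BETA * persistent_queue)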
Unfortunately, there are a lot of places in the paper where the
authors' rationale was unclear. For instance, the paper claims that
routers need not be much more computationally powerful than they are
now. However, I would imagine that there are already super-beefy routers
used to their utmost for those same high bandwidth links where TCP
doesn't cut it and XCP is deemed necessary. It doesn't seem like XCP is
that necessary under normal circumstances on the fringe of the Internet,
because TCP has been "good enough." In that case, is it really better to
have a TCP replacement rather than a different layer optimized for
travel over these fat pipes? What protocols are in use now for those
links? It would be interesting to compare XCP against what I assume would
be a modified TCP running over custom lower layers on those same links.
I also couldn't judge the validity of the simulation parameters used to
demonstrate XCP's superiority. The most confusing were the distribution
choices for short web-like traffic flows, where Poisson and Pareto
distributions were used with no justification (my guess at what they
intended is sketched after this paragraph). Another was simply the
topologies chosen for the simulations, especially the parking-lot
topology. Its use seemed arbitrary and unenlightening, although perhaps
it has some well-known properties that make it useful?
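For what it's worth, the Poisson/Pareto pairing is a common way to model
web-like traffic: flows arrive as a Poisson process (exponential
inter-arrival times) and flow sizes are heavy-tailed. Here is my guess
at what they did; every number below is invented for illustration:

    # Assumed web-like traffic model: Poisson flow arrivals and
    # Pareto (heavy-tailed) flow sizes. Parameters are made up.
    import numpy as np

    rng = np.random.default_rng(0)
    mean_interarrival = 0.1  # seconds between flow arrivals (assumed)
    pareto_shape = 1.2       # shape > 1 gives a finite mean, fat tail
    mean_size = 30           # target mean flow size in packets (assumed)

    arrivals = np.cumsum(rng.exponential(mean_interarrival, size=1000))
    # numpy's pareto() is the Lomax form; shift and scale it so the
    # classical Pareto mean comes out near mean_size.
    scale = mean_size * (pareto_shape - 1) / pareto_shape
    sizes = (rng.pareto(pareto_shape, size=1000) + 1) * scale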
Reading this paper brought a meta-review issue to mind. The work is
inspired by control theory, which is not a CS specialty, and the circle
of experts on that subject might have little to no overlap with the
networking community. Some of the paper's statements refer to
control theory for analysis, but I wonder how many people actually
understood it and its possibly novel use in this context. As more
research is inspired by other domains, how can CS ensure the validity of
new work?
On one last note, this paper gave me a headache because the font size
was too small, it was wordy, and the figures were tiny. Perhaps it tried
to cram too much into a limited amount of space, but it was much less of
a pleasure to read than the previous, older papers. If we are to truly
make progress, we should be improving upon the past.