From: Kevin Wampler (wampler@cs.washington.edu)
Date: Mon Oct 18 2004 - 02:32:04 PDT
(sorry this is a bit long, I guess I got carried away)
An outline of the congestion control mechanisms for TCP is given in the
paper "Congestion Avoidance and Control". The paper explains the reasons
for the various methods which TCP employs to help prevent critical network
congestion, both from a practical and a theoretical perspective.
The actual mechanisms by which TCP works to avoid congestion are
relatively simple and well thought out, but (particularly after reading
the corresponding chapters in the textbook) are somewhat old news. What I
found particularly interesting in the article was the rather detailed
theoretical treatment of these mechanisms. Many of the actual details are
rightly glossed over (such as the proof that exponential backoff is
required), but enough of a trace of the concepts remains to be
interesting, particularly so because it strikes me as uncommon to see
arguments of a theoretical nature (as opposed to purely heuristic
justifications) in a networking paper.
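To keep the later discussion grounded, here is a rough sketch in Python of
the kind of mechanisms the paper argues for: slow start, additive increase
and multiplicative decrease of the congestion window, and exponential
backoff of the retransmit timer. The structure and constants are
illustrative only (mine, not the paper's or any real TCP's).

class TcpSenderSketch:
    def __init__(self):
        self.cwnd = 1.0        # congestion window, in segments
        self.ssthresh = 64.0   # slow-start threshold, in segments
        self.rto = 1.0         # retransmit timeout, in seconds

    def on_ack(self):
        if self.cwnd < self.ssthresh:
            self.cwnd += 1.0               # slow start: doubles roughly once per RTT
        else:
            self.cwnd += 1.0 / self.cwnd   # congestion avoidance: ~1 segment per RTT
        self.rto = 1.0                     # fresh RTT sample; drop any backoff

    def on_timeout(self):
        self.ssthresh = max(self.cwnd / 2.0, 2.0)  # multiplicative decrease
        self.cwnd = 1.0                            # restart with slow start
        self.rto = min(self.rto * 2.0, 64.0)       # exponential backoff of the timer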
Seeing such a theoretical treatment leads to natural questions about the
assumptions implicit in the model and about the tightness of the
algorithms derived. Since it is difficult to say anything about the latter
without giving a full proof of the optimality of a solution, I will focus
on the former.
I am particularly drawn to the snippet of the paper on page 318 which
mentions that exponential backoff can be derived from results in linear
system theory. Though I cannot claim to be at all familiar with linear
system theory, it seems to be a very interesting way of viewing a network.
Though it is not mentioned in the paper, I would imagine that the
exponential backoff derivation would still apply, at least locally, even
if the network could not be well approximated by a linear system (since
the behavior near a fixed point of a nonlinear system can after all be
described by a linear system), and probably globally provided certain
simple constraints on the nature of the nonlinear system are met.
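To make the "locally linear" point a bit more concrete, here is a toy
numerical check, entirely my own construction and nothing from the paper:
a made-up nonlinear model of queue occupancy, its fixed point, and the
derivative of the dynamics there, which is all the local linear
approximation consists of.

# Toy model of queue occupancy x under a fixed offered load:
#     dx/dt = f(x) = load - service_rate * x / (1 + x)
# (a made-up saturating service curve, purely for illustration).
load, service_rate = 0.5, 1.0

def f(x):
    return load - service_rate * x / (1.0 + x)

# Locate the fixed point f(x*) = 0 by bisection (f is decreasing in x).
lo, hi = 0.0, 100.0
for _ in range(200):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if f(mid) > 0 else (lo, mid)
x_star = 0.5 * (lo + hi)

# Estimate f'(x*) numerically.  A negative value means the fixed point is
# locally stable: small perturbations decay just as in the stable linear
# system d(delta_x)/dt = f'(x*) * delta_x.
eps = 1e-6
slope = (f(x_star + eps) - f(x_star - eps)) / (2 * eps)
print(f"fixed point x* = {x_star:.4f}, f'(x*) = {slope:.4f}")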
Possibly in the linear case, and most probably in the nonlinear case,
exponential backoff may be overkill for handling the problem. This seems
inevitable given that linear stability theory cannot fully account for the
stability of nonlinear systems. Unfortunately, as far as I am aware, we
currently lack a full theory of the stability of nonlinear systems, and
will likely remain without one for at least some time, if the choice of
the Navier-Stokes equations as a Millennium Problem is any indication.
This does not, however, mean that no progress can be made on the problem
in the meantime.
There is even the possibility of doing more than simply finding more
efficient ways of keeping a nonlinear network system under control. More
to the point, it may be possible to actually use the chaotic behavior of a
network to our advantage. As a preliminary hint that this may be
possible, consider two clients which can connect to each other through a
variety of independent networks, and that routing is automatically
switched between these networks as necessary. If the individual networks
behave as linear systems, then global conditions leading to the collapse
of one network are likely to lead to the collapse of all networks.
If these networks behave as nonlinear systems, the situation may be
different. Since nonlinear systems are subject to chaotic behavior
(something linear systems are not), let us imagine that each of the
networks is behaving in a chaotic manner which is not likely to lead to
critical congestion under normal conditions. In this case, it seems
possible to me that the onset of conditions which would lead to the
collapse of one system may not be as likely to lead to the collapse of the
others, because a sort of asynchrony is induced by the chaotic behavior
of the individual networks. Since flow would then eventually be routed
through these other networks, the congested network would eventually
recuperate.
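As a crude illustration of the kind of effect I have in mind (a toy of my
own, not a claim about real networks): drive two independent "networks"
with chaotic logistic-map load and count how often at least one of them is
congested versus how often both are congested at the same moment.

import random

# Two independent "networks" whose offered load follows a chaotic logistic
# map.  Because the trajectories desynchronize, the moments when one
# network exceeds a (made-up) congestion threshold rarely coincide with
# congestion in the other, so traffic could in principle be shifted to the
# less loaded network when needed.
r = 3.9            # logistic-map parameter in the chaotic regime
threshold = 0.8    # illustrative "congestion" load level
steps = 10000

random.seed(1)
x = random.random()    # load on network A
y = random.random()    # load on network B, independent initial condition

both = either = 0
for _ in range(steps):
    x = r * x * (1 - x)
    y = r * y * (1 - y)
    a_congested, b_congested = x > threshold, y > threshold
    if a_congested and b_congested:
        both += 1
    if a_congested or b_congested:
        either += 1

print(f"steps with at least one network congested: {either}")
print(f"steps with both networks congested at once: {both}")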
Thus, while the survivability of individual networks may be adversely
affected by chaotic behavior, it is possible that the survivability of the
overall internet would actually be enhanced by this very same chaos. This
can be seen as the networking analog of corresponding results in ecology.
In particular, Allen et al., in their paper "Chaos reduces species
extinction by amplifying local population noise" (Nature, 1993), have shown
that
chaotic behavior in populations loosely coupled by migration can help to
prevent the extinction of the metapopulation (and hence chaotic behavior
is actually *not* necessarily evolutionarily selected against).
This indicates that there may be some benefit to having networks behave
chaotically, and it seems not too unlikely that this benefit extends
beyond the preceding simple illustration. The challenge, then, is to find
ways of engineering network and internetwork architectures and/or
communication protocols to better take advantage of chaotic behavior. It
is even possible that some simple constraints upon the way in which
networks are organized, behave, and interact could be sufficient to give
highly survivable, on-average good performance without the need for
explicit resource allocation in the routers or for congestion control in
the end-to-end protocol (so, for example, a catastrophic failure would be
prevented by containing the problems to a single network and then slowly
routing more traffic to other networks; the primary force for congestion
control would thus be the slower pace of traffic migration between
networks compared to the traffic fluctuations within a network).
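One way to picture that last point, again as a toy of my own construction
rather than anything from the paper: load inside each network fluctuates
quickly, offered traffic migrates between networks only a small fraction
per step, and that slow migration alone is what drains the congested
network.

import random

# Two-timescale sketch: fast, noisy load fluctuations inside each network,
# slow migration of offered traffic away from the more heavily loaded one.
# The slow migration, not any explicit end-to-end mechanism, is what
# relieves network A here.  All numbers are illustrative.
random.seed(2)
offered = [0.9, 0.3]     # network A starts congested, network B lightly loaded
migration_rate = 0.05    # slow timescale: fraction of traffic shifted per step

for step in range(51):
    # Fast timescale: instantaneous load is offered traffic plus a fluctuation.
    load = [o + random.uniform(-0.1, 0.1) for o in offered]
    # Slow timescale: move a small fraction of traffic toward the lighter network.
    src = 0 if load[0] > load[1] else 1
    dst = 1 - src
    delta = migration_rate * offered[src]
    offered[src] -= delta
    offered[dst] += delta
    if step % 10 == 0:
        print(f"step {step:2d}: offered = {offered[0]:.3f}, {offered[1]:.3f}")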