From: Masaharu Kobashi (mkbsh@cs.washington.edu)
Date: Wed Oct 20 2004 - 03:35:34 PDT
1. Main result of the paper
The paper proposes a new Internet protocol, XCP, to cope with the
expected future trend toward links of high bandwidth and high latency.
XCP has advantages over TCP in congestion control as well as in many
other respects, such as the decoupling of utilization control from
fairness control, the ability to distinguish error losses from
congestion losses, and the ability to detect misbehaving sources.
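For concreteness, the decoupling works roughly as follows, as I
understand it from the paper: an efficiency controller computes, once
per average RTT d, a single aggregate feedback value
phi = alpha * d * S - beta * Q from the spare bandwidth S and the
persistent queue Q, and a separate fairness controller then divides phi
among packets AIMD-style. Below is a minimal C sketch using the paper's
notation and its stability constants alpha = 0.4 and beta = 0.226; it
is illustrative only, not the authors' implementation.

    /* Efficiency controller: run once per control interval (the
     * average RTT d).  It decides only the aggregate number of bytes
     * phi by which total traffic should grow or shrink; it says
     * nothing about which flows get what. */
    #define ALPHA 0.4    /* gain on spare bandwidth (paper's value)  */
    #define BETA  0.226  /* gain on persistent queue (paper's value) */

    double aggregate_feedback(double d,        /* avg RTT (s)        */
                              double capacity, /* capacity (bytes/s) */
                              double input,    /* arrivals (bytes/s) */
                              double queue)    /* queue (bytes)      */
    {
        double spare = capacity - input;          /* S: spare bw */
        return ALPHA * d * spare - BETA * queue;  /* phi         */
    }

    /* Fairness controller: divides phi AIMD-style.  Positive
     * feedback is split equally (additive increase); negative
     * feedback is taken from each flow in proportion to its rate
     * (multiplicative decrease), which drives convergence to
     * fairness.  The real router does this per packet using the
     * H_cwnd and H_rtt header fields; this toy version works on
     * per-flow rates. */
    void apportion(double phi, const double rate[],
                   double delta[], int n)
    {
        double total = 0.0;
        for (int i = 0; i < n; i++)
            total += rate[i];

        for (int i = 0; i < n; i++) {
            if (phi >= 0.0)
                delta[i] = phi / n;  /* equal absolute increase */
            else
                delta[i] = (total > 0.0) ? phi * rate[i] / total : 0.0;
        }
    }

The point of the split is that the efficiency loop can react
aggressively to spare bandwidth while fairness among flows still comes
from the familiar AIMD behavior.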
2. Strengths in this paper
The proposed protocol sounds like almost a panacea for the Internet.
According to the authors, XCP has a remarkable number of favorable
capabilities, such as superb congestion control with almost no packet
drops, in addition to the properties referred to in the section above.
The paper's strength is not just the empirical superiority of the
proposed protocol; the work also rests on a firm theoretical argument,
complete with rigorous proofs.
Another strength of the design is a practically vital characteristic:
the protocol lends itself to gradual deployment. However good a newly
invented protocol might be, if it required simultaneous deployment
throughout the entire Internet, its value would be very limited.
XCP's TCP-friendly nature and gradual deployability are great
properties for a truly viable future protocol of the Internet.
3. Limitations and suggested improvements
Although the design of the protocol is precisely calculated, it has a
weakness: it relies on explicit information exchange through the packet
header among sources, destinations, and routers. While such a system
can achieve the fine-grained control explained in the paper, it is more
vulnerable to malicious or malfunctioning sources and routers, since
the decision about the state of congestion depends largely on
information explicitly written in the packet header, which can be
forged or manipulated easily. Conventional congestion control, on the
other hand, which depends largely on observation of the actual flows
and congestion, is less vulnerable to malicious hosts and routers.
Overall, the proposed precision mechanism is great, but this remains a
fragile point.
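To make this attack surface concrete, below is a sketch of the
congestion header the paper describes. The field names follow the
paper; the exact types and widths are my assumption, not a wire-format
specification.

    /* Sketch of the XCP congestion header carried in every packet.
     * Everything a router uses to grant feedback comes from these
     * sender-written fields, which is precisely the forgery risk. */
    struct xcp_congestion_header {
        double H_cwnd;     /* sender's current congestion window;
                              routers trust this when translating
                              per-flow into per-packet feedback     */
        double H_rtt;      /* sender's RTT estimate; also trusted   */
        double H_feedback; /* set by the sender to its desired
                              window change; routers along the path
                              may only reduce it; the receiver
                              echoes it back to the sender          */
    };

A source that misreports H_cwnd or H_rtt, or simply ignores a negative
H_feedback, can claim more than its share of bandwidth, so the
protocol's correctness ultimately rests on policing these fields.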
4. Relevance today and in the future
It is a great proposal for coping with an expected near-future problem
of the Internet. I wonder how it has been received by the Internet
community so far. If it is really as good as the paper claims, many
parties should have seriously considered deploying it. If not, perhaps
the shortcomings I pointed out above are one reason.