Review of Supporting Real-Time Applications in an Integrated Services Packet Network: Architecture and Mechanism

From: Alan L. Liu (aliu@cs.washington.edu)
Date: Wed Oct 27 2004 - 01:26:43 PDT

    The paper presents a way for routers to give quality-of-service
    guarantees to individual flows. Applications with real-time
    requirements need such guarantees.

    The paper does a good job of describing the axes along which real-time
    flows can be characterized. A virtual circuit approach might be good
    for real-time flows where loss is intolerable, but it is overkill for
    flows where some loss is acceptable. Along another axis, rigid
    applications demand a fixed bound on delay, while adaptive applications
    can adjust to network conditions. With these distinctions in mind, the
    authors develop service commitments (guaranteed service for rigid,
    loss-intolerant flows; predicted service for adaptive ones) to handle
    these cases.
        One problem I had while reading the paper is that the authors spend
    an inordinate amount of time setting up the problem, which by the time
    I finished reading left me near mental exhaustion and unable to
    completely grasp what should be the meat of the paper, the architecture
    and mechanism. Especially strange is that they spend so much time
    discussing weighted fair queuing when it isn't even their own work.
    Wouldn't it be better to just cut to the chase and explain what WFQ
    gives us in terms of service? Spending time on why FIFO is bad is
    beating a dead horse. It would have been better to go straight to
    FIFO+, which is such a simple tweak on FIFO that it merits only one
    paragraph on how it works.
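
    To make that concrete, here is roughly how I understand FIFO+ to work,
    as a minimal Python sketch. The field names, the EWMA delay estimator,
    and the priority-queue implementation are my own illustration, not the
    paper's exact mechanism:

        import heapq

        class FifoPlusQueue:
            # Each packet accumulates an 'offset' recording how far behind
            # (positive) or ahead (negative) of the class-average per-hop
            # delay it has run so far. A hop serves packets in order of
            # (arrival_time + offset), so packets that were lucky upstream
            # wait a little, and unlucky ones jump ahead.
            def __init__(self, alpha=0.1):
                self.heap = []        # entries: (adjusted_arrival, seq, packet)
                self.seq = 0          # tie-breaker for stable heap ordering
                self.avg_delay = 0.0  # running estimate of this hop's delay
                self.alpha = alpha    # EWMA weight for the delay estimate

            def enqueue(self, packet, now):
                packet['arrival'] = now
                adjusted = now + packet.get('offset', 0.0)
                heapq.heappush(self.heap, (adjusted, self.seq, packet))
                self.seq += 1

            def dequeue(self, now):
                if not self.heap:
                    return None
                _, _, packet = heapq.heappop(self.heap)
                delay = now - packet['arrival']
                # Track the hop's average delay and charge the packet the
                # difference, so downstream hops can compensate.
                self.avg_delay += self.alpha * (delay - self.avg_delay)
                packet['offset'] = packet.get('offset', 0.0) + (delay - self.avg_delay)
                return packet
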
        A strange design choice the paper makes is that conformance checking
    for isolation (making sure bad apples don't ruin the service for
    well-behaved clients) is done only at the first gateway. Given the
    distributed nature of Internet administration, is it reasonable to
    assume all entry routers are going to do a good job of this? Not only
    does that seem unreasonable, it also fails to provide an adoption path
    for networks to transition from a world with no QoS to the one
    envisioned in the paper.
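
    For reference, the kind of conformance check an entry gateway would run
    is typically a token-bucket policer. This is a generic sketch with
    assumed parameters, not necessarily the paper's exact traffic filter:

        class TokenBucket:
            # Tokens accumulate at 'rate' bytes/sec up to 'depth' bytes;
            # a packet conforms if enough tokens are available to cover it.
            def __init__(self, rate, depth):
                self.rate = rate
                self.depth = depth
                self.tokens = depth
                self.last = 0.0

            def conforms(self, size, now):
                # Refill tokens for elapsed time, capped at bucket depth.
                self.tokens = min(self.depth,
                                  self.tokens + (now - self.last) * self.rate)
                self.last = now
                if size <= self.tokens:
                    self.tokens -= size
                    return True   # forward with the flow's service class
                return False      # drop, or demote to best-effort

    A flow declaring a 1 Mbps average rate with 1500-byte bursts would be
    policed as TokenBucket(rate=125000, depth=1500); every gateway past the
    first simply trusts this verdict, which is exactly what worries me.
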

    The conclusion suggests that it may be more cost-effective to simply
    over-provision bandwidth rather than to make routers and applications
    alike more complicated. Certainly this can only go so far, and might
    not be possible for the Internet core once we reach its current
    capacity limits (which I hear may be quite high, thanks to the massive
    amounts of money poured into infrastructure during the dot-com era).
    However, at that point why should we stick with statistical
    multiplexing, which gives us all these problems? Don't backbones
    already use contention-free multiplexing approaches, since they always
    have bits to push around?
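
    A back-of-the-envelope calculation (my numbers, purely illustrative)
    shows why: the statistical multiplexing gain comes from burstiness, and
    it evaporates on links that are always full:

        peak_rate = 10.0   # Mbps per flow at its burst peak (assumed)
        avg_rate = 1.0     # Mbps per flow on average (assumed)
        n_flows = 100

        circuit_style = n_flows * peak_rate       # reserve for every peak
        statistical = n_flows * avg_rate * 1.5    # shared link + margin
        print(circuit_style, statistical)         # 1000.0 vs. 150.0 Mbps

        # When flows send at peak all the time (avg_rate == peak_rate),
        # the two allocations coincide and contention-free multiplexing
        # costs nothing extra -- the backbone scenario above.
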
        Another issue is whether making the routers smarter is the way to
    go. There are already other, end-to-end approaches, such as erasure
    codes, that provide a semblance of soft real-time service. For hard
    real-time traffic, is it really necessary to go through these gyrations
    rather than simply use a virtual circuit system?
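
    The erasure-code idea is simple enough to sketch: send one XOR parity
    packet per group, and the receiver can reconstruct any single lost
    packet without waiting for a retransmission. (Real schemes such as
    Reed-Solomon generalize this to multiple losses; this toy version is
    mine, not from the paper.)

        def xor_parity(packets):
            # One parity packet protects a group of equal-length packets.
            parity = bytes(len(packets[0]))
            for p in packets:
                parity = bytes(a ^ b for a, b in zip(parity, p))
            return parity

        def recover_missing(received, parity):
            # XOR the survivors into the parity to rebuild the lost packet.
            missing = parity
            for p in received:
                missing = bytes(a ^ b for a, b in zip(missing, p))
            return missing

    Because recovery needs no round trip, the deadline cost is one group's
    worth of buffering, which is why this counts as soft real-time.
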

