thread management implications.

From: Ankur Rawat (Excell Data Corporation) (a-arawat_at_microsoft.com)
Date: Mon Jan 26 2004 - 15:21:57 PST


    This paper presents different approaches to organizing the data
    structures and locks used in thread management, and examines their
    impact on latency and throughput. It also presents a back-off
    algorithm that prevents spin-waiting processors from degrading the
    performance of busy processors.

    The authors built a thread package and applied five alternative
    techniques to it, measuring the effect of each on latency and
    throughput. The paper then presents analytical results for the
    alternative approaches.

    The approaches are:

    1. Central data structures and a single lock: this limits throughput
    because every thread operation is serialized on the one lock.
    2. Central data structures and multiple locks: this improves
    throughput, but latency suffers because a thread operation may now
    have to acquire more than one lock.
    3. Local free lists: this avoids the tradeoff between latency and
    throughput, since a processor can usually allocate from its own list
    without locking (see the sketch after this list). It is not practical
    for stacks.
    4. Idle queue: this attacks the problem from a different direction.
    The tradeoff is reduced latency when processors are idle versus
    increased latency when processors are busy.
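
    To make the contrast concrete, here is a minimal C sketch (my own
    illustration, not code from the paper; the names tcb_alloc_central,
    tcb_alloc_local and NPROC are made up) of alternative 1, a central
    free list of thread control blocks guarded by a single spin lock,
    next to alternative 3, a per-processor local free list whose common
    case needs no lock at all:

        #include <stdatomic.h>
        #include <stdlib.h>

        typedef struct tcb {                /* thread control block */
            struct tcb *next;
            /* stack pointer, saved registers, etc. would live here */
        } tcb_t;

        /* Alternative 1: one central list, one lock.  Every allocation
         * serializes on central_lock, which limits throughput. */
        static tcb_t *central_free;
        static atomic_flag central_lock = ATOMIC_FLAG_INIT;

        static tcb_t *tcb_alloc_central(void) {
            while (atomic_flag_test_and_set_explicit(&central_lock,
                                                     memory_order_acquire))
                ;                           /* spin-wait on the single lock */
            tcb_t *t = central_free;
            if (t)
                central_free = t->next;
            atomic_flag_clear_explicit(&central_lock, memory_order_release);
            return t ? t : malloc(sizeof *t);   /* fall back to the heap */
        }

        /* Alternative 3: per-processor free lists.  Only the owning
         * processor touches its list, so the common case takes no lock;
         * the central list is used only when the local list runs dry. */
        #define NPROC 8
        static tcb_t *local_free[NPROC];

        static tcb_t *tcb_alloc_local(int cpu) {
            tcb_t *t = local_free[cpu];
            if (t) {
                local_free[cpu] = t->next;
                return t;
            }
            return tcb_alloc_central();
        }

        static void tcb_free_local(int cpu, tcb_t *t) {
            t->next = local_free[cpu];
            local_free[cpu] = t;
        }

    The correctness of the local variant rests on each list being touched
    only by its own processor; if a control block can be freed from a
    different processor, the free path has to take a lock or hand the
    block back to the central list.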


    The paper also describes an Ethernet-style backoff algorithm.
    Essentially, a processor that wants to acquire a lock first estimates
    its probability of success (based on its past failures to acquire the
    lock) and only then decides whether to make an attempt. This reduces
    contention for the lock while processors are spin-waiting, so the
    busy processor holding the lock is slowed down less.
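
    One common way to realize this idea is a test-and-set spin lock with
    exponential backoff, sketched below in C. This is only an
    illustration of the general technique, not the paper's exact
    algorithm (which adjusts each processor's attempt rate from its
    history of failed acquisitions), and the delay constants are
    arbitrary:

        #include <stdatomic.h>

        typedef struct {
            atomic_flag held;               /* initialize with ATOMIC_FLAG_INIT */
        } spinlock_t;

        static void spin_lock(spinlock_t *l) {
            unsigned delay = 1;             /* backoff, in idle-loop iterations */
            while (atomic_flag_test_and_set_explicit(&l->held,
                                                     memory_order_acquire)) {
                /* Failed attempt: wait before trying again, so spin-waiting
                 * processors put less load on the lock word and the busy
                 * processor holding the lock is slowed down less. */
                for (volatile unsigned i = 0; i < delay; i++)
                    ;                       /* idle spin, no shared-memory traffic */
                if (delay < 1024)
                    delay *= 2;             /* back off further after each failure */
            }
        }

        static void spin_unlock(spinlock_t *l) {
            atomic_flag_clear_explicit(&l->held, memory_order_release);
        }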


