Jim Shearer's Review of "Lottery Scheduling"

From: Shearer, James E (james.e.shearer_at_boeing.com)
Date: Wed Jan 28 2004 - 14:15:04 PST


    "Lottery Scheduling: Flexible Proportional-Share Resource Management"
    presents a probability-based mechanism for sharing resources among
    prioritized users that avoids many of the complexities and pitfalls of
    deterministic prioritization mechanisms. Critical concepts include:

    > The ability to dynamically adjust the priority of a task based on
    its current role.

    > The ability to control service rates.

    > Immunity to starvation of low-priority users.

    > Modular introduction of new users and resources into the
    prioritization scheme.

    > Priority transfer between, for example, clients and servers.


    Lottery Scheduling replaces the traditional hard prioritization of users
    with the idea of allocating "lottery tickets" in numbers proportional to
    the priority of the different users. The total number of lottery
    tickets is unimportant; only the allocation ratios matter. The actual
    "next" user of a resource is selected by drawing a ticket at random from
    the pool of allocated tickets. The result is that the next user is not
    necessarily the one that has been waiting the longest (unlike first-in
    first-out) or the one with the highest priority (which starves lower
    priority users), but is simply more likely to be a user holding more
    tickets. This sort of weighted probabilistic servicing has been used
    successfully in other resource allocation problems (see, for example,
    "Simple Scaleable Network Algorithms" presented by Balaji Prabhakar at
    one of last quarter's colloquia), so it's not surprising that it can be
    made to work well here.
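
    To make the mechanism concrete, here is a minimal sketch of the basic
    draw (my own illustration in C, not code from the paper; the client
    names and ticket counts are made up):

        #include <stdio.h>
        #include <stdlib.h>
        #include <time.h>

        struct client {
            const char *name;
            int tickets;            /* relative share; only ratios matter */
        };

        /* Draw a winning ticket number uniformly at random, then walk the
         * list until the running ticket sum passes it. */
        static struct client *hold_lottery(struct client *clients, int n)
        {
            int total = 0;
            for (int i = 0; i < n; i++)
                total += clients[i].tickets;

            int winner = rand() % total;    /* the winning ticket */
            int sum = 0;
            for (int i = 0; i < n; i++) {
                sum += clients[i].tickets;
                if (winner < sum)
                    return &clients[i];
            }
            return &clients[n - 1];         /* not reached if total > 0 */
        }

        int main(void)
        {
            struct client clients[] = {
                { "editor",   75 },         /* should win about three    */
                { "compiler", 25 },         /* times as often as this    */
            };
            int wins[2] = { 0, 0 };

            srand((unsigned)time(NULL));
            for (int i = 0; i < 100000; i++)
                wins[hold_lottery(clients, 2) - clients]++;

            printf("editor: %d  compiler: %d\n", wins[0], wins[1]);
            return 0;
        }

    Over many draws the win counts settle near the 3:1 ticket ratio, which
    is exactly the proportional-share behavior the paper is after.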


    One effect of treating priority as a collection of lottery tickets owned
    by the user is that the tickets themselves can be used as a sort of
    temporary "capability" for accessing the resource and as such can be
    passed from one user to another. The paper explores several variations
    on this theme, including: (1) all of the users backed up behind a user
    holding a mutex can grant their tickets to that user for the duration of
    the mutex hold, getting that user out of the way sooner; (2) a client
    can give its tickets to a server for the duration of a call to expedite
    the server; (3) a client that releases a resource early can be
    "compensated" with extra tickets, allowing short-duration users to
    access a resource more often. The paper also discusses denominating
    tickets in different "currencies" to isolate loads on one resource from
    loads on another.
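
    As an illustration of the transfer idea, here is a rough C sketch
    (mine, not the paper's Mach code; the structure and field names are
    invented) of a client lending its tickets to a server for the duration
    of a call and reclaiming them on the reply:

        #include <stdio.h>

        struct task {
            const char *name;
            int tickets;            /* current effective ticket count */
        };

        /* Client blocks behind a server or mutex holder: lend it every
         * ticket the client currently holds. Returns the size of the
         * loan so it can be paid back later. */
        static int transfer_tickets(struct task *from, struct task *to)
        {
            int loan = from->tickets;
            to->tickets += loan;
            from->tickets = 0;      /* a blocked task should never win */
            return loan;
        }

        /* Server replies (or the mutex is released): undo the transfer. */
        static void return_tickets(struct task *from, struct task *to,
                                   int loan)
        {
            to->tickets -= loan;
            from->tickets += loan;
        }

        int main(void)
        {
            struct task client = { "client", 300 };
            struct task server = { "server", 100 };

            int loan = transfer_tickets(&client, &server);
            printf("during call: server=%d client=%d\n",
                   server.tickets, client.tickets);

            return_tickets(&client, &server, loan);
            printf("after call:  server=%d client=%d\n",
                   server.tickets, client.tickets);
            return 0;
        }

    During the call the server runs with the combined funding, so a
    heavily funded client automatically expedites the work being done on
    its behalf.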


    The paper also briefly discusses an inverse (or "reverse") lottery as a
    means of freeing up one of n equivalent resources. The paper uses the
    example of a physical memory page, but I can see how it could also be
    applied to kernel threads allocated to processes under an m-to-n
    user-thread mapping scheme.
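
    A rough sketch of how such a lottery might look (my own guess at the
    mechanics, not the paper's formula: each holder loses with probability
    proportional to the inverse of its ticket count; the names are
    illustrative):

        #include <stdio.h>
        #include <stdlib.h>
        #include <time.h>

        struct holder {
            const char *name;
            int tickets;
        };

        /* Pick a victim with probability proportional to 1/tickets, so
         * poorly funded holders are the most likely to give up the page
         * (or kernel thread). */
        static int inverse_lottery(struct holder *holders, int n)
        {
            double total = 0.0;
            for (int i = 0; i < n; i++)
                total += 1.0 / holders[i].tickets;

            double r = ((double)rand() / RAND_MAX) * total;
            double sum = 0.0;
            for (int i = 0; i < n; i++) {
                sum += 1.0 / holders[i].tickets;
                if (r < sum)
                    return i;
            }
            return n - 1;
        }

        int main(void)
        {
            struct holder holders[] = {
                { "interactive", 400 },
                { "batch",       100 },   /* four times as likely to lose */
            };

            srand((unsigned)time(NULL));
            printf("victim: %s\n",
                   holders[inverse_lottery(holders, 2)].name);
            return 0;
        }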


    The authors found that a lottery scheduling scheme can be implemented
    for about the same resource cost as a conventional priority scheduling
    scheme, but I suspect that if its full capabilities were exercised it
    would be much cheaper than a deterministic mechanism of equivalent
    power.


