Review: Lottery Scheduling

From: Raz Mathias (razvanma_at_exchange.microsoft.com)
Date: Wed Jan 28 2004 - 16:24:12 PST

    This paper introduced an interesting new mechanism for scheduling
    resources using a lottery metaphor.


    The paper began by claiming that our current use of priorities for
    resource sharing is inadequate because setting priority values is often
    arbitrary and "at best a black art." The mechanism introduced by the
    paper involves the idea of a lottery ticket. Each client competing for a
    resource holds a number of these tickets. When the resource is doled out
    (e.g., when the processor is scheduled), a lottery is held and the owner
    of the winning ticket gets to use the resource for a particular length
    of time. The paper then illustrated how, in a processor scheduling
    system, a client's attainable throughput is proportional to the number
    of tickets it holds, while its expected response time is inversely
    proportional to that number.

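    As a concrete illustration, here is a minimal sketch (my own, in Python,
    not code from the paper) of a single lottery drawing; the client names
    and ticket counts are made up.

        import random

        def hold_lottery(clients):
            """clients: dict mapping client name -> ticket count."""
            total = sum(clients.values())
            winning_ticket = random.randrange(total)   # draw one ticket uniformly
            counted = 0
            for name, tickets in clients.items():
                counted += tickets
                if winning_ticket < counted:
                    return name                        # winner runs for one quantum

        # A client holding 75 of the 100 tickets wins about 75% of the
        # drawings, so it receives roughly 75% of the quanta in the long run.
        print(hold_lottery({"A": 75, "B": 25}))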

    A few useful extensions to the core idea are also introduced. The first
    is the ability of a process to temporarily transfer its tickets to
    another process; the example application given is an RPC call, where the
    client lends its tickets to the server working on its behalf. Next,
    ticket inflation is given as a mechanism for a client to temporarily
    boost its share of a given resource. I agree with the authors of this
    paper that this idea may be a dangerous one in terms of system
    protection. The kernel should monitor the number of shares requested for
    a resource and the amount of time for which the boost remains in effect.
    Another extension is the concept of ticket currencies, an idea that
    involves converting the underlying system tickets into higher-level
    ticket currencies for the purpose of creating a hierarchical
    organization of resource sharing. Finally, the list of extensions
    includes "compensation tickets," a mechanism for ensuring that resource
    allocation remains fair when a client gives up the resource before using
    its full quantum.

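    To make the currency idea concrete, here is a hedged sketch (a
    simplification of the paper's scheme, with invented numbers) of how a
    ticket issued in a higher-level currency could be valued in base
    tickets: each currency is funded with some amount of base tickets, and
    every ticket it issues is worth a proportional share of that funding.

        def base_value(ticket_amount, currency_funding, currency_issued):
            """Value of a ticket in base units.

            currency_funding: base tickets backing this currency
            currency_issued:  total tickets the currency has handed out
            """
            return ticket_amount / currency_issued * currency_funding

        # Example: a "user" currency funded with 1000 base tickets has issued
        # 200 tickets; a task holding 50 of them effectively competes with
        # 50/200 * 1000 = 250 base tickets in the global lottery.
        print(base_value(50, 1000, 200))   # -> 250.0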

    The implementation section gave various alternatives for implementing
    the resource allocation mechanism, but I think the authors missed the
    simplest and fastest possible approach (albeit the one that would
    consume the largest amount of memory). The idea would simply be to
    create an array with one entry per ticket and then to index into the
    array using a random number to find the lottery winner; each lookup
    would then run in O(1) time.

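    A minimal sketch of that proposal (mine, with made-up clients), just to
    show the shape of the data structure:

        import random

        def build_ticket_array(clients):
            """One array slot per ticket; memory grows with the total ticket count."""
            slots = []
            for name, tickets in clients.items():
                slots.extend([name] * tickets)
            return slots

        def draw_winner(slots):
            return slots[random.randrange(len(slots))]   # O(1) per drawing

        slots = build_ticket_array({"A": 75, "B": 25})
        print(draw_winner(slots))

    The trade-off, beyond memory, is that any ticket transfer or inflation
    forces the array to be rebuilt, which is presumably why the paper favors
    its list- and tree-based structures.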

    The novel applications made possible by this paper were, I think, the
    most interesting part. Being able to control the rate of computation
    dynamically is not easily achievable in an intuitive way in today's
    systems. This paper also makes quality-of-service applications possible
    in a very dynamic, organized way. It seems particularly useful for Grid
    computing, for example, where a corporation may want to farm out 1% of
    the processing power of every machine it owns to company A (which pays 1
    unit of currency for the service) and 5% of the CPU power to company B
    (which pays 5 units of currency). In this respect, I believe that Grid
    computing will not truly take off until algorithms that can make such
    resource guarantees to customers become prevalent in modern operating
    systems.

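    A small, invented illustration of that scenario, assuming 100 currency
    units correspond to 100% of a machine and each machine is backed by 1000
    base tickets:

        def split_machine(total_base_tickets, payments, owner="owner"):
            """payments: dict of customer -> units paid (1 unit buys 1% here)."""
            shares = {cust: units * total_base_tickets // 100
                      for cust, units in payments.items()}
            shares[owner] = total_base_tickets - sum(shares.values())
            return shares

        # Company A pays 1 unit and company B pays 5, so they receive 1% and
        # 5% of each machine's tickets; the owner keeps the remaining 94%.
        print(split_machine(1000, {"A": 1, "B": 5}))
        # -> {'A': 10, 'B': 50, 'owner': 940}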

