Outline for 1/9/98

  • Last time: Talked about interrupts, user/kernel modes, system calls, architectural support for OS, introduced processes.

  • Administrative Details:

  • TODO: Programming Project 1 is available on the web, due Jan. 23rd (only 2 weeks)

  • Photo opportunities (another try)

  • Lectures on the web (check before class)

  • Objective: more on process mechanisms; define threads and critical sections.


    Process Abstraction

  • Unit of scheduling

  • One (or more*) sequential threads of control

  • program counter, register values, call stack

  • Unit of resource allocation

  • address space (code and data), open files

  • sometimes called tasks or jobs

  • Operations on Processes: fork (clone-style creation), wait (parent waits for a child), exit (self-termination), signal, kill; see the sketch below.
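
    A minimal sketch of several of these operations using the POSIX calls of the same names (here wait is shown as waitpid; signal delivery via kill() is not shown):

      #include <stdio.h>
      #include <stdlib.h>
      #include <sys/types.h>
      #include <sys/wait.h>
      #include <unistd.h>

      int main(void) {
          pid_t pid = fork();               /* clone-style creation: child is a copy of the parent */
          if (pid == 0) {
              printf("child %d running\n", (int) getpid());
              exit(7);                      /* self-termination with a status code */
          }
          int status;
          waitpid(pid, &status, 0);         /* parent waits for its child */
          if (WIFEXITED(status))
              printf("parent: child exited with %d\n", WEXITSTATUS(status));
          return 0;
      }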


    Why Use Processes?

  • To capture naturally concurrent activities within the structure of the programmed system.

  • To gain speedup by overlapping activities or exploiting parallel hardware.


    Process Mechanisms

  • PCB data structure in kernel memory represents a process (allocated on process creation, deallocated on termination); a simplified sketch follows this list.

  • PCBs reside on various state queues (including a different queue for each "cause" of waiting) reflecting the process's state.

  • As a process executes, the OS moves its PCB from queue to queue (e.g. from the "waiting on I/O" queue to the "ready to run" queue).
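
    A simplified sketch of what a PCB and its state queues might look like (field names here are illustrative; a real PCB also holds saved registers, address-space maps, open-file tables, and more):

      enum proc_state { READY, RUNNING, WAITING_IO, ZOMBIE };

      struct pcb {
          int             pid;
          enum proc_state state;
          void           *saved_context;   /* PC, register values, stack pointer */
          struct pcb     *next;            /* link within the current state queue */
      };

      struct queue { struct pcb *head, *tail; };

      struct queue ready_queue;            /* processes that are ready to run */
      struct queue disk_wait_queue;        /* one queue per "cause" of waiting */

      /* On an I/O completion, the kernel unlinks the PCB from disk_wait_queue,
         sets state = READY, and appends it to ready_queue. */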


    PCBs & Queues


    Threads and Processes

  • Decouple the resource allocation aspect from the control aspect

  • Thread abstraction - defines a single sequential instruction stream (PC, stack, register values)

  • Process - the resource context serving as a "container" for one or more threads (shared address space)

  • Kernel threads - unit of scheduling (kernel-supported thread operations require system calls -> still slow)
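
    A sketch of the "container" relationship using POSIX threads (which most modern systems implement as kernel threads): both threads run in the same address space and see the same global, but each has its own stack, PC, and register values:

      #include <pthread.h>
      #include <stdio.h>

      static int shared = 0;                          /* one copy, visible to every thread */

      static void *worker(void *arg) {
          int local = 42;                             /* lives on this thread's private stack */
          shared++;                                   /* same address space as main */
          printf("worker: shared=%d local=%d\n", shared, local);
          return NULL;
      }

      int main(void) {
          pthread_t tid;
          pthread_create(&tid, NULL, worker, NULL);   /* new thread inside the same process */
          pthread_join(tid, NULL);
          printf("main: shared=%d\n", shared);        /* sees the worker's update */
          return 0;
      }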


    User-Level Threads

  • To avoid the performance penalty of kernel-supported threads, implement threads at user level, managed by a run-time system

  • Contained "within" a single kernel entity (process)

  • Invisible to the OS (the OS schedules their containing process, unaware of the threads themselves or their states), so poor scheduling decisions are possible (e.g., the whole process blocks when one thread blocks in a system call).

  • User-level thread operations can be 100x faster than kernel thread operations, but they need better integration / cooperation with the OS (a minimal sketch follows).
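
    A minimal sketch of a user-level thread package built on the POSIX ucontext interface (getcontext/makecontext/swapcontext); the ult_* names are hypothetical, and the kernel sees only the single containing process:

      #include <stdio.h>
      #include <stdlib.h>
      #include <ucontext.h>

      #define STACK_SIZE (64 * 1024)
      #define MAX_THREADS 4

      static ucontext_t contexts[MAX_THREADS];
      static int current = 0, nthreads = 1;              /* slot 0 is the main context */

      /* Create a user-level thread that will run func(); returns its slot. */
      static int ult_create(void (*func)(void)) {
          int i = nthreads++;
          getcontext(&contexts[i]);
          contexts[i].uc_stack.ss_sp = malloc(STACK_SIZE);   /* private stack */
          contexts[i].uc_stack.ss_size = STACK_SIZE;
          contexts[i].uc_link = &contexts[0];                /* resume main if func returns */
          makecontext(&contexts[i], func, 0);
          return i;
      }

      /* Voluntary context switch: yield the CPU to the next thread (round robin). */
      static void ult_yield(void) {
          int prev = current;
          current = (current + 1) % nthreads;
          swapcontext(&contexts[prev], &contexts[current]);
      }

      static void worker(void) {
          for (int i = 0; i < 3; i++) {
              printf("worker step %d\n", i);
              ult_yield();
          }
      }

      int main(void) {
          ult_create(worker);
          for (int i = 0; i < 3; i++) {
              printf("main step %d\n", i);
              ult_yield();
          }
          return 0;
      }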

    But...


    Nondeterminism

  • What unit of work can be performed without interruption? Indivisible or atomic operations.

  • Interleavings - possible execution sequences of operations drawn from all threads.

  • Race condition - final results depend on ordering and may not be "correct" (see the example below).
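
    A classic demonstration of a race condition (a sketch assuming POSIX threads): counter = counter + 1 is really a load, an add, and a store, so interleavings of the two threads can lose updates and the final result is typically less than 2*N:

      #include <pthread.h>
      #include <stdio.h>

      #define N 1000000
      static volatile long counter = 0;    /* shared; volatile only keeps it out of a
                                              register -- it does NOT make ++ atomic */

      static void *increment(void *arg) {
          for (long i = 0; i < N; i++)
              counter = counter + 1;       /* non-atomic: load, add, store */
          return NULL;
      }

      int main(void) {
          pthread_t t1, t2;
          pthread_create(&t1, NULL, increment, NULL);
          pthread_create(&t2, NULL, increment, NULL);
          pthread_join(t1, NULL);
          pthread_join(t2, NULL);
          printf("expected %d, got %ld\n", 2 * N, counter);  /* varies run to run */
          return 0;
      }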


    Reasoning about Interleavings

  • On a uniprocessor, the possible execution sequences depend on when context switches can occur

  • Voluntary context switch - the process or thread explicitly yields the CPU (e.g., by blocking on a system call it makes or by invoking a Yield operation).

  • Interrupts or exceptions - an asynchronous handler is activated, disrupting the execution flow.

  • Preemptive scheduling - a timer interrupt may cause an involuntary context switch at any point in the code.

  • On multiprocessors, the ordering of operations on shared memory locations is the important factor.


    Critical Sections

  • If a sequence of non-atomic operations must be executed as if it were atomic in order to be correct, then we need to provide a way to constrain the possible interleavings in this critical section of our code.

  • Critical sections are code sequences that contribute to "bad" race conditions.

  • Synchronization needed around such critical sections.

  • Mutual Exclusion - goal is to ensure that critical sections execute atomically w.r.t. related critical sections in other threads or processes.

  • How?


    The Critical Section Problem

    Each process follows this template:

    while (1) {
        ... other stuff ...
        enter_region();
        /* critical section */
        exit_region();
    }

    The problem is to define enter_region and exit_region to ensure mutual exclusion with some degree of fairness.
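
    One possible realization (a sketch using blocking synchronization, one of the options listed in the next section): implement enter_region/exit_region with a POSIX mutex so a thread sleeps while the critical section is busy:

      #include <pthread.h>

      static pthread_mutex_t region_lock = PTHREAD_MUTEX_INITIALIZER;

      void enter_region(void) { pthread_mutex_lock(&region_lock); }    /* block while busy */
      void exit_region(void)  { pthread_mutex_unlock(&region_lock); }  /* wake a waiter */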


    Implementation Options for Mutual Exclusion

  • Disable Interrupts

  • Busywaiting solutions - spinlocks

  • execute a tight loop if critical section is busy

  • benefits from specialized atomic (read-modify-write) instructions such as test-and-set; see the sketch at the end of this section

  • Blocking synchronization

  • sleep (enqueued on wait queue) while C.S. is busy

    Synchronization primitives (abstractions such as locks) provided by a system may be implemented with some combination of these techniques.
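
    For the busywaiting option, a minimal spinlock sketch using C11's atomic test-and-set (an atomic read-modify-write operation); real spinlocks add backoff and suit only short critical sections:

      #include <stdatomic.h>

      static atomic_flag region_busy = ATOMIC_FLAG_INIT;

      void spin_enter_region(void) {
          /* test-and-set: atomically set the flag and return its old value */
          while (atomic_flag_test_and_set_explicit(&region_busy, memory_order_acquire))
              ;   /* busywait in a tight loop while the critical section is busy */
      }

      void spin_exit_region(void) {
          atomic_flag_clear_explicit(&region_busy, memory_order_release);
      }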