1. (3 points) Define the difference between preemptive and nonpreemptive scheduling. State why strict nonpreemptive scheduling is unlikely to be used in a computer center.

In nonpreemptive scheduling, a process remains in the running state until it explicitly performs an action that moves it to either the ready or waiting scheduling queues (e.g., yield()). In preemptive scheduling, a running process can also be moved to the ready or waiting queues by events occurring outside the running process (e.g., a timer interrupt). A computer center is essentially a time-sharing system with multiple users and will probably not use strict nonpreemptive scheduling, because under a nonpreemptive scheduler one process can conceivably consume all of the available CPU cycles to the detriment of waiting jobs.

2. (3 points) How does the distinction between monitor mode (i.e., kernel mode) and user mode enable a machine to provide a rudimentary form of protection (security)?

When running in user mode, the processor is not allowed to perform hardware operations that do I/O or alter the overall state of the machine (such as modifying physical page mappings). This means that in order to write out or read in data, a user-mode program must have a kernel-mode program do the work on its behalf. (Many similar answers were acceptable.)

3. (8 points) Consider the following C routines to implement a simple LIFO (i.e., stack) of MY_RECORDs.

typedef struct MYRECORD {
    struct MYRECORD *Next;
    int DataField;
} MY_RECORD, *PMY_RECORD;

PMY_RECORD ListHead;

VOID InitializeStack()
{
    ListHead = NULL;
}

VOID PushNewRecord( PMY_RECORD NewRecord )
{
    NewRecord->Next = ListHead;
    ListHead = NewRecord;
}

PMY_RECORD PopNewRecord()
{
    PMY_RECORD NewRecord;

    NewRecord = ListHead;
    if (ListHead != NULL)
        ListHead = NewRecord->Next;
    return NewRecord;
}

The preceding functions are not synchronized at all. Rewrite push and pop to be fully synchronized using the following InterlockedCompareExchangePointer function.

PVOID InterlockedCompareExchangePointer (
    PVOID *Destination,
    PVOID Exchange,
    PVOID Comparand
);

Recall that the function takes three parameters: a pointer to a memory location, a new value, and an old value. If the location contains the old value, the exchange takes place; otherwise the operation fails. In either case the function returns the previous value stored at the destination.

VOID InterlockedPushNewRecord( PMY_RECORD NewRecord )
{
    do {
        /* Snapshot the current head and link the new record in front of it. */
        NewRecord->Next = ListHead;
        /* Commit only if the head still equals the snapshot; otherwise
           another thread intervened, so retry. */
    } while (InterlockedCompareExchangePointer( (PVOID *)&ListHead,
                                                NewRecord,
                                                NewRecord->Next ) != NewRecord->Next);
}

PMY_RECORD InterlockedPopNewRecord()
{
    PMY_RECORD NewRecord;

    do {
        /* Snapshot the current head; an empty stack ends the loop. */
        NewRecord = ListHead;
        /* Swing the head to the next record only if no other thread has
           changed the head since the snapshot; otherwise retry. */
    } while ((NewRecord != NULL) &&
             (InterlockedCompareExchangePointer( (PVOID *)&ListHead,
                                                 NewRecord->Next,
                                                 NewRecord ) != NewRecord));
    return NewRecord;
}
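For concreteness, a minimal usage sketch of the interlocked routines follows. It assumes the type and function definitions above, plus <windows.h> for the VOID/PVOID types and InterlockedCompareExchangePointer itself; the three-record loop is single-threaded and purely illustrative of the interface.

#include <windows.h>
#include <stdio.h>
#include <stdlib.h>

/* Assumes the MY_RECORD type, ListHead, InitializeStack,
   InterlockedPushNewRecord, and InterlockedPopNewRecord from above. */

int main(void)
{
    PMY_RECORD r;
    int i;

    InitializeStack();

    /* Push three records; each push retries its compare-exchange until it wins. */
    for (i = 0; i < 3; i++) {
        r = (PMY_RECORD)malloc(sizeof(MY_RECORD));
        r->DataField = i;
        InterlockedPushNewRecord(r);
    }

    /* Pop until the stack is empty; LIFO order prints 2, 1, 0. */
    while ((r = InterlockedPopNewRecord()) != NULL) {
        printf("%d\n", r->DataField);
        free(r);
    }
    return 0;
}

The essential design point is the retry loop: each operation re-reads ListHead and commits its update only if no other thread changed the head in the meantime, so no lock is ever taken.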
4. (6 points) Various operating systems use First-Come First-Served (FCFS) scheduling, Shortest Job First (SJF) scheduling, or Round Robin scheduling. None of these scheduling algorithms is ideal for all situations. For each of the three preceding scheduling algorithms, list one major shortcoming and one major advantage. Each item can relate either to how the algorithm is implemented or to its scheduling behavior.

FCFS - Shortcoming - Job starvation is possible, since a long-running job can hold the CPU indefinitely; mean response time for jobs is wildly variable.
FCFS - Advantage - Very easy to implement; no extra logic is required to decide which process to execute next.
SJF - Shortcoming - It is very hard to know which job is really the shortest, so implementations must fall back on error-prone heuristics.
SJF - Advantage - Average response time is small. (For example, for jobs of length 1, 2, and 10 that arrive together, running shortest first completes them at times 1, 3, and 13, a mean of about 5.7, versus 10, 12, and 13, a mean of about 11.7, when the longest runs first.)
Round Robin - Shortcoming - Extra bookkeeping is needed to decide which job to schedule next, and every time slice costs a context switch.
Round Robin - Advantage - Every job consistently makes progress, as if each were running on its own slower CPU.

5. (5 points) In Nachos, the implementation of condition variables doesn't guarantee that a condition will hold true for a waiting thread when it is signaled. For example, imagine that one thread, Thread A, is waiting for a loaf of bread to arrive. Another thread, Thread B, drops off a loaf of bread and signals Thread A. Eventually, Thread B releases the condition lock. Thread C can now come in and grab the lock and the loaf of bread before Thread A grabs the lock, leaving Thread A empty-handed. To solve this in Nachos, Thread A must keep checking whether there is a loaf of bread once it is awoken.

Another way to solve this problem is to use Hoare-style condition variables. This means that the signaling thread (Thread B, in this case) will signal the waiting thread (Thread A, in this case) and will immediately sleep without releasing the condition lock. This effectively transfers the lock to the waiting thread while eliminating the need for the waiting thread to reacquire the lock. In the bread example, this solution guarantees that Thread A will get the loaf of bread, because Thread C can never come in and grab the lock.

In Nachos, a naïve (and incorrect) solution is to simply eliminate the portion of Condition::Wait() that requires the waiter to reacquire the lock. Also, we make sure to put the signaler to sleep before exiting either Condition::Signal() or Condition::Broadcast(), and then we have the signaler reacquire the condition lock after waking up. (Note that we did not release the condition lock before sleeping; this allows us to transfer it to the waiting thread.) There are many reasons this solution won't work in Nachos. Briefly state two of them. (The relevant portions of the code are provided on the next page for your convenience.)

Many answers were suitable:
- When a waiting thread continues, the lock will still think it is held by the signaling thread (causing problems later when checking whether the lock "IsHeldBy" the current thread).
- If the signaling thread signals all the threads during a broadcast, those threads may all enter the critical section at once.
- The signaling thread is never woken up.
- Many other answers were possible.
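As a point of comparison, the Mesa-style fix described above (the waiter re-checks the condition every time it wakes up) follows the standard pattern sketched below. The sketch uses POSIX threads rather than the Nachos Lock and Condition classes, and the breadCount variable and the producer/consumer names are purely illustrative.

#include <pthread.h>

/* Hypothetical shared state, guarded by the lock. */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t breadAvailable = PTHREAD_COND_INITIALIZER;
static int breadCount = 0;

/* Thread A: waits for a loaf, re-checking the condition after every wakeup,
   since another thread may have taken the loaf before we reacquired the lock. */
void *consumer(void *arg)
{
    (void)arg;
    pthread_mutex_lock(&lock);
    while (breadCount == 0)                      /* "while", not "if" (Mesa semantics) */
        pthread_cond_wait(&breadAvailable, &lock);
    breadCount--;                                /* safely take the loaf */
    pthread_mutex_unlock(&lock);
    return NULL;
}

/* Thread B: drops off a loaf and signals one waiter. */
void *producer(void *arg)
{
    (void)arg;
    pthread_mutex_lock(&lock);
    breadCount++;
    pthread_cond_signal(&breadAvailable);
    pthread_mutex_unlock(&lock);
    return NULL;
}

The crucial detail is the while loop around the wait: even after being signaled, the waiter holds the lock and re-tests the condition, so a Thread C that sneaks in first simply causes Thread A to wait again rather than proceed empty-handed.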