Monitors
Locks Wrap-up
- Locking granularity
- coarse-grained vs fine-grained locking (see the sketch after this list)
- how much data should be protected by a single lock?
- an entire array? an entry of an array? a struct? specific fields of a struct?
- case study: kernel/fs.c
- different inode functions acquire different locks
- iget, idup, irelease: icache spinlock
- concurrent_readi, concurrent_writei, concurrent_stati: per-inode sleeplock
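To make the granularity trade-off concrete, here is a minimal sketch (invented for these notes, not xk's actual code; the CoarseTable/FineTable names are made up) contrasting one lock for a whole table with one lock per entry:

#include <array>
#include <mutex>

// Coarse-grained: one lock protects the entire table. Simple to reason
// about, but every access serializes on the same lock.
struct CoarseTable {
  std::mutex lock;                // guards every entry
  std::array<int, 64> entries{};

  void update(int i, int v) {
    std::scoped_lock g(lock);     // blocks updates to *any* other entry too
    entries[i] = v;
  }
};

// Fine-grained: one lock per entry. Threads touching different entries run
// in parallel, at the cost of more locks to manage and order correctly.
struct FineTable {
  struct Entry {
    std::mutex lock;              // guards only this entry
    int value = 0;
  };
  std::array<Entry, 64> entries;

  void update(int i, int v) {
    std::scoped_lock g(entries[i].lock);  // only entry i is serialized
    entries[i].value = v;
  }
};

The inode case study mixes both: the icache spinlock is the coarser lock over the shared table of in-memory inodes, while each inode's sleeplock protects just that inode's data.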
Monitors Basics
- locks: mutual exclusion, synchronize threads so they don't access shared data at the same time
- sometimes we need more than just data synchronization
- a parent needs to sleep until its child exits
- lock waiters sleep until the lock is free
- event-based synchronization requires an additional mechanism
- monitors
- synchronization construct that allows threads to block/unblock on events/conditions
- conditions are accessible and updated by multiple threads
- block/unblock requires us to keep track of waiters on each condition
- a monitor = a lock + any number of conditions + any number of condition variables
- lock: protects access to conditions and condition variables
- condition: the event we are synchronizing on
- for wait: child's proc_state == ZOMBIE
- for lock_acquire: lock's state == free
- condition variable: a synchronization primitive that manages waiters of a condition
- API: wait, signal, broadcast
- tracks a list of waiters for a condition (see the sketch after this list)
- adds a thread to the list and blocks the thread
- removes a thread from the list and unblocks the thread
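As a rough illustration of that bookkeeping, here is a hypothetical Mesa-style condition variable built from a waiter list plus one semaphore per blocked thread. All names here are invented for illustration; this is a sketch, not how any particular library implements it.

#include <deque>
#include <memory>
#include <mutex>
#include <semaphore>

class CondVar {
  std::mutex internal_;                                         // protects waiters_
  std::deque<std::shared_ptr<std::binary_semaphore>> waiters_;  // blocked threads

public:
  // Caller holds the monitor lock. Enqueue first, then release the monitor
  // lock and block; because we are already on the list, a signal that runs
  // in between still finds us (no lost wakeup).
  void wait(std::unique_lock<std::mutex>& monitor) {
    auto me = std::make_shared<std::binary_semaphore>(0);
    {
      std::scoped_lock g(internal_);
      waiters_.push_back(me);
    }
    monitor.unlock();  // let other threads change the condition
    me->acquire();     // block until signal()/broadcast() releases us
    monitor.lock();    // re-acquire before returning, as the pattern expects
  }

  // Remove one waiter from the list and unblock it.
  void signal() {
    std::shared_ptr<std::binary_semaphore> w;
    {
      std::scoped_lock g(internal_);
      if (!waiters_.empty()) { w = waiters_.front(); waiters_.pop_front(); }
    }
    if (w) w->release();
  }

  // Remove every waiter from the list and unblock them all.
  void broadcast() {
    std::deque<std::shared_ptr<std::binary_semaphore>> all;
    {
      std::scoped_lock g(internal_);
      all.swap(waiters_);
    }
    for (auto& w : all) w->release();
  }
};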
Monitor Design Pattern
Lock lk;
bool condition = false;
Condvar cv;

void consume() {
  lk.acquire();
  while (!condition) {
    cv.wait(lk);
  }
  // < consumes the condition >
  condition = false;
  lk.release();
}

void produce() {
  lk.acquire();
  // < enables the condition >
  condition = true;
  cv.signal();
  lk.release();
}
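For comparison, here is a minimal runnable version of the same pattern using C++'s standard primitives, where wait/signal/broadcast correspond to wait/notify_one/notify_all. This is an illustrative sketch, not course code.

#include <condition_variable>
#include <mutex>
#include <thread>

std::mutex lk;               // the monitor lock
bool condition = false;      // the condition, protected by lk
std::condition_variable cv;  // manages waiters for the condition

void consume() {
  std::unique_lock<std::mutex> guard(lk);
  while (!condition) {       // re-check after every wakeup (Mesa semantics,
    cv.wait(guard);          // spurious wakeups); wait releases lk while
  }                          // blocked and re-acquires it before returning
  // < consumes the condition >
  condition = false;
}                            // lk is released when guard goes out of scope

void produce() {
  std::lock_guard<std::mutex> guard(lk);
  // < enables the condition >
  condition = true;
  cv.notify_one();           // "signal": wake one waiter
                             // cv.notify_all() would be "broadcast"
}

int main() {
  std::thread c(consume), p(produce);
  c.join();
  p.join();
}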
access to condition variables should always be done while holding the lock!
what happens when we block while holding the monitor lock? will the condition ever change?
- no! that's why wait actually releases the lock atomically as it blocks
wait
- puts the calling thread on the waiter list, atomically blocks the thread and releases the monitor lock
- why does this need to happen atomically? can we release the lock before calling wait?
- must re-acquire the lock when the thread unblocks so it can return from wait holding the monitor lock
- spurious wakeups: a thread may wake up and find the condition to be false
- must wait in a while loop!
- MESA vs Hoare condition variables
- MESA: does not guarantee that the thread waking up will acquire the monitor lock next
- Hoare: guarantees that the thread waking up is the next lock holder
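To answer the question above (can we release the lock before calling wait?), here is a deliberately broken reordering of the hypothetical CondVar::wait sketched earlier, with the unlock moved before the enqueue. This is for illustration only.

// BROKEN reordering of the CondVar::wait sketch above (hypothetical):
// the monitor lock is released *before* the thread joins the waiter list.
void wait(std::unique_lock<std::mutex>& monitor) {
  auto me = std::make_shared<std::binary_semaphore>(0);
  monitor.unlock();          // (1) the condition can now change under us
                             // (2) a producer makes the condition true and
                             //     calls signal(); the waiter list is still
                             //     empty, so that wakeup goes nowhere
  {
    std::scoped_lock g(internal_);
    waiters_.push_back(me);  // (3) we enqueue only after the signal
  }
  me->acquire();             // (4) block, possibly forever (lost wakeup)
  monitor.lock();
}

With the original ordering (enqueue while still holding the monitor lock, then release and block), a signal that lands in that window still finds the waiter on the list; that is what the "atomically release and block" requirement buys.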
signal
- removes a waiter from the waiter list, wakes up the waiter
- this is done while holding the monitor lock!
broadcast
- removes all waiters from the waiter list, wakes them all up
- why would we want to do this?
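One common answer (a hypothetical example, not from the notes): waiters may be waiting on different predicates over the same shared state, and only each waiter can tell whether the new state satisfies its own predicate. signal might wake the "wrong" thread; broadcast wakes everyone and lets each re-check. A minimal sketch, assuming a toy byte allocator:

#include <condition_variable>
#include <cstddef>
#include <mutex>

std::mutex lk;                  // the monitor lock
std::condition_variable cv;
std::size_t bytes_free = 4096;  // shared state, protected by lk

void alloc_bytes(std::size_t n) {
  std::unique_lock<std::mutex> guard(lk);
  while (bytes_free < n) {      // each waiter has its own predicate
    cv.wait(guard);
  }
  bytes_free -= n;
}

void free_bytes(std::size_t n) {
  std::lock_guard<std::mutex> guard(lk);
  bytes_free += n;
  cv.notify_all();              // broadcast: notify_one could wake a waiter
                                // whose request still does not fit, while the
                                // one that now fits keeps sleeping
}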