IPC & User Level Threads
IPC: Signals
- a pre-defined set of events that processes can use to communicate
- e.g. SIGINT (ctrl-c), SIGPIPE (broken pipe), SIGKILL (kill signal), SIGSTOP (stop process)
- see the signal man page for the full set of signals
- sending and delivering signals are done through the OS
- sending signals
- syscall
kill(pid, signal_number)
- the kernel may also send signals upon exceptions (SIGSEGV, SIGFPE, SIGILL)
- why? maybe a process can recover from these errors!
- how might we implement this?
- track a pending set of signals in the PCB
- would want some restrictions on who can send signals to whom
- should user A's process be able to terminate user B's process?
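The kill syscall above can be sketched in C. A minimal example (assuming a POSIX system; the helper name send_to_self is made up for illustration) that sends SIGUSR1 to the calling process itself and observes delivery through a flag set in a handler:

```c
#include <signal.h>
#include <unistd.h>

static volatile sig_atomic_t got_usr1 = 0;

static void on_usr1(int signo) {
    (void)signo;
    got_usr1 = 1;            /* async-signal-safe: only set a flag */
}

int send_to_self(void) {
    signal(SIGUSR1, on_usr1);    /* install a handler so we can observe it */
    kill(getpid(), SIGUSR1);     /* syscall: ask the kernel to send SIGUSR1 */
    /* POSIX guarantees: a signal a process sends to itself is delivered
       before kill() returns (if unblocked), so the flag is set by now */
    return got_usr1;
}
```

Note that the permission check from the notes happens inside kill(): sending to another user's process fails with EPERM unless the sender is privileged.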
- delivering signals
- signal delivery is implicit (no action required from the receiver)
- pending signals are tracked as a set, so multiple sends of the same signal result in a single delivery
- a process can choose to block until certain signals are delivered
- allows processes to synchronize on custom events
- a process can also choose to ignore/mask certain signals from being delivered
- can't mask SIGSTOP or SIGKILL!
- how might we implement this?
- when can we deliver the signal?
- what should happen when a signal is delivered?
- OS defines default actions for all signals
- a process can also define custom signal handlers for almost all signals
- what signal shouldn't allow custom handlers?
- where do the custom signal handlers live?
- if a custom handler is defined, run the custom handler, otherwise default action
- where should a custom signal handler run?
- user or kernel mode?
- after running the custom signal handler, kernel needs to unmask this type of signal for future delivery
- once that's done, resume actual execution of the process
User Level Threads
- Kernel Level Threads
- what we've learned and interacted with so far
- managed and scheduled by the kernel
- scheduling always involves a mode switch
- can be scheduled onto any free core
- scheduled according to the scheduling policy of the OS (one policy for all threads)
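A minimal kernel-thread sketch using pthreads (assuming Linux/glibc, where pthread_create is backed by the clone syscall; the helper run_kernel_threads is made up for illustration):

```c
#include <pthread.h>

static void *worker(void *arg) {
    int *n = arg;
    *n += 1;          /* each kernel thread may run on a different core */
    return NULL;
}

int run_kernel_threads(void) {
    pthread_t t1, t2;
    int a = 0, b = 0;
    /* each create is a syscall: the kernel allocates the thread and
       schedules it under the OS-wide scheduling policy */
    pthread_create(&t1, NULL, worker, &a);
    pthread_create(&t2, NULL, worker, &b);
    pthread_join(t1, NULL);   /* waiting also mode-switches into the kernel */
    pthread_join(t2, NULL);
    return a + b;
}
```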
- User Level Threads
- the main cost of kernel threads is management by the kernel => can we move thread management into the user code?
- lightweight threads managed and scheduled by user code (libraries and/or language runtime)!
- why is it lightweight?
- context switching within the user mode => cost of a function call instead of mode switches
- smaller resource consumption: smaller/adaptive stack size
- why use them?
- lightweight, can create tens of millions of them, better mapped to tasks
- custom scheduling policy
- can enable cooperative scheduling where threads voluntarily yield (less overhead)
- the application knows itself best! can pick a more suitable policy
- don't want to de-schedule a thread holding a spinlock
- if kernel doesn't manage user threads, how do they run on the CPU?
- map user threads on top of kernel threads!
- N to 1: N user threads on top of a single kernel thread
- N to M: N user threads on top of M kernel threads
- what happens when a user level thread blocks?
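The N-to-1 mapping above can be sketched with ucontext (a simplified illustration, assuming Linux/glibc; real libraries like Go's runtime use their own assembly). One user thread runs on its own small stack and yields back to the "scheduler" via swapcontext, which is an ordinary user-mode register save/restore rather than a trip through the kernel scheduler:

```c
#include <ucontext.h>

static ucontext_t main_ctx, thr_ctx;
static char thr_stack[64 * 1024];   /* user thread gets its own small stack */
static int steps = 0;

static void user_thread(void) {
    steps++;                           /* running on thr_stack */
    swapcontext(&thr_ctx, &main_ctx);  /* voluntary yield back to scheduler */
    steps++;                           /* resumed when rescheduled */
}

int run_user_thread(void) {
    getcontext(&thr_ctx);
    thr_ctx.uc_stack.ss_sp = thr_stack;
    thr_ctx.uc_stack.ss_size = sizeof thr_stack;
    thr_ctx.uc_link = &main_ctx;       /* where to go when the thread ends */
    makecontext(&thr_ctx, user_thread, 0);

    swapcontext(&main_ctx, &thr_ctx);  /* "schedule" the user thread */
    swapcontext(&main_ctx, &thr_ctx);  /* resume it after its yield */
    return steps;
}
```

This also shows the blocking problem from the last bullet: if user_thread made a blocking syscall instead of yielding, the one kernel thread underneath (and thus every user thread mapped onto it) would block with it.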