CSE 451

Winter 2000

Homework assignment 2 solutions

 

4.4

Threads have the advantages that:

  1. They share an address space and open files, so they can communicate through shared memory without kernel involvement.
  2. They are cheaper to create and to context switch between than processes, because no new address space is needed.

Threads have the disadvantage that:

  1. There is no protection between threads: a bug in one thread can corrupt shared data or crash the entire process.

An application that would benefit from using threads is a web server. Each request is independent, but because all requests run inside a single process, they can share an in-memory cache of web pages. A word processor would also benefit: one thread can handle input and output while another performs spell checking. Because both threads are in the same process, both can easily access the document data.

 

An application that would not benefit from using threads is a program calculating the ten-millionth digit of pi. The calculation is inherently sequential, so breaking it into threads provides no benefit.

4.5

When a thread is created, memory is used for a new stack and for a new thread control block. When a process is created, the stack and control block are also created, but in addition a new page table is allocated to hold the virtual-to-physical address mapping, a new file handle table is created to manage the set of open files, and a new heap is created for dynamically allocating objects.

 

4.6

  1. To switch between threads, the processor must change registers: the stack pointer, the program counter, and all the user registers that hold program state. The system must also move the old thread onto a queue and take the new thread off a queue.
  2. To switch between processes, the processor performs all of the above operations and additionally changes the page table. The page-table base is itself held in a register, so yet another register changes. On some processors the caches must also be flushed, so that the new process cannot read cached data belonging to the previously executing process. Finally, the operating system moves the old process onto a queue and takes the new process off a queue.

4.7

User-level threads are implemented by a library of user-level code. As a result, thread scheduling is under the control of the user program, and synchronization and context switching between threads are performed entirely in user code, making these operations much faster than for kernel-supported threads. However, user-level threads are invisible to the kernel, which causes several problems. The kernel scheduler may preempt the process while a thread is holding a lock, causing the other threads to block behind it. If a single user-level thread blocks on an I/O operation, the entire process blocks, because the kernel cannot switch to a thread it does not know about. Finally, the kernel gives a process with many user-level threads no more resources than a process with few; this may be either a benefit or a problem, depending on your view.

 

Kernel threads have the advantage that the operating system is aware of them: one thread can make a blocking system call while the other threads in the process continue to run, and for the same reason they can be used to synchronize between processes. Because the kernel scheduler sees every thread, it can also share the processors among all the threads in the system more efficiently.

6.1

Busy waiting is the term that describes executing a short loop of code that waits for a condition to occur, such as repeatedly testing the value of a variable. Another kind of waiting is blocking, in which the waiting thread is suspended and woken up by the scheduler when the condition occurs. Busy waiting cannot be removed altogether on a multiprocessor: at the very least, the data structures used to implement blocking need synchronized access, and that low-level synchronization requires busy waiting. On a uniprocessor, however, busy waiting is unnecessary. Only one piece of code executes at a time, so a thread can disable interrupts, run its critical section to completion without being interrupted, and re-enable interrupts; since no other code can run in the meantime, nothing ever needs to spin.

6.6

The Wait and Signal operations shown on page 167 are as follows, in both C and assembly code:

Wait (C code):

    while (S <= 0)
        ;
    S = S - 1;

Signal (C code):

    S = S + 1;

Wait (assembly):

    L1: cmp S, 0
        ble L1
        ld  r1, S
        dec r1
        st  S, r1

Signal (assembly):

        ld  r1, S
        inc r1
        st  S, r1

 

Suppose that, while one thread is executing Wait, another thread's Signal operation runs in the middle, interleaved as follows:

 

Instruction executed                                        Value of S in memory
(initially)                                                          1
Wait:   cmp S, 0                                                     1
        --- context switch to Signal ---
Signal: ld r1, S                                                     1
        --- context switch to Wait ---
Wait:   ble L1      (not taken, since S > 0)                         1
Wait:   ld r1, S                                                     1
Wait:   dec r1      (only Wait's register changes)                   1
Wait:   st S, r1                                                     0
        --- context switch to Signal ---
Signal: inc r1      (Signal's r1 still holds the stale value 1)      0
Signal: st S, r1                                                     2

Each thread's registers are saved and restored across the context switches, so when Signal resumes, its r1 still contains the value it loaded before Wait ran.

 

The result is as if the Wait never happened: S started at 1 and ends at 2, just as if only the Signal had run. As a result, if another thread now starts a Wait, it will succeed right away and violate mutual exclusion.