#1

(a) Purely sequential execution of A followed by B:
 
Time 1  2  3  4  5  6  7  8  9  10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30
A    C  C  C  I  O  C  C  C  I  O  C  C  C  I  O
B                                                 C  C  C  I  O  C  C  C  I  O  C  C  C  I  O

    CPU utilization: Units CPU Active / Total_units = 18 / 30 = 60%

    I/O device utilization: Units I/O Active / Total_units = 12 / 30 = 40%

    Average turnaround time: (15 + 30) / 2 = 22.5

    Throughput: 2 / 30 = 1 / 15 = 0.067
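
The arithmetic above can be checked mechanically. Below is a small C helper (not part of the original answer) with the part (a) values plugged in; both jobs arrive at time 0, so each turnaround time equals the completion time.

#include <stdio.h>

int main(void)
{
    double cpu_busy = 18, io_busy = 12, total = 30;   /* part (a) values */
    double completion[] = { 15, 30 };                 /* finish times of A and B */
    int jobs = 2;

    double sum = 0;
    for (int i = 0; i < jobs; i++) sum += completion[i];

    printf("CPU utilization:  %.1f%%\n", 100 * cpu_busy / total);       /* 60.0%  */
    printf("I/O utilization:  %.1f%%\n", 100 * io_busy / total);        /* 40.0%  */
    printf("Avg turnaround:   %.1f\n", sum / jobs);                     /* 22.5   */
    printf("Throughput:       %.3f jobs/unit\n", (double)jobs / total); /* 0.067  */
    return 0;
}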

(b) Time-shared execution using round-robin scheduling:
 
(Assume a round-robin quantum of one time unit and a single I/O device shared by A and B; the schedule below is one interleaving consistent with the figures that follow.)

Time 1  2  3  4  5  6  7  8  9  10 11 12 13 14 15 16 17 18 19 20 21
A    C     C     C  I  O  C  C     C  I  O  C  C     C  I  O
B       C     C     C     I  O  C     C  C  I  O  C     C  C  I  O

    CPU utilization: Units CPU Active / Total_units = 18 / 21 = 85.7%

    I/O device utilization: Units I/O Active / Total_units = 12 / 21 = 57.1%

    Average turnaround time: (19 + 21) / 2 = 20

    Throughput: 2 / 21 = 0.095
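
Plugging the part (b) values into the same helper (18 CPU-busy and 12 I/O-busy units out of 21, completions at t = 19 and t = 21) reproduces the figures above: 85.7%, 57.1%, an average turnaround of 20, and a throughput of 2/21 ≈ 0.095.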
 

#2

procedure merge-sort(low, high)
    begin
        if (low < high) then
            begin
                mid = (low + high) / 2;
                p = fork merge-sort(mid+1, high); // fork a thread to sort the second half
                merge-sort(low, mid); // current thread sorts the first half
                join p; // wait for the second half to finish
                merge(low, high); // now merge the two sorted halves
                quit;
            end
        else return;
    end

Note: An equivalent alternative would be to fork a thread to sort the first half while the current thread processes the second half, wait for the first half to finish, and then merge the two. In this solution, we would have:

    p = fork merge-sort(low, mid);
    merge-sort(mid+1, high);

A second alternative would be to fork two threads and later merge the whole array:

procedure merge-sort(low, high)
    begin
        if (low < high) then
        begin
            mid = (low + high) / 2;
            p1 = fork merge-sort(low, mid); // sort the first half
            p2 = fork merge-sort(mid+1, high); // sort the second half
            join p1;
            join p2;
            merge(low, high); //now merge the two halves
            quit;
        end
        else return;
    end

This alternative can be shown to be correct, although it forks off twice as many threads as the original solution and can be argued to be slower, since the forking thread does no sorting work of its own while it waits for both children.
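
The fork/join structure of the original solution maps naturally onto POSIX threads. The following is a minimal C sketch (not part of the original answer): pthread_create plays the role of fork, pthread_join plays the role of join, and the names sort_range, merge_halves, and struct range are illustrative. Error handling and a sequential cutoff for small subarrays are omitted.

#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

struct range { int *a; int low; int high; };

/* Merge the two sorted halves a[low..mid] and a[mid+1..high]. */
static void merge_halves(int *a, int low, int mid, int high)
{
    int n = high - low + 1, i = low, j = mid + 1, k = 0;
    int *tmp = malloc(n * sizeof *tmp);
    while (i <= mid && j <= high) tmp[k++] = (a[i] <= a[j]) ? a[i++] : a[j++];
    while (i <= mid)  tmp[k++] = a[i++];
    while (j <= high) tmp[k++] = a[j++];
    memcpy(a + low, tmp, n * sizeof *tmp);
    free(tmp);
}

static void *sort_range(void *arg)
{
    struct range *r = arg;
    if (r->low < r->high) {
        int mid = (r->low + r->high) / 2;
        struct range second = { r->a, mid + 1, r->high };
        struct range first  = { r->a, r->low, mid };
        pthread_t p;
        pthread_create(&p, NULL, sort_range, &second); /* "fork": sort the second half */
        sort_range(&first);                            /* current thread sorts the first half */
        pthread_join(p, NULL);                         /* "join": wait for the second half */
        merge_halves(r->a, r->low, mid, r->high);      /* now merge the two halves */
    }
    return NULL;
}

int main(void)
{
    int a[] = { 5, 2, 9, 1, 7, 3, 8, 6, 4, 0 };
    struct range whole = { a, 0, 9 };
    sort_range(&whole);
    for (int i = 0; i < 10; i++) printf("%d ", a[i]);
    printf("\n");
    return 0;
}

Compile with -pthread. Like the pseudocode, this spawns one thread per second-half recursion; a production version would fall back to a sequential sort below some subarray size.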
 

#3

Mutual exclusion is not satisfied. Consider the following execution sequence:

// T1 (process 0) runs its entry code:
flag[0] = true;
turn = 0;
T1 tests (while (flag[1] and turn == 1)), which is false since flag[1] is still false;
// T1 enters its critical section
// T2 (process 1) now runs its entry code:
flag[1] = true;
turn = 1;
T2 tests (while (flag[0] and turn == 0)), which is false since turn == 1;
// T2 also enters its critical section, so mutual exclusion is violated
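
For reference, here is the entry/exit protocol implied by this trace, rendered as a C sketch (an assumption: the algorithm under analysis is the Peterson variant in which each process sets turn to its own index; the names enter and leave are illustrative, and no atomics or memory barriers are used, so this is an illustration only):

int flag[2] = { 0, 0 };    /* flag[i] == 1 means process i wants to enter */
int turn;

void enter(int i)          /* i is 0 or 1 */
{
    int j = 1 - i;         /* the other process */
    flag[i] = 1;
    turn = i;              /* the flaw: Peterson's algorithm sets turn = j */
    while (flag[j] && turn == j)
        ;                  /* spin */
}

void leave(int i)
{
    flag[i] = 0;
}

Because each process sets turn to its own index just before testing, its own while condition is false immediately afterwards, so in the interleaving above both processes fall through into the critical section.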

Progress is satisfied (no deadlock). For deadlock to take place, Pi and Pj must both be spinning, that is, both processes are evaluating their while conditions and neither can enter its critical section. That would require all of the following to be true at once: flag[i], flag[j], (turn == 0), and (turn == 1). But turn is either 0 or 1, so one of the two while conditions must evaluate to false, allowing Pi or Pj to enter its critical section.

Finally, bounded waiting is not satisfied (starvation is possible). Say Pi is the faster process and enters its critical section while Pj is still spinning on its while statement. As soon as Pi leaves its critical section it sets flag[i] to false, but on its next entry attempt it can immediately set flag[i] back to true (and turn back to i), pass its own test, and reenter the critical section while Pj continues to spin. There is no bound on how many times Pi can do this before Pj gets in.
 

#4

lock: boolean;
Initially lock = false;

repeat
    key_i = true;
    while CAS(key_i, lock) do no-op;
    // critical section
    lock = false;
    // remainder section
until false;

Note: lock is global, whereas each key_i is local (private) to process i. Also, register R is saved as part of the process state on a context switch.
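
For comparison, the same spin-lock idea expressed with C11 atomics (a sketch: atomic_compare_exchange_strong stands in for the CAS instruction assumed by the problem, and acquire/release are illustrative names, not part of the original answer):

#include <stdatomic.h>
#include <stdbool.h>

atomic_bool lock = false;          /* global; false means the lock is free */

void acquire(void)
{
    bool key = false;              /* local (private) to each thread */
    /* The CAS succeeds only if lock is still false; it then atomically sets
       lock to true and we hold the lock.  On failure it writes the current
       value of lock into key, so reset key and keep spinning. */
    while (!atomic_compare_exchange_strong(&lock, &key, true))
        key = false;
}

void release(void)
{
    atomic_store(&lock, false);    /* leave the critical section */
}

Here release corresponds to lock = false in the pseudocode; the enclosing repeat ... until false loop around the critical and remainder sections is left to the caller.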