Chapter 8, Silberschatz and Galvin

    10. a. A paged memory reference requires 400 nanoseconds: 200 ns to fetch the page-table entry from memory, and another 200 ns to fetch the referenced word itself.

    b. If 75% of the page-table references are found in the associative registers (and the register lookup itself takes negligible time), then only 25% of references pay the extra 200 ns page-table lookup: 200 ns + .25*200 ns = 250 ns. Equivalently, .75*200 ns + .25*400 ns = 250 ns.
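
    A quick standalone check of the arithmetic (the 200 ns access time and 75% hit ratio are the figures given in the problem; everything else is just the two equivalent ways of writing the weighted sum):

    #include <stdio.h>

    int main(void) {
        const double mem_ns    = 200.0;   /* one memory access (from the problem) */
        const double hit_ratio = 0.75;    /* fraction of lookups satisfied by the TLB */

        /* Form 1: every reference pays one memory access; misses pay a second
           access for the page-table lookup. */
        double eat1 = mem_ns + (1.0 - hit_ratio) * mem_ns;

        /* Form 2: weight the hit and miss cases directly. */
        double eat2 = hit_ratio * mem_ns + (1.0 - hit_ratio) * (2.0 * mem_ns);

        printf("effective access time: %.0f ns (= %.0f ns)\n", eat1, eat2);
        return 0;
    }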

    11. Allowing two entries in the page table to point to the same page frame means two virtual pages share a single physical copy of the data; this is how shared memory and memory-mapped files are supported without duplicating frames. It can also be used to "copy" a large region cheaply by remapping pages instead of moving bytes. The consequence is that the pages are not independent: a store through one entry is immediately visible through the other, and the frame cannot be reclaimed until every entry that maps it is gone.

    12. Segmentation and paging are combined into one scheme to get the benefits of each with the drawbacks of neither. The advantage of paging is that there is no external fragmentation, since all pages are the same size; the drawback is that a single flat page table covering a whole address space is very large. The advantage of segments is that they require little overhead to manage -- just a few base/bounds pairs -- but they suffer from external fragmentation because they vary in size. If we use a few segments, each made up of many pages, then each segment needs only a page table big enough for the pages it actually contains, so the tables stay small, and we still avoid external fragmentation because segments are allocated in page-sized chunks.
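
    One way to picture the combined scheme is a small segment table whose entries each point at a per-segment page table. This is only a sketch: the type names, the 4 KB page size, and the example numbers in main() are made up for illustration.

    #include <stdio.h>
    #include <stdint.h>
    #include <stddef.h>

    #define PAGE_SIZE 4096u                /* assumed page size for the sketch */

    /* One entry per segment: a bounds check plus a pointer to a page table
       that needs only ceil(limit / PAGE_SIZE) entries. */
    struct segment_entry {
        uint32_t  limit;                   /* segment length in bytes */
        uint32_t *frame_of_page;           /* frame_of_page[p] = frame of page p */
    };

    struct segment_table {
        size_t                nsegs;
        struct segment_entry *seg;
    };

    /* Two-level translation: bounds-check the offset against the segment, then
       split it into a page number and a page offset. Returns 0 on a violation. */
    int translate(const struct segment_table *st, size_t segno,
                  uint32_t offset, uint64_t *phys)
    {
        if (segno >= st->nsegs || offset >= st->seg[segno].limit)
            return 0;                                         /* trap */
        uint32_t page = offset / PAGE_SIZE;
        *phys = (uint64_t)st->seg[segno].frame_of_page[page] * PAGE_SIZE
              + offset % PAGE_SIZE;
        return 1;
    }

    int main(void) {
        /* A made-up 3-page segment mapped to frames 7, 2 and 9. */
        uint32_t frames[] = { 7, 2, 9 };
        struct segment_entry e  = { 3 * PAGE_SIZE, frames };
        struct segment_table st = { 1, &e };

        uint64_t phys;
        if (translate(&st, 0, 5000, &phys))    /* page 1, offset 904 */
            printf("segment 0, offset 5000 -> physical %llu\n",
                   (unsigned long long)phys);
        return 0;
    }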

    16. a. 0,430 maps to segment 0's base plus 430, or 219+430 = 649.

    b. 1,10 maps to segment 1's base plus 10, or 2300+10=2310.

    c. 2,500 maps to segment 2's base plus 500, but segment 2 has a length of only 100, so this is an illegal address.

    d. 3,400 maps to segment 3's base plus 400, or 1327+400 = 1727.

    e. 4,112 maps to segment 4's base plus 112, but segment 4 has a length of only 96, so this is an illegal address.
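
    The rule applied in each case above -- compare the offset against the segment's length, then add the segment's base -- looks like this in code. This is a sketch: the base values for segments 0, 1, and 3 and the lengths for segments 2 and 4 come from the answers above, while the remaining table entries are placeholders, not the exercise's actual values.

    #include <stdio.h>

    struct segment { int base; int length; };

    /* Translate (segment, offset) to a physical address.
     * Returns -1 to indicate an addressing trap. */
    int translate(const struct segment *table, int nsegs, int seg, int offset) {
        if (seg < 0 || seg >= nsegs)
            return -1;                              /* no such segment      */
        if (offset < 0 || offset >= table[seg].length)
            return -1;                              /* offset out of bounds */
        return table[seg].base + offset;            /* base + offset        */
    }

    int main(void) {
        struct segment table[] = {
            { 219,  600 },   /* segment 0 (length assumed) */
            { 2300,  14 },   /* segment 1 (length assumed) */
            { 90,   100 },   /* segment 2 (base assumed)   */
            { 1327, 580 },   /* segment 3 (length assumed) */
            { 1952,  96 },   /* segment 4 (base assumed)   */
        };
        printf("0,430 -> %d\n", translate(table, 5, 0, 430));   /* 649         */
        printf("2,500 -> %d\n", translate(table, 5, 2, 500));   /* -1 -> trap  */
        return 0;
    }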

Chapter 9, Silberschatz and Galvin

    5. Effective access time for this system can be computed as follows:

    If the page is in memory, 100ns. This happens with probability R.
    If the page is not in memory and no write is necessary, 8ms. This happens .7 * (1-R) of the time.
    If the page is not in memory and a write is necessary, 20ms. This happens .3 * (1-R) of the time.

    The total effective access time is then R*100ns + .7*(1-R)*8ms + .3*(1-R)*20ms, and we want it to come out to 200 ns. Working entirely in nanoseconds (8 ms = 8,000,000 ns and 20 ms = 20,000,000 ns):

        100R + 5,600,000(1-R) + 6,000,000(1-R) = 200
        11,600,000 - 11,599,900R = 200
        R = 11,599,800/11,599,900 = 99.99914%

    So we can suffer a fault rate of only about 0.00086% -- fewer than 9 faults per million references -- and still maintain this effective access time.
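
    The same calculation as a standalone check, solving for the required hit rate R (all figures are the ones quoted above):

    #include <stdio.h>

    int main(void) {
        const double mem_ns    = 100.0;      /* memory access time              */
        const double target_ns = 200.0;      /* desired effective access time   */
        const double clean_ns  = 8.0e6;      /* fault, clean victim: 8 ms       */
        const double dirty_ns  = 20.0e6;     /* fault, dirty victim: 20 ms      */

        /* Average cost of one page fault: 70% clean, 30% dirty. */
        double fault_ns = 0.7 * clean_ns + 0.3 * dirty_ns;

        /* Solve R*mem + (1-R)*fault = target for R. */
        double R = (fault_ns - target_ns) / (fault_ns - mem_ns);

        printf("required hit rate R  = %.5f%%\n", 100.0 * R);
        printf("tolerable fault rate = %.5f%%\n", 100.0 * (1.0 - R));
        return 0;
    }
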
    6. a. LRU replacement does not suffer from Belady's anomaly (it is a stack algorithm). It's pretty good. Rank: 4.
    b. FIFO replacement can suffer from Belady's anomaly (the classic demonstration appears in the sketch after this list). It's not terribly good, but it's sometimes better than random. Rank: 2.
    c. Optimal replacement is, well, optimal, and it does not suffer from Belady's anomaly. Rank: 5.
    d. Second-chance replacement can suffer from Belady's anomaly, but will do so much less often provided that the watermark for the second-chance queue is set properly. Rank: 3.
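
    To make the Belady's-anomaly claim concrete, here is a small standalone FIFO simulation (a sketch, not from the text) run on the classic reference string 1 2 3 4 1 2 5 1 2 3 4 5: with 3 frames it takes 9 faults, with 4 frames it takes 10.

    #include <stdio.h>

    /* Count page faults under FIFO replacement with the given number of frames. */
    static int fifo_faults(const int *refs, int n, int nframes) {
        int frames[16];
        int used = 0, next_victim = 0, faults = 0;

        for (int i = 0; i < n; i++) {
            int hit = 0;
            for (int j = 0; j < used; j++)
                if (frames[j] == refs[i]) { hit = 1; break; }
            if (hit)
                continue;
            faults++;
            if (used < nframes) {
                frames[used++] = refs[i];           /* fill a free frame        */
            } else {
                frames[next_victim] = refs[i];      /* evict the oldest arrival */
                next_victim = (next_victim + 1) % nframes;
            }
        }
        return faults;
    }

    int main(void) {
        int refs[] = { 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5 };
        int n = (int)(sizeof refs / sizeof refs[0]);

        /* Belady's anomaly: adding a frame increases the fault count (9 -> 10). */
        printf("FIFO, 3 frames: %d faults\n", fifo_faults(refs, n, 3));
        printf("FIFO, 4 frames: %d faults\n", fifo_faults(refs, n, 4));
        return 0;
    }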

    8. .99 of accesses require no special action and take one cycle = 1 mic. .01 * .8 = .008 of accesses require an "accessing new page" action and take 2 mics. .01 * .2 * .5 = .001 of accesses incur a page fault, but the replaced page is clean, so they take 2 mics + disk page access time. The remaining .01 * .2 * .5 = .001 of accesses incur a page fault and the replaced page is dirty, so they take 2 mics + 2*disk page access time.

    Disk page access time in this case is rotational latency plus transfer time. At 3000 revolutions per minute, one revolution takes 1 min/3000 = 20 ms, and on average we must wait half a rotation, or 10 ms. Then we must transfer 1000 words at 1,000,000 words/sec, which takes 1 ms. So disk page access time is 11 ms, or 11000 mics.

    The total effective access time, then, is .99 * 1 mic + .008 * 2 mics + .001 * 11002 mics + .001 * 22002 mics, for a total of 34.01 microseconds.
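
    The same arithmetic as a standalone check (all figures are the ones quoted in the answer above):

    #include <stdio.h>

    int main(void) {
        /* Disk page access: half a rotation at 3000 rpm plus transferring
           1000 words at 1,000,000 words/sec. */
        double half_rotation_us = 0.5 * (60.0 / 3000.0) * 1e6;    /* 10000 mics */
        double transfer_us      = (1000.0 / 1e6) * 1e6;           /*  1000 mics */
        double disk_us          = half_rotation_us + transfer_us; /* 11000 mics */

        /* Weighted sum over the four cases described above. */
        double eat_us = 0.99  * 1.0                        /* ordinary access    */
                      + 0.008 * 2.0                        /* new page, no fault */
                      + 0.001 * (2.0 + disk_us)            /* fault, clean page  */
                      + 0.001 * (2.0 + 2.0 * disk_us);     /* fault, dirty page  */

        printf("disk page access time: %.0f mics\n", disk_us);
        printf("effective access time: %.2f mics\n", eat_us);
        return 0;
    }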

    9. a. Installing a faster CPU will decrease CPU utilization: the paging disk remains the bottleneck, so the faster CPU simply finishes its bursts sooner and spends even more time idle waiting for pages.
    b. Installing a bigger paging disk will have no particular effect on the system: the disk is saturated in time (busy 97.7% of the time), not in space.
    c. Increasing the degree of multiprogramming will decrease locality and therefore increase page faults, causing the paging disk to max out and CPU utilization to go down. On the other hand, if the newly introduced programs have small memory footprints and do little I/O, they will help increase CPU utilization. In other words, the effect of increasing multiprogramming depends entirely on the character of the new programs.
    d. Exactly the opposite of part c.
    e. Installing more main memory will decrease paging activity, since more of each process's working set can stay resident; with less time lost to page faults, CPU utilization goes up, and a higher degree of multiprogramming can be supported without thrashing.
    f. Installing a faster disk or parallel disks will allow page faults to be handled more quickly and should therefore increase CPU utilization.
    g. Adding prepaging to the page-fetch algorithms tries to overlap page-fetch time with CPU time so that the CPU can get to its next reference sooner. But the paging disk is already near capacity, so sending it even more page requests is not going to speed up the system very much.
    h. Increasing the page size could have either effect. If the programs running on the system have a lot of spatial locality, then increasing the page size is going to amortize page fault overhead better, increasing system efficiency and CPU utilization. However, if the programs running on the system need only one byte per page, then transferring the extra data on each page fault is going to be a waste of time and slow down the system, decreasing CPU utilization.

    11. Here are the page-fault strings for each replacement algorithm and memory size (an X marks a reference that causes a fault; the total number of faults is in parentheses):

           12342156212376321236
    LRU/1  XXXXXXXXXXXXXXXXXXXX (20) 
    LRU/2  XXXXXXXXXX XXXXXX XX (18)
    LRU/3  XXXX XXXXX XXX    XX (14)
    LRU/4  XXXX  XXX  XXX     X (11)
    LRU/5  XXXX  XX   XXX       ( 9)
    LRU/6  XXXX  XX    X        ( 7)
    LRU/7  XXXX  XX    X        ( 7)
    
           12342156212376321236
    FIFO/1 XXXXXXXXXXXXXXXXXXXX (20)
    FIFO/2 XXXXXXXXXX XXXXXX XX (18)
    FIFO/3 XXXX XXXXX XXX XX XX (17)
    FIFO/4 XXXX  XXXX XXX XX X  (14)
    FIFO/5 XXXX  XX XXXX        (10)
    FIFO/6 XXXX  XX    X   XXX  (10)
    FIFO/7 XXXX  XX    X        ( 7)
    
           12342156212376321236
    OPT/1  XXXXXXXXXXXXXXXXXXXX (20)
    OPT/2  XXXX XXX X XXX XX XX (15)
    OPT/3  XXXX  XX   XX  XX  X (11)
    OPT/4  XXXX  XX    X   X    ( 8)
    OPT/5  XXXX  XX    X        ( 7)
    OPT/6  XXXX  XX    X        ( 7)
    OPT/7  XXXX  XX    X        ( 7)
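
    For anyone who wants to regenerate or double-check tallies like these, here is a small standalone LRU simulator for the reference string above (a sketch; FIFO and OPT counters can be added in the same style):

    #include <stdio.h>

    #define NREFS 20

    /* Count LRU page faults for the given number of frames (up to 7). */
    static int lru_faults(const int *refs, int n, int nframes) {
        int page[8], last_used[8];      /* resident pages and their last-use times */
        int used = 0, faults = 0;

        for (int t = 0; t < n; t++) {
            int hit = -1;
            for (int j = 0; j < used; j++)
                if (page[j] == refs[t]) { hit = j; break; }
            if (hit >= 0) {
                last_used[hit] = t;                 /* refresh recency on a hit */
                continue;
            }
            faults++;
            if (used < nframes) {
                page[used] = refs[t];               /* fill a free frame */
                last_used[used++] = t;
            } else {
                int victim = 0;                     /* evict least recently used */
                for (int j = 1; j < nframes; j++)
                    if (last_used[j] < last_used[victim]) victim = j;
                page[victim] = refs[t];
                last_used[victim] = t;
            }
        }
        return faults;
    }

    int main(void) {
        int refs[NREFS] = { 1,2,3,4,2,1,5,6,2,1,2,3,7,6,3,2,1,2,3,6 };

        for (int frames = 1; frames <= 7; frames++)
            printf("LRU/%d: %d faults\n", frames, lru_faults(refs, NREFS, frames));
        return 0;
    }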
    
    17. .8 * 1 mic + .2 * .9 * 2 mic + .2 * .1 * (1 mic + 20000 mic) = .8 + .36 + 400.02 = 401.18 mics, an effective memory access time of about 401 microseconds.

Extra Question: How do we implement counting semaphores in terms of mutexes and condition variables?
// assume the existence of mutex_t and condition_t
#include <stdlib.h>      // for malloc()
typedef struct semaphore {
    int count;           // count same as always
    mutex_t m;           // corresponds to tas lock
    condition_t c;       // corresponds to queue
} *semaphore_t;

// create a new semaphore initialized to count
semaphore_t sem_create(int count)
{
    semaphore_t sem = (semaphore_t) malloc(sizeof(struct semaphore));
    sem->m = mutex_create();
    sem->c = condition_create();
    sem->count = count;
    return sem;                              // hand the new semaphore back
}

void sem_P(semaphore_t s) {
    mutex_lock(s->m);                    // ensure exclusive access
    while (s->count <= 0)                // while no resource is available,
        condition_wait(s->m, s->c);      //     wait on the CV
    s->count--;                          // now in, decrement count
    mutex_unlock(s->m);
}

void sem_V(semaphore_t s) {
    mutex_lock(s->m);                    // ensure exclusive access
    s->count++;                          // let someone else in
    condition_signal(s->c);              // wake up a sleeper
    mutex_unlock(s->m);
}
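
For comparison, the same structure can be written against POSIX threads, with pthread_mutex_t and pthread_cond_t playing the roles of the assumed mutex_t and condition_t. This is only a sketch (error checking omitted, names invented here):

#include <stdlib.h>
#include <pthread.h>

typedef struct {
    int             count;               // available units
    pthread_mutex_t m;                   // protects count
    pthread_cond_t  c;                   // waiters sleep here
} psem_t;

psem_t *psem_create(int count) {
    psem_t *s = malloc(sizeof *s);
    s->count = count;
    pthread_mutex_init(&s->m, NULL);
    pthread_cond_init(&s->c, NULL);
    return s;
}

void psem_P(psem_t *s) {
    pthread_mutex_lock(&s->m);
    while (s->count <= 0)                // wait until a unit is available
        pthread_cond_wait(&s->c, &s->m); // releases m while sleeping
    s->count--;
    pthread_mutex_unlock(&s->m);
}

void psem_V(psem_t *s) {
    pthread_mutex_lock(&s->m);
    s->count++;
    pthread_cond_signal(&s->c);          // wake one waiter, if any
    pthread_mutex_unlock(&s->m);
}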