CSE 410 Homework #8 Solutions

Due: Monday 12/10/97

Written Questions

We'd like you to answer the following questions. Please be concise as well as specific in your answers. Show any work for partial credit.
  1. [4 points] Paging. What are two benefits of larger page sizes? What are two benefits of smaller page sizes?

    Large page size:  amortize the cost of disk transfers; smaller page
    tables; better TLB coverage.  Small page size:  less internal fragmentation;
    better locality (a large page may overestimate the program's locality,
    pulling in data the program never touches).
    
  2. [3 points] Paging. Assume a virtual address space of 2048 pages of 8KB (8192 bytes) each, mapped onto a physical memory of 128 frames. How many bits are there in the virtual address? Sketch the breakdown of a virtual address into virtual page number and offset. How many bits are there in the physical address?

    
    The VAS is 2^24 (2^11 * 2^13) bytes, so each virtual address is 24
    bits (assuming memory is byte-addressable).  Physical memory is 2^20
    bytes (2^7 * 2^13), so each physical address is 20 bits.
    
    |<------ 11 bits VPN ------><---------- 13 bits offset ------->|
    
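    As a sanity check, here is a small C sketch that derives these sizes and
    splits a sample virtual address into its VPN and offset; the example
    address 0x123456 is arbitrary.

        #include <stdio.h>

        int main(void)
        {
            unsigned long page_size  = 8192;    /* 2^13 bytes          */
            unsigned long npages     = 2048;    /* 2^11 virtual pages  */
            unsigned long nframes    = 128;     /* 2^7 physical frames */

            unsigned long vas_bytes  = npages * page_size;   /* 2^24 */
            unsigned long pmem_bytes = nframes * page_size;  /* 2^20 */
            printf("virtual address space: %lu bytes (24-bit addresses)\n", vas_bytes);
            printf("physical memory:       %lu bytes (20-bit addresses)\n", pmem_bytes);

            unsigned long va  = 0x123456;              /* arbitrary 24-bit VA */
            unsigned long vpn = va >> 13;              /* top 11 bits         */
            unsigned long off = va & (page_size - 1);  /* low 13 bits         */
            printf("va 0x%06lx -> vpn 0x%03lx, offset 0x%04lx\n", va, vpn, off);
            return 0;
        }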
    
  3. Memory Management. S+G Exercises 8.9, 8.11, 8.12.

    (8.9) [3 points] On a system with paging, a process cannot generate a
    physical address: every address it generates is virtual and is translated
    to its physical counterpart through the page table the OS maintains for
    that process, so there is no way for it to name another process' memory.
    The OS could allow two processes to share memory by making page table
    entries (PTEs) in both processes point to the same physical frame.  It
    should only do this when the two processes wish to share the data on
    that page.
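
    A minimal sketch of that sharing arrangement: each page table is modeled
    as a plain array mapping virtual page numbers to frame numbers, and one
    entry in each process' table points at the same frame.  The page size and
    frame numbers are made up for illustration.

        #include <stdio.h>

        #define PAGE_SIZE 8192
        #define NPAGES    4

        int main(void)
        {
            /* page table: virtual page number -> physical frame number */
            int pt_a[NPAGES] = { 3, 7, 5, 1 };
            int pt_b[NPAGES] = { 2, 5, 6, 0 };   /* A's VPN 2 and B's VPN 1 share frame 5 */

            unsigned va_a = 2 * PAGE_SIZE + 100; /* process A: VPN 2, offset 100 */
            unsigned va_b = 1 * PAGE_SIZE + 100; /* process B: VPN 1, offset 100 */

            unsigned pa_a = pt_a[va_a / PAGE_SIZE] * PAGE_SIZE + va_a % PAGE_SIZE;
            unsigned pa_b = pt_b[va_b / PAGE_SIZE] * PAGE_SIZE + va_b % PAGE_SIZE;

            printf("process A: va %u -> pa %u\n", va_a, pa_a);
            printf("process B: va %u -> pa %u (same frame)\n", va_b, pa_b);
            return 0;
        }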
    
    (8.11) [3 points] Two virtual pages will refer to the same physical
    frame, so a process will think it has two identical copies
    of the data.  Systems use this trick when copying large amounts of
    data from one place to another:  rather than actually moving the data from
    the source to the destination, just change the page table entries for
    the destination to point at the source.  This works as long as you don't
    intend to modify the copied data, because if you do, you'll also corrupt
    the source data (because they're the same).  To get around this, systems
    sometimes use "copy on write," where the above pointer trick is used until
    the page is written, at which point a real copy is made.
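
    The copy-on-write idea can be sketched in user space as below; a real
    kernel does this in its page-fault handler using the MMU's protection
    bits, and the structure and function names here are invented for
    illustration.  A "copy" just aliases the source frame and clears a
    writable bit; the first write forces the real copy.

        #include <stdio.h>
        #include <string.h>

        #define PAGE_SIZE 8192
        #define NFRAMES   8

        static char frames[NFRAMES][PAGE_SIZE];
        static int  next_free = 2;          /* frames 0 and 1 are used below */

        struct pte { int frame; int writable; };

        /* "Copy" a page by aliasing the source frame, read-only. */
        static void cow_copy(struct pte *dst, struct pte *src)
        {
            dst->frame = src->frame;
            src->writable = dst->writable = 0;
        }

        /* On a write to a read-only page: make the real copy, then allow writes. */
        static void cow_fault(struct pte *p)
        {
            int new_frame = next_free++;
            memcpy(frames[new_frame], frames[p->frame], PAGE_SIZE);
            p->frame = new_frame;
            p->writable = 1;
        }

        static void write_byte(struct pte *p, int offset, char c)
        {
            if (!p->writable)
                cow_fault(p);               /* real hardware would trap here */
            frames[p->frame][offset] = c;
        }

        int main(void)
        {
            struct pte src = { 0, 1 }, dst = { 1, 1 };

            strcpy(frames[src.frame], "original data");
            cow_copy(&dst, &src);           /* cheap: no data actually moves  */
            write_byte(&dst, 0, 'X');       /* first write forces a real copy */
            printf("src: %s\ndst: %s\n", frames[src.frame], frames[dst.frame]);
            return 0;
        }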
    
    (8.12) [2 points] Segments are difficult to lay out in memory because they
    are of varying sizes (and can grow and shrink dynamically), which leads to
    external fragmentation.  By combining paging with segmentation (paging the
    segments), we get the "best of both worlds," although we need more
    complicated data structures (a page table per segment).
    
  4. Replacement Policies. S+G Exercises 9.2, 9.11.

    (9.2) [4 points]
        N = number of distinct pages in the reference string
        M = number of frames
        P = length of the reference string

    Lower bound: N (each distinct page must be brought into memory at least once).

    Upper bound:
        When M >= N, every distinct page fits in memory at once, so the
        upper bound is also N.
        When M < N, we can't fit all the distinct pages into memory, so the
        upper bound is P.  At some point we'll have to replace a page, and we
        might always pick the wrong page to replace (for example, the very
        page we want to access next!).
    
    (9.11) [10 points] Fault counts for each number of frames (a small
    simulation sketch that reproduces these counts follows the list):
    
    1. Number of frames = 1, 20 faults. (all policies)

    2. Number of frames = 2, LRU=18 faults, FIFO=18 faults, Optimal=15 faults.

    3. Number of frames = 3, LRU=15 faults, FIFO=16 faults, Optimal=11 faults.

    4. Number of frames = 4, LRU=10 faults, FIFO=14 faults, Optimal=8 faults.

    5. Number of frames = 5, LRU=8 faults, FIFO=10 faults, Optimal=7 faults.

    6. Number of frames = 6, LRU=7 faults, FIFO=10 faults, Optimal=7 faults.

    7. Number of frames = 7, 7 faults. (all policies)
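
    The counts above can be reproduced with a short simulator like the one
    below, assuming the reference string from S+G Exercise 9.11
    (1, 2, 3, 4, 2, 1, 5, 6, 2, 1, 2, 3, 7, 6, 3, 2, 1, 2, 3, 6); substitute
    your own string if your edition differs.

        #include <stdio.h>

        #define NREFS     20
        #define MAXFRAMES 8

        static int refs[NREFS] = {1,2,3,4,2,1,5,6,2,1,2,3,7,6,3,2,1,2,3,6};

        /* policy: 0 = FIFO, 1 = LRU, 2 = OPT (Belady's optimal) */
        static int count_faults(int nframes, int policy)
        {
            int frame[MAXFRAMES];    /* page held by each frame, -1 = empty   */
            int loaded[MAXFRAMES];   /* time the page was loaded (for FIFO)   */
            int used[MAXFRAMES];     /* time the page was last used (for LRU) */
            int i, j, faults = 0;

            for (i = 0; i < nframes; i++)
                frame[i] = -1;

            for (i = 0; i < NREFS; i++) {
                int victim = 0, hit = -1;

                for (j = 0; j < nframes; j++)
                    if (frame[j] == refs[i])
                        hit = j;
                if (hit >= 0) {              /* page already resident */
                    used[hit] = i;
                    continue;
                }
                faults++;

                /* choose a victim frame */
                for (j = 0; j < nframes; j++) {
                    if (frame[j] == -1) { victim = j; break; }    /* free frame */
                    if (policy == 0 && loaded[j] < loaded[victim]) victim = j;
                    if (policy == 1 && used[j]   < used[victim])   victim = j;
                    if (policy == 2) {
                        /* OPT: evict the page whose next use is farthest away */
                        int next_j = NREFS, next_v = NREFS, k;
                        for (k = i + 1; k < NREFS; k++) {
                            if (refs[k] == frame[j]      && k < next_j) next_j = k;
                            if (refs[k] == frame[victim] && k < next_v) next_v = k;
                        }
                        if (next_j > next_v) victim = j;
                    }
                }
                frame[victim]  = refs[i];
                loaded[victim] = i;
                used[victim]   = i;
            }
            return faults;
        }

        int main(void)
        {
            int n;
            for (n = 1; n <= 7; n++)
                printf("%d frames: FIFO=%2d  LRU=%2d  OPT=%2d\n",
                       n, count_faults(n, 0), count_faults(n, 1), count_faults(n, 2));
            return 0;
        }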

  5. [11 points] Performance. We defined the effective access time of a demand paged system as follows:
    	effective access time = (1-p) * Ma + p * f
    	
    	p  = fault rate (faults/reference)
    	Ma = memory access time
    	f  = fault service time
    
    If we use the cache access time for Ma (assuming that all memory references are cache hits) and assume that translating the virtual address to a physical address is "free" (that is, every virtual address generated hits in the TLB), the above equation will give us a best-case estimate of memory access time. (A short calculation sketch covering parts 1, 2, and 4 follows part 4 below.)
    1. Assuming a cache access time of 10ns, a fault service time of 10ms, and a fault rate of .0000003, what is the effective access time?
      T = (1-.0000003) * 10ns + .0000003 * 10,000,000ns
        = 13 ns
      
    2. Now assume that we have a 4% miss rate in our cache, where each miss takes 120ns (hits still take just 10ns). What is the average memory access time (Ma)? What is the effective access time, using this more realistic estimate of memory access time?
      Ma = .96 * 10ns + .04 * 120ns = 14.4ns
      
      T = (1-.0000003) * 14.4ns + .0000003 * 10,000,000ns
        = 17.4 ns
      
    3. Rewrite the above equation, taking into account TLB misses. Assume that servicing a TLB miss takes 300ns, and that every address generated leads either to a page fault (occurring with frequency p), a TLB miss (occurring with frequency t), or a successful translation (occurring with frequency 1-p-t).
      	effective access time = (1-p-t) * Ma + p * f + t * TLBm
      
      	TLBm = TLB miss service time
      	t = tlb miss rate
      	p  = fault rate (faults/reference)
      	Ma = memory access time
      	f  = fault service time
      
      
    4. Using the Ma you calculated in part (2), the more accurate formula from part (3), and assuming a TLB miss rate of 2%, what is the effective access time?
      T = (1-.0000003-.02) * 14.4ns + .0000003 * 10,000,000ns + .02 * 300ns
        = 23.1 ns
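
    The arithmetic in parts 1, 2, and 4 can be checked with a small sketch
    like this one; all times are in nanoseconds.

        #include <stdio.h>

        int main(void)
        {
            double p    = 0.0000003;    /* page-fault rate                     */
            double f    = 10e6;         /* fault service time: 10 ms = 10^7 ns */
            double hit  = 10.0;         /* cache hit time                      */
            double miss = 120.0;        /* cache miss time                     */
            double tlbm = 300.0;        /* TLB miss service time               */
            double t    = 0.02;         /* TLB miss rate                       */
            double ma;

            /* Part 1: every reference hits in the cache and the TLB. */
            printf("part 1: %.2f ns\n", (1 - p) * hit + p * f);

            /* Part 2: Ma now includes a 4% cache miss rate. */
            ma = 0.96 * hit + 0.04 * miss;
            printf("part 2: Ma = %.2f ns, EAT = %.2f ns\n", ma, (1 - p) * ma + p * f);

            /* Part 4: add TLB misses, using the formula from part 3. */
            printf("part 4: %.2f ns\n", (1 - p - t) * ma + p * f + t * tlbm);
            return 0;
        }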
      


    brd@cs.washington.edu