Large page size: amortized cost of disk transfers; smaller page tables;
better TLB coverage. Small page size: less internal fragmentation;
better locality (a large page size may overestimate the locality of
your program).
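To make the tradeoff concrete, here is a small C sketch; the 2^32-byte
address space and the 4KB/4MB page sizes are example values of our own,
not from the problem:

    #include <stdio.h>

    int main(void) {
        unsigned long long vas = 1ULL << 32;        /* hypothetical 2^32-byte VAS */
        unsigned long long sizes[] = { 4096ULL, 4ULL << 20 };  /* 4KB vs 4MB pages */

        for (int i = 0; i < 2; i++) {
            unsigned long long page = sizes[i];
            /* One PTE per virtual page: larger pages -> smaller page table. */
            unsigned long long entries = vas / page;
            /* On average, half of a region's last page is wasted. */
            printf("page %8llu bytes: %7llu PTEs, ~%llu bytes internal fragmentation per region\n",
                   page, entries, page / 2);
        }
        return 0;
    }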
The virtual address space (VAS) is 2^24 bytes (2^11 pages * 2^13
bytes/page), so each virtual address is 24 bits (assuming byte
addressability). The physical address space is 2^20 bytes (2^7 frames
* 2^13 bytes/frame), so each physical address is 20 bits.
|<------ 11 bits VPN ------><---------- 13 bits offset ------->|
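A minimal C sketch of this split (the shift and mask follow directly
from the 2^11-page, 2^13-bytes-per-page layout above; the variable
names are ours):

    #include <stdio.h>
    #include <stdint.h>

    #define OFFSET_BITS 13                        /* page size = 2^13 bytes */
    #define OFFSET_MASK ((1u << OFFSET_BITS) - 1)

    int main(void) {
        uint32_t vaddr  = 0x1ABCDE;               /* any 24-bit virtual address */
        uint32_t vpn    = vaddr >> OFFSET_BITS;   /* top 11 bits */
        uint32_t offset = vaddr & OFFSET_MASK;    /* low 13 bits */
        printf("vpn = %u, offset = %u\n", (unsigned)vpn, (unsigned)offset);
        return 0;
    }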
(8.9) [3 points] On a system with paging, a process cannot generate a physical
address (every address it generates is virtual, and is translated to
its physical counterpart by the hardware using OS-maintained page
tables), and a physical address is the only way it could get at
another process' memory. The OS can allow two processes to share
memory by making page table entries (PTEs) in both point to the same
physical frame. It should only do this when the two processes wish to
share the data on that page.
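A toy C illustration of how the OS would set up such sharing (the PTE
layout here is invented for the sketch, not any real architecture's):

    #include <stdio.h>

    /* Simplified PTE: just a frame number and a valid bit. */
    struct pte { unsigned frame; int valid; };

    int main(void) {
        struct pte table_a[8] = { {0, 0} };       /* process A's page table */
        struct pte table_b[8] = { {0, 0} };       /* process B's page table */

        /* Map virtual page 3 of A and virtual page 5 of B to frame 42;
           a shared frame need not sit at the same virtual page number. */
        table_a[3] = (struct pte){ .frame = 42, .valid = 1 };
        table_b[5] = (struct pte){ .frame = 42, .valid = 1 };

        printf("A page 3 -> frame %u, B page 5 -> frame %u\n",
               table_a[3].frame, table_b[5].frame);
        return 0;
    }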
(8.11) [3 points] Two sets of virtual addresses will refer to the same physical
addresses, which means a process will think it has two identical copies
of the data. Systems use this trick when copying large amounts of
data from one place to another: rather than actually moving the data from
the source to the destination, just change the page table entries for
the destination to point at the source. This works as long as you
don't intend to modify the copied data; if you do, you'll also
corrupt the source (they are the same physical pages). To get around
this, systems sometimes use "copy-on-write": the pointer trick above
is used until a page is written, at which point a real copy of that
page is made.
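A user-space C sketch of the copy-on-write idea (in a real kernel this
logic lives in the page-fault handler; all names here are made up):

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    #define PAGE_SIZE 4096

    struct page { char *frame; int writable; };

    /* "Copy" a page by sharing the frame and marking both mappings read-only. */
    struct page cow_copy(struct page *src) {
        src->writable = 0;
        return (struct page){ .frame = src->frame, .writable = 0 };
    }

    /* On the first write (a simulated protection fault), make the real copy. */
    void cow_write(struct page *p, size_t off, char byte) {
        if (!p->writable) {
            char *copy = malloc(PAGE_SIZE);
            memcpy(copy, p->frame, PAGE_SIZE);
            p->frame = copy;
            p->writable = 1;
        }
        p->frame[off] = byte;
    }

    int main(void) {
        struct page src = { calloc(1, PAGE_SIZE), 1 };
        struct page dst = cow_copy(&src);

        cow_write(&dst, 0, 'x');                  /* triggers the real copy */
        printf("src[0]=%d dst[0]=%c\n", src.frame[0], dst.frame[0]);
        return 0;
    }

A real implementation also keeps a reference count on each shared
frame, so the last remaining user can reclaim write access instead of
copying.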
(8.12) [2 points] Segments are difficult to lay out in memory because they are
of varying sizes (and may grow and shrink dynamically). By combining
paging with segmentation (paging the segments), we get the "best of
both worlds," at the cost of more complicated data structures.
(9.2) [4 points]
N = number of distinct pages in the reference string
M = number of frames
P = length of reference string
Lower bound: N (each distinct page must be brought into memory at least once).
Upper bound:
When M >= N, there are enough frames to hold every distinct page, so
no page is ever replaced and the upper bound is N.
When M < N, we can't fit all the distinct pages into memory, so the
upper bound is P: at some point we'll have to replace a page, and we
might always pick the wrong page to replace (for example, the very
page we want to access next!).
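The worst case is easy to exhibit: with M = 2 frames, FIFO
replacement, and the reference string 1,2,3,1,2,3,..., every
reference faults. A quick C simulation (our own sketch):

    #include <stdio.h>

    int main(void) {
        int frames[2] = { -1, -1 };               /* M = 2 frames, empty */
        int refs[] = { 1, 2, 3, 1, 2, 3, 1, 2, 3 };  /* N = 3, P = 9 */
        int n = sizeof refs / sizeof refs[0];
        int victim = 0, faults = 0;

        for (int i = 0; i < n; i++) {
            int hit = (frames[0] == refs[i] || frames[1] == refs[i]);
            if (!hit) {
                frames[victim] = refs[i];         /* FIFO: evict the older page */
                victim = 1 - victim;
                faults++;
            }
        }
        printf("%d faults on %d references\n", faults, n);  /* 9 of 9 */
        return 0;
    }

(The round-robin victim pointer is exactly FIFO here because every
reference faults.)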
(9.11) [10 points]
effective access time = (1 - p) * Ma + p * f
    p  = fault rate (faults/reference)
    Ma = memory access time
    f  = fault service time
If we use the cache access time for Ma (assuming that all memory
references are cache hits) and assume that translating the virtual
address to a physical address is "free" (that is, every virtual
address generated hits in the TLB), the above equation gives us a
best-case estimate of memory access time.
T = (1 - .0000003) * 10ns + .0000003 * 10,000,000ns = 13 ns
Ma = .96 * 10ns + .04 * 120ns = 14.4 ns
T  = (1 - .0000003) * 14.4ns + .0000003 * 10,000,000ns = 17.4 ns
effective access time = (1 - p - t) * Ma + p * f + t * TLBm
    t    = TLB miss rate
    p    = fault rate (faults/reference)
    Ma   = memory access time
    f    = fault service time
    TLBm = TLB miss service time
T = (1 - .0000003 - .02) * 14.4ns + .0000003 * 10,000,000ns + .02 * 300ns
  = 23.1 ns
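These numbers are easy to check mechanically; a short C sketch
reproducing the arithmetic above (all times in ns):

    #include <stdio.h>

    int main(void) {
        double p    = 0.0000003;                  /* fault rate            */
        double f    = 10000000.0;                 /* fault service time    */
        double t    = 0.02;                       /* TLB miss rate         */
        double tlbm = 300.0;                      /* TLB miss service time */

        double ma_best = 10.0;                    /* all references hit the cache */
        double ma      = 0.96 * 10.0 + 0.04 * 120.0;  /* = 14.4 with cache misses */

        printf("best case:  %.1f ns\n", (1 - p) * ma_best + p * f);
        printf("cache miss: %.1f ns\n", (1 - p) * ma + p * f);
        printf("with TLB:   %.1f ns\n", (1 - p - t) * ma + p * f + t * tlbm);
        return 0;
    }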