8.1 External fragmentation is the breaking up of free memory into small, noncontiguous chunks as partitions are allocated and freed; a later request for a large partition may fail for lack of contiguous memory, even though enough total free memory is available. Internal fragmentation means allocating a larger partition than necessary (perhaps because of fixed-size partitions), leading to wasted space inside the allocated region.

8.4 a. With contiguous allocation, supporting dynamic memory would require pre-allocating enough space for the process to contain the maximum-sized heap, potentially wasting vast amounts of memory.
b. Segmentation requires either a heap segment, pre-allocated to the appropriate size, or the ability to add segments at execution time (thus growing the heap as required).
c. With paging, all that is needed is support for a large enough virtual address range, and thus a larger page table. Memory is added as necessary when the appropriate heap addresses are first referenced.

8.5 a & b. Contiguous-memory allocation and segmentation use variable-size blocks of memory, thus minimizing internal fragmentation at the expense of possible external fragmentation when trying to fit these blocks into memory. Paging uses constant-size blocks of memory, and thus avoids external fragmentation at the expense of internal fragmentation when the memory allocated is less than a full page.
c. Code sharing is difficult, if not impossible, in contiguous-memory allocation, since each process has an independent memory space. In paging or segmentation, we can map portions of each virtual address space to the same physical memory, allowing sharing.

8.11 In segmentation, sharing is easy: assign each sharable section of code its own segment, and a single identical mapping lets two processes share the code. In paging, we would need to map a particular set of pages to the shared code, which means changing multiple page-table entries. Further, each section of sharable code would need its own set of pages, so internal fragmentation might be an issue.

9.2 We need secondary storage to serve as swap space, a hardware page table with valid bits, and restartable instructions, so that we can recover from a page fault. For efficiency's sake, we probably also want a hardware TLB.

9.3 Copy-on-write duplicates an address space by simply mapping the child process's address space to the same physical pages as its parent's, and only making an explicit copy when the parent or child writes to a page. This requires hardware support to inform the OS when there is a write to a shared page, so that the copy can be made.

9.4 Given an instruction referencing the address 11123456, the hardware first determines the page number (11123456 / 4096 = 2715) and the offset (11123456 % 4096 = 2816). It then examines the page-table entry for page 2715. If the valid bit is set, there will be a frame number corresponding to that page, and the physical location will be 2816 bytes into that frame. If the valid bit is not set, the hardware traps to the OS. The OS then determines whether 2715 is a valid page. If so, it is on swap and must be paged in: the OS finds a free frame, or evicts a page to make one, updates the page table, and restarts the instruction; the physical location is then 2816 bytes into that frame.
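A minimal C sketch of this translation, just to make the arithmetic and the valid-bit check concrete; the pte_t structure, the page_table array, and the frame number 42 are illustrative assumptions, not any real MMU or OS interface.

#include <stdio.h>
#include <stdint.h>

#define PAGE_SIZE 4096u

typedef struct {
    uint32_t frame;   /* frame number, meaningful only when valid is set */
    int      valid;   /* valid (present) bit */
} pte_t;

int main(void)
{
    uint32_t vaddr  = 11123456u;
    uint32_t page   = vaddr / PAGE_SIZE;   /* 11123456 / 4096 = 2715 */
    uint32_t offset = vaddr % PAGE_SIZE;   /* 11123456 % 4096 = 2816 */

    /* Hypothetical page table covering a 16 MiB virtual address space. */
    pte_t page_table[4096] = { 0 };
    page_table[page] = (pte_t){ .frame = 42, .valid = 1 };

    printf("page = %u, offset = %u\n", page, offset);

    pte_t pte = page_table[page];
    if (pte.valid) {
        /* Physical address = frame base + offset within the frame. */
        uint32_t paddr = pte.frame * PAGE_SIZE + offset;
        printf("physical address = %u\n", paddr);
    } else {
        /* This branch is where real hardware would trap to the OS. */
        printf("page fault on page %u\n", page);
    }
    return 0;
}

In a real system the invalid case is where the MMU traps to the OS, which either pages the data in from swap (possibly after evicting a page) or terminates the process on an invalid reference, as described above.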
9.6 The clock pointer moves fast when it is forced to skip over a series of pages whose reference bits are set, and slowly when it simply chooses the next page because that page's bit is not set. Thus the pointer moves quickly when the pages currently in memory are still being accessed frequently (and so keep having their reference bits set), and slowly when most references are to pages not in memory (so that the pages that are in memory have their reference bits cleared).

9.14 The effective access time (in microseconds) is 0.8(1) + 0.18(2) + 0.02(20000 + 2) = 401.2.

9.17 Setting delta to a low value increases the page-fault frequency (since fewer processes will have their full working sets resident in memory) but increases the number of active processes (since with smaller working sets, we can fit more processes into memory). Increasing delta does the exact opposite.

9.20 Using multiple page sizes allows a lower degree of internal fragmentation, since we can allocate a page just large enough to hold whatever memory we are allocating. We can also reduce both the total I/O and the page-fault frequency by more closely matching program locality (note that this scheme gains the benefits of both the large and small page sizes described in Section 9.9.2). The problem with multiple page sizes is that the hardware can no longer immediately discern the page number from the virtual address, since pages differ in size. Assuming we have a finite number of page sizes, though, we can design a tiered page-table system in which each table entry is either a frame number or a pointer to another page table. These tables are arranged in decreasing order of page size, and we follow the chain of tables to progressively disambiguate the size of the page and then determine whether it is in memory; a sketch of this lookup follows.
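A minimal C sketch of that tiered lookup, under the assumption of a two-tier hierarchy with hypothetical 4 MiB and 4 KiB page sizes; the table_t and tier_entry_t layouts and the lookup() routine are illustrative, not an actual hardware page-table format.

#include <stdio.h>
#include <stdint.h>
#include <stddef.h>

typedef struct table table_t;

typedef struct {
    int       is_leaf;  /* 1: frame maps a page of this tier's size; 0: next is a finer-grained table */
    uint64_t  frame;
    table_t  *next;
} tier_entry_t;

struct table {
    uint64_t      page_size;  /* page size at this tier, in bytes */
    size_t        nentries;
    tier_entry_t *entries;
};

/* Walk the tables from largest page size to smallest until a leaf entry
   resolves the address; return the physical address, or -1 for an unmapped
   reference (where a real system would take a page fault). */
static int64_t lookup(const table_t *t, uint64_t vaddr)
{
    uint64_t addr = vaddr;   /* offset within the region this table covers */
    while (t != NULL) {
        uint64_t index  = addr / t->page_size;
        uint64_t offset = addr % t->page_size;
        if (index >= t->nentries)
            return -1;
        const tier_entry_t *e = &t->entries[index];
        if (e->is_leaf)
            return (int64_t)(e->frame * t->page_size + offset);
        /* This region is mapped with smaller pages: descend one tier and
           translate only the offset within the current (larger) page. */
        t = e->next;
        addr = offset;
    }
    return -1;
}

int main(void)
{
    /* Second tier: hypothetical 4 KiB pages covering one 4 MiB region. */
    static tier_entry_t small_entries[1024];
    small_entries[3] = (tier_entry_t){ .is_leaf = 1, .frame = 7, .next = NULL };
    table_t small = { .page_size = 4096, .nentries = 1024, .entries = small_entries };

    /* First tier: hypothetical 4 MiB pages; entry 0 defers to the 4 KiB
       table, entry 1 maps an entire 4 MiB page directly. */
    static tier_entry_t big_entries[2];
    big_entries[0] = (tier_entry_t){ .is_leaf = 0, .frame = 0, .next = &small };
    big_entries[1] = (tier_entry_t){ .is_leaf = 1, .frame = 5, .next = NULL };
    table_t big = { .page_size = 4ull * 1024 * 1024, .nentries = 2, .entries = big_entries };

    printf("%lld\n", (long long)lookup(&big, 3 * 4096 + 100));           /* resolved by the 4 KiB tier */
    printf("%lld\n", (long long)lookup(&big, 4ull * 1024 * 1024 + 200)); /* resolved by the 4 MiB tier */
    return 0;
}

In this example the first address is not resolved at the 4 MiB tier and so descends to the 4 KiB table, while the second address is mapped directly by a large page, illustrating how following the chain disambiguates the page size before a frame number is produced.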