1. 9.4/10.4. Which of the following program techniques and structures are "good" for a demand-paged environment?

    1. Stack - Good. Memory is allocated and data is accessed in a sequential pattern, so we get good locality for prefetching.

    2. Hashed symbol table - Bad. Each access can potentially go to any allocated page of the table.

    3. Sequential search - Good. Again, memory accesses are sequential.

    4. Binary search - Bad. Accesses are all over the place.

    5. Pure code (read-only code) - Good. Since code usually has good locality, prefetching will work well.

    6. Vector operations - Good. Operating on a whole array at a time produces lots of sequential accesses (stepping through each element of the array in order).

    7. Indirection - Bad. The pointers themselves may sit close together in the code, but what they point to can be anywhere, so locality is lost.
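    The locality argument running through these answers can be made concrete with a small sketch (not from the text): simulating page faults under LRU replacement for a sequential reference pattern versus a scattered, hash-table-like one. The page/frame sizes here are arbitrary illustration values.

```python
import random
from collections import OrderedDict

def count_faults(refs, frames):
    """Count page faults for a reference string under LRU replacement."""
    memory = OrderedDict()  # page -> True, ordered by recency of use
    faults = 0
    for page in refs:
        if page in memory:
            memory.move_to_end(page)        # hit: mark most recently used
        else:
            faults += 1                     # miss: page fault
            if len(memory) == frames:
                memory.popitem(last=False)  # evict least recently used
            memory[page] = True
    return faults

# 1000 accesses spanning 10 pages of data, with only 4 frames of memory.
sequential = [i // 100 for i in range(1000)]             # stack / sequential-search pattern
random.seed(0)
scattered = [random.randrange(10) for _ in range(1000)]  # hashed-table pattern

print(count_faults(sequential, 4))   # 10 -- one compulsory fault per page
print(count_faults(scattered, 4))    # hundreds -- most accesses miss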

  2. 9.9/10.9. Which of the following will increase CPU utilization?

    Physical memory is clearly over-allocated; the system is spending its time paging rather than computing.

    1. Get a faster CPU - No. We are already under-utilized.

    2. Get a bigger paging disk - No. If this were the issue, we'd be out of memory and not doing anything.

    3. Increase the degree of multiprogramming - No. More processes would over-allocate memory even further and make the thrashing worse.

    4. Decrease the degree of multiprogramming - Yes. With fewer processes competing for frames, each remaining process has enough memory to run without constant paging.

    5. Install more main memory - Yes. We won't be as over-allocated, so we'll spend less time paging and more doing actual work.

    6. Add prefetching - Yes. Assuming the code has reasonable locality, we'll be leveraging our disk reads to fetch more data, so we'll have fewer faults.

    7. Increasing the page size - Maybe. If the code has good locality, this is the equivalent of prefetching. If it doesn't, then the larger page size means we can keep fewer (useful) pages in memory, so we'll have more page faults rather than fewer.

  3. 9.12/10.12. How to simulate a reference bit in a system that doesn't support one?

    Every system needs a valid/invalid bit, and usually the valid bit is set when a page is brought into memory. Instead, the OS could track resident pages itself and leave the bit set to invalid when a page is brought in. The first reference then generates a trap to the OS. Since the OS knows whether the page is actually resident, it can tell a real page fault from a first reference, and either handle the fault or set an internal (software) reference bit for the page and mark it valid.
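    The scheme can be sketched as OS-side bookkeeping plus a trap handler. The names (`PageTableEntry`, `touch`, `clear_reference`) are hypothetical, not from any real kernel:

```python
class PageTableEntry:
    def __init__(self):
        self.valid = False       # hardware-visible valid/invalid bit
        self.in_memory = False   # OS-private: does a frame actually hold the page?
        self.referenced = False  # OS-private: the simulated reference bit

def touch(pte, load_page):
    """Trap handler: the hardware trapped because the valid bit was off."""
    if pte.in_memory:
        # Not a real fault: just the first reference since the bit was cleared.
        pte.referenced = True
        pte.valid = True         # later accesses proceed without trapping
    else:
        load_page(pte)           # genuine page fault: bring the page in
        pte.in_memory = True
        pte.referenced = True
        pte.valid = True

def clear_reference(pte):
    """Periodically clear the software bit and re-arm the trap."""
    pte.referenced = False
    pte.valid = False
```

    The cost is one extra trap per page per clearing interval, which is the price of not having hardware support.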

  4. 10.11/11.11. Describe a problem that arises if users are allowed to access directories as files, and how to solve it.

    A directory file must track the physical locations of the files in that directory. A user who can read the directory as an ordinary file can therefore learn where each file lives on disk and access those blocks directly, bypassing whatever protection scheme is on the files themselves. We can solve this by allowing access to directories only through system calls, so the OS controls what a user can read from the directory file.
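    A toy sketch of the idea, with hypothetical names (Python's underscore privacy is only a convention, so treat `_entries` as standing in for kernel-protected state): the call-style interface returns names but never the block map.

```python
class Directory:
    """Stand-in for a kernel-owned directory: the block map never leaves it."""

    def __init__(self):
        self._entries = {}  # name -> list of disk block numbers ("kernel" state)

    def add(self, name, blocks):
        self._entries[name] = list(blocks)

    def list_names(self):
        # The only user-facing view, analogous to a readdir() system call:
        # names come back, physical locations do not.
        return sorted(self._entries)

d = Directory()
d.add("notes.txt", [17, 18, 42])
print(d.list_names())   # ['notes.txt'] -- block numbers stay hidden
```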

  5. 10.13/11.13. File control list vs. user control list.

    A file control list makes it easier to change permissions on a particular file, and assuming we use a compact file protection scheme like UNIX mode bits (rather than a full access control list), it takes less space. A user control list reduces the overhead of opening a file (we just check whether the file is on the user's list), and makes it easier to revoke a user's permissions across a set of files at once.
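    The trade-off is essentially which axis the bookkeeping is indexed on. A sketch with illustrative names (the two dicts are transposes of each other):

```python
# Per-file list: file -> set of users allowed to access it.
file_acl = {"report.txt": {"alice", "bob"}, "notes.txt": {"alice"}}

# Per-user list: user -> set of files the user may access.
user_acl = {"alice": {"report.txt", "notes.txt"}, "bob": {"report.txt"}}

# Changing who may access one file touches a single per-file entry:
file_acl["report.txt"].discard("bob")

# Revoking everything a user holds touches a single per-user entry:
user_acl.pop("bob", None)

# An open() check against the per-user list is one lookup:
print("report.txt" in user_acl.get("alice", set()))  # True
```

    A real system that wanted both cheap opens and cheap per-file changes would have to keep the two views consistent, paying the update cost twice.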

  6. 11.4/12.4. Why must the allocation table be kept on mass storage rather than in memory?

    So that in the event of a system crash or power loss we don't lose the free-space list, which would be very difficult to reconstruct otherwise (it would require scanning the entire disk).

  7. 11.9/12.9. How do caches improve performance? Why don't we use more of them?

    Caches allow components of differing speeds to communicate more effectively, since the slower device can store data needed by the faster device in the (also faster) cache. Since caches are very expensive, we can't use more and still keep the system cost reasonable.
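    The effect can be shown with a small sketch (illustrative names, no eviction policy once the cache fills): a wrapper that services repeat reads of hot blocks from fast memory instead of the slow device.

```python
def make_cached(slow_read, capacity):
    """Wrap a slow read with a small in-memory cache; return (read, stats)."""
    cache, stats = {}, {"hits": 0, "misses": 0}

    def read(key):
        if key in cache:
            stats["hits"] += 1       # served from the fast cache
            return cache[key]
        stats["misses"] += 1         # must go to the slow device
        value = slow_read(key)
        if len(cache) < capacity:    # simplistic: stop caching once full
            cache[key] = value
        return value

    return read, stats

def disk_read(block):                # stand-in for a slow device
    return block * 2

read, stats = make_cached(disk_read, capacity=8)
for _ in range(3):
    for block in range(4):           # re-reading the same hot blocks
        read(block)
print(stats)                         # {'hits': 8, 'misses': 4}
```

    Only the first pass touches the slow device; every later pass is absorbed by the cache, which is exactly the speed-matching the answer describes.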