From: shearerje_at_comcast.net
Date: Wed Feb 11 2004 - 05:49:17 PST
This paper discusses the design of a virtual memory system for a low-end (at the time) commodity computing system. The fact that it was a commodity system changes the design-decision paradigm from that of, say, the Mach OS research system. The goal was to provide an extended address space without compromising performance or significantly increasing the cost of the overall system. This meant designing for a specific hardware platform with well-known characteristics while not demanding that the hardware have special (therefore expensive) features to support virtual memory. Consideration was also given to making the system upgradeable without requiring modification of user applications.
The system exposes a 32-bit virtual address space to each process. However, only half of this (2 GB) is reserved specifically for the process. The other half overlays a system region that is common to all processes (2 GB seemed like a lot of space for a single minicomputer application in 1982, before anyone dreamed of home editing of digital movies). The system region gave applications a way to invoke OS and driver services. I thought it was particularly clever that the first part of this space contained a vector of indirections to the actual OS procedures located later in the space. This allows the OS to be upgraded, changing the size and location of OS procedures, without re-linking the user applications.
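The indirection-vector trick can be sketched in a few lines. This is a hypothetical illustration, not VMS code: the names (service_vector, OPEN_SLOT, sys_open_v1/v2) are invented, and the stable slot index stands in for the fixed addresses applications were linked against.

```python
# Hypothetical sketch of the service-vector idea: applications call through
# fixed slots; an OS upgrade rebinds a slot to a relocated or rewritten
# routine without re-linking any caller.

def sys_open_v1(name):
    return f"open({name}) via v1"

# The vector sits at a well-known location; slot numbers, not procedure
# addresses, are the stable part of the interface.
service_vector = [sys_open_v1]
OPEN_SLOT = 0

def app_open(name):
    # What a linked application does: indirect through the fixed slot.
    return service_vector[OPEN_SLOT](name)

def sys_open_v2(name):
    return f"open({name}) via v2"

# "Upgrading the OS": the routine moves/changes, the slot is rebound,
# and existing callers are unaffected.
service_vector[OPEN_SLOT] = sys_open_v2
```

After the rebind, app_open transparently reaches the new routine, which is exactly why the size and layout of the OS procedures behind the vector can change freely.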
The mapping between the virtual address space and real hardware addresses was performed at page-level granularity. Each page was 512 bytes, so a 32-bit address broke into a 2-bit region selector, a 21-bit virtual page number within the region, and a 9-bit byte offset within the page. VMS does not separate addressing from protection, so the protection layer is implemented with the same page-level granularity. One interesting feature of this is that the first 512 bytes of process space are unusable. The designers recognized that the most common run-time error encountered (particularly in C programs) is an attempt to use an uninitialized or NULL pointer, which will point to address 0. So they made address 0 off limits and, due to the protection granularity, this cost the use of the entire page. If they had designed the system so that the system space came before the process space and grew downward, this page could have been part of the “reserved” portion of system space.
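The address split above is just shifting and masking. A minimal sketch, assuming the VAX layout of two region-select bits on top, a 21-bit virtual page number, and a 9-bit offset:

```python
PAGE_SIZE = 512      # 2**9 bytes per page
OFFSET_BITS = 9      # low 9 bits: byte offset within the page
VPN_BITS = 21        # next 21 bits: virtual page number within a region
REGION_SHIFT = 30    # top 2 bits: which region (process vs. system space)

def split_vaddr(addr):
    """Split a 32-bit virtual address into (region, vpn, offset)."""
    offset = addr & (PAGE_SIZE - 1)
    vpn = (addr >> OFFSET_BITS) & ((1 << VPN_BITS) - 1)
    region = addr >> REGION_SHIFT
    return region, vpn, offset

# An address with the top bit set falls in system space (region 2 or 3),
# which is how the shared system half overlays every process's view.
```

For example, split_vaddr(0x80000200) yields region 2, page 1, offset 0: the second page of system space. Note also that any address with region 0 and vpn 0 lands in that deliberately unusable first page.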
Paging was managed in two tiers. The “Pager” worked with the pages of a specific application, and the “Swapper” worked with whole applications. The beauty of this approach is that an ill-behaved application could only cause excessive page faulting in itself; it could not force the system to page out another application’s pages. The swapper, on the other hand, moved the image of an entire application to and from main memory to keep the highest-priority processes resident.
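The containment property comes from each process having its own resident-set limit: a fault past the limit evicts one of that process's own pages. A minimal sketch, assuming FIFO replacement within the per-process set (the class and names are my own, not from the paper):

```python
from collections import deque

class ResidentSet:
    """Per-process resident set with FIFO replacement (hypothetical sketch).

    When the set is full, the victim page is always taken from this
    process's own FIFO, never from another process's memory.
    """

    def __init__(self, limit):
        self.limit = limit
        self.fifo = deque()   # resident virtual page numbers, oldest first

    def fault(self, vpn):
        """Handle a fault on vpn; return the evicted vpn, or None."""
        if vpn in self.fifo:
            return None                    # already resident: nothing to do
        evicted = None
        if len(self.fifo) >= self.limit:
            evicted = self.fifo.popleft()  # victim from this process only
        self.fifo.append(vpn)
        return evicted
```

However hard one process thrashes against its own limit, the eviction traffic it generates never touches pages outside its own FIFO, which is the isolation the review praises.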
The “Pager” grouped application pages into three categories: a list of free pages ready for use, pages that are currently in use, and pages that have been modified and are ready to be written to disk. I found it interesting that the free list included resident pages that had been paged out without being written to. The effect of this was that a page fault on a page still on either the free list or the modified list would bring that page right back into in-use memory without requiring a disk access. That is a pretty cool efficiency feature.
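That reclaim path can be sketched as a fault handler that checks the free and modified lists before touching the disk. A hypothetical illustration (the class, dict layout, and frame values are invented for the sketch):

```python
class PageLists:
    """Sketch of the three page categories and the no-I/O "soft fault" path."""

    def __init__(self):
        self.in_use = {}     # vpn -> frame: pages in some working set
        self.free = {}       # clean pages removed from working sets
        self.modified = {}   # dirty pages queued for writeback

    def fault(self, vpn):
        """Return (frame, needed_disk_io) for a faulting virtual page."""
        if vpn in self.free:
            # Soft fault: the contents are still in the frame; reclaim it.
            self.in_use[vpn] = self.free.pop(vpn)
            return self.in_use[vpn], False
        if vpn in self.modified:
            # Soft fault: reclaim the dirty frame and cancel its writeback.
            self.in_use[vpn] = self.modified.pop(vpn)
            return self.in_use[vpn], False
        # Hard fault: the page really is gone; read it back from disk.
        frame = f"frame-for-{vpn}"
        self.in_use[vpn] = frame
        return frame, True
```

Only the last branch costs a disk access; pages lingering on the free or modified lists come back for the price of a list operation, which is the efficiency win described above.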
I did not understand the section on demand-zero pages and copy-on-reference pages. When and how would I use these features?