Opal paper review

From: Reid Wilkes (reidwilkes_at_verizon.net)
Date: Sun Jan 18 2004 - 14:11:23 PST

    This paper describes a prototype implementation of an OS named "Opal".
    Several very significant features distinguish Opal from other
    well-known operating systems. The most obvious, and the one on which
    the paper is primarily focused, is that Opal provides a single virtual
    address space across the entire system. This contrasts sharply with
    virtually every major OS, which gives each process (or process-like
    construct) on the machine its own private virtual address space.
    Further, because the authors assert that the primary purpose of a
    process is to provide this private virtual address space, Opal drops
    the concept of a process entirely and instead abstracts program
    execution as one or more lightweight threads.

    One of the major ideas the paper repeatedly hits on is that protection
    and addressing are two orthogonal issues. This point is emphasized
    because one of the most obvious criticisms of the single address space
    approach is that it does not provide the protection that multiple
    private address spaces would. Thus, the paper describes a mechanism of
    "protection domains" in which threads execute and to which rights are
    assigned. In Opal, these "rights" are actually termed capabilities,
    which makes this just another in a series of papers we have read
    describing capability-based protection mechanisms. As capabilities are
    assigned to protection domains, they confer rights to act on units of
    memory referred to as "segments". Because segments are fairly
    coarse-grained chunks of memory, the performance of this
    access-rights-to-memory approach is stated to be acceptable - leaving
    finer-grained protection to other levels of abstraction. This
    combination of a protection system, segmented memory, and a single
    virtual address space provides a system with capabilities at least
    equivalent to those of more traditional systems.
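
    To make the mechanism concrete, here is a minimal sketch in C of how a
    protection domain might mediate a thread's access to a segment. All
    names and types below are my own invention for illustration, not
    Opal's actual interfaces:

        #include <stdbool.h>
        #include <stddef.h>
        #include <stdint.h>

        /* Hypothetical types; Opal's real interfaces differ. */
        typedef struct {
            uintptr_t base;    /* start address in the global address space */
            size_t    length;  /* segment size - a coarse-grained unit */
        } segment_t;

        typedef struct {
            const segment_t *seg;  /* segment this capability names */
            bool can_read;
            bool can_write;
        } capability_t;

        typedef struct {
            capability_t caps[16];  /* capabilities held by this domain */
            int          ncaps;
        } protection_domain_t;

        /* A thread executing in `dom` may touch `addr` only if some
         * capability held by the domain covers that address with the
         * requested right. */
        static bool domain_check(const protection_domain_t *dom,
                                 uintptr_t addr, bool write)
        {
            for (int i = 0; i < dom->ncaps; i++) {
                const capability_t *c = &dom->caps[i];
                if (addr >= c->seg->base &&
                    addr <  c->seg->base + c->seg->length &&
                    (write ? c->can_write : c->can_read))
                    return true;
            }
            return false;
        }

    Because the check is per segment rather than per object, it can be
    performed rarely (for instance, when a segment is attached to a
    domain) rather than on every memory access.
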
    Yet, the key idea behind unifying the virtual address space across the
    system is to greatly improve the performance and programming model for
    certain classes of applications. The basic idea is that when multiple
    independent programs work on shared data represented as a pointer-rich
    data structure, effectively communicating or passing parts of that
    structure between programs requires costly processing to translate the
    embedded pointers between the different address spaces. This cost can
    be eliminated by having all programs operate in the same virtual
    address space, so that they can exchange and operate on common data
    without performing any manipulations or conversions on it.
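
    The benefit is easy to see in code. A minimal sketch, with invented
    names, of a pointer-rich structure living in a shared segment:

        #include <stddef.h>
        #include <stdint.h>

        /* A linked structure in a shared segment. Because every domain
         * sees the same virtual addresses, the embedded pointers are
         * valid everywhere; handing the list to another program is just
         * handing over the pointer to its head. */
        struct node {
            int          value;
            struct node *next;  /* raw pointer, meaningful in all domains */
        };

        /* With private per-process address spaces, the same hand-off
         * requires translating every embedded pointer, e.g. into offsets
         * relative to the segment base and back again on the other side: */
        static uintptr_t to_offset(const struct node *n, uintptr_t seg_base)
        {
            return n ? (uintptr_t)n - seg_base : 0;
        }

        static struct node *from_offset(uintptr_t off, uintptr_t seg_base)
        {
            return off ? (struct node *)(seg_base + off) : NULL;
        }

    It is exactly this marshaling and unmarshaling work that the single
    address space eliminates.
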
    The switch to a single virtual address space also clearly has
    implications for how programs are linked and how dynamic modules are
    loaded and executed - in most cases it seems reasonable to expect that
    such a system could reduce or eliminate some messy problems common in
    today's systems, such as module relocation at load time.

    Of course, one of the major concerns with a system such as this is
    depletion of the address space. The authors almost make a point of
    declining to speculate on how large an address space must be for
    depletion not to be an issue, yet they do indicate that a 64-bit
    system (the architecture on which they implemented the prototype)
    provides an extremely large address space that would be hard to
    imagine consuming in any reasonably bounded time frame at today's
    data usage rates. This is, however, a potential fallacy in my opinion.
    There are countless examples throughout the history of computer
    technology where people declared some amount of disk space, processing
    power, memory, or address bits to be "enough" - statements that were
    perfectly reasonable at the time they were made. Naturally, many of
    these assumptions caused great difficulty in later years when they
    proved insufficient due to unforeseen growth in how computers are
    used.

    All this is not to say that the single address space approach is
    severely flawed - in fact, I found this paper extremely compelling and
    feel strongly that this is likely the direction the industry will
    eventually go. It is only to say that the address space issue cannot
    be taken lightly, and that in general the only way to avoid building
    in assumptions that may be broken in the future is to also build in
    mechanisms to recycle addresses that are no longer used.
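
    As an illustration of that last point - my own sketch, not anything
    from the paper - a global address space allocator can recycle freed
    ranges through a simple free list:

        #include <stdint.h>
        #include <stdlib.h>

        /* Hypothetical global-address-space range allocator. */
        struct range {
            uintptr_t     base;
            size_t        length;
            struct range *next;
        };

        static struct range *free_list  = NULL;  /* recycled ranges */
        static uintptr_t     next_fresh = 0x100000000ULL;  /* arbitrary */

        /* Reuse a freed range when one is big enough (the remainder of an
         * oversized range is discarded for simplicity); otherwise carve a
         * fresh range off the end of the space. */
        uintptr_t as_alloc(size_t length)
        {
            for (struct range **p = &free_list; *p; p = &(*p)->next) {
                if ((*p)->length >= length) {
                    struct range *r    = *p;
                    uintptr_t     base = r->base;
                    *p = r->next;
                    free(r);
                    return base;
                }
            }
            uintptr_t base = next_fresh;
            next_fresh += length;
            return base;
        }

        /* Returning ranges to the free list is what keeps a single
         * shared address space from being slowly depleted. */
        void as_free(uintptr_t base, size_t length)
        {
            struct range *r = malloc(sizeof *r);
            if (r == NULL)
                return;  /* out of memory: leak the address range */
            r->base   = base;
            r->length = length;
            r->next   = free_list;
            free_list = r;
        }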

