CSE 451 - Winter 1999

Sample Solutions, Quiz 2

Chapter 3, Silberschatz and Galvin, 5th edition

  1. The five major activities of the operating system with regard to process management are: creating and deleting both user and system processes; suspending and resuming processes; providing mechanisms for process synchronization; providing mechanisms for process communication; and providing mechanisms for deadlock handling.

  2. The three major activities of the operating system with regard to memory management are: keeping track of which parts of memory are in use and by whom; deciding which processes to load when memory space becomes available; and allocating and deallocating memory space as needed.

  3. The three major activities of the operating system with regard to secondary storage management are: free-space management; storage allocation; and disk scheduling.

  4. The five major activities of the operating system with regard to file management are: creating and deleting files; creating and deleting directories; supporting primitives for manipulating files and directories; mapping files onto secondary storage; and backing up files on stable (nonvolatile) storage.

  5. The purpose of the command interpreter (or shell) is to provide a convenient interface between the user and the operating system. One excellent reason to keep the command interpreter separate from the kernel is that it's easier to have many different command interpreters, leaving the user free to select the interface he or she is most comfortable with. (If you want to run mysh as your command interpreter, for instance, you can.) Another reason is that the shell doesn't need to be part of the OS, and a lot of people think it's good for the OS to be as lightweight as possible. This is a design decision.

  6. Five services provided by the operating system and the conveniences they provide: You may have sensed a recurring theme in the above answers: a lot of the conveniences provided by the OS could technically be accomplished in userland if necessary. However, since these are resources that all user programs will want to have, it makes more sense to take care of them once and for all in the OS. The OS does things that users are too lazy, too careless, or too malicious to be allowed to do themselves.

  7. The purpose of system calls is to provide an interface for user programs to request operating system services.
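    For concreteness (this example is ours, not part of the quiz answer), here is how a user program on a POSIX system asks the OS for a service. The write() library routine is a thin wrapper around the write system call: the process traps into the kernel, which performs the I/O on its behalf.

    ```c
    #include <string.h>
    #include <unistd.h>

    int main(void) {
        const char *msg = "hello from a system call\n";
        /* write() traps into the kernel; the kernel does the actual I/O
           to file descriptor 1 (standard output) and returns to us. */
        write(STDOUT_FILENO, msg, strlen(msg));
        return 0;
    }
    ```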

  8. The purpose of system programs is to make the system useful. Imagine that you have a UNIX workstation on your desk, but that all it has is the operating system. It doesn't have ls or cat or cc or emacs or even vi. It's a fully working system, but you have no way of finding out what's in the file system, editing a file, or compiling it, so it's hard even to bootstrap your way to a useful system by writing all of your own programs.

  9. The layered approach to system design can make the process of implementing a very complex system cleaner and easier to think about. Each layer can take for granted that the layer below it will provide it with the interface it exports, and build from there.

  10. OS designers like virtual machine architectures because they can develop new OS techniques more safely and easily: since they are only working on a virtual machine and not a real one, system operation need not be interrupted for kernel hacking. (Normal kernel hacking makes the entire machine unstable and very unfriendly to other users.) Users like the virtual machine architecture because it provides them with the illusion of a whole machine all to themselves, identical to a real machine, even when there aren't enough real machines to go around. (NOTE: while the virtual machine concept is an intriguing one, it isn't considered a key concept of this course.)

  11. We like to separate mechanism and policy so that the system is flexible and can adapt to different workloads and users. Ideally, the mechanism should support a wide range of policies.

  12. See the email archive for this question. Again, this is not considered a key concept of this course, so if it's a little fuzzy, no worries.

Chapter 4, Silberschatz and Galvin, 5th edition

  1. Concurrent programming introduces all sorts of interesting new issues: How should the different processes share resources such as the CPU and devices? How should they be laid out in memory? How should processes be created and deleted? Where should their state be stored when they are not running? What facilities should be provided for cooperation and synchronization?

  2. The short-term scheduler selects from the processes currently ready to run. It runs frequently, on the order of once every 100 milliseconds. The medium-term scheduler is in charge of reducing the degree of multiprogramming in the system when necessary, by swapping some processes out. It runs only when necessary. The long-term scheduler really only exists on batch systems: it selects jobs from the pool in mass storage and loads them into memory for execution, thereby controlling the degree of multiprogramming.

  3. A normal context switch requires the PC, the stack pointer, and the registers to be reloaded with the new process's state. The DECSYSTEM-20, which has multiple register sets, presumably needs only the PC and the stack pointer to be updated (unless there are separate PCs and SPs in each register set?) and the active register set to be changed. If all register sets are in use, then one register set must be written to memory while the desired set is copied in from memory.

  4. Threads have two advantages over multiple processes: they share an address space, so communication is easier; and they are much lighter weight, so they can be created and destroyed quickly. However, with user-level threads, if any thread makes a blocking system call, every thread in the process is stopped while the request is serviced. A web server might benefit from the use of threads so that it can handle many requests concurrently. A program to find large prime numbers, which is very compute intensive and does little I/O, might not.
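    To make the shared-address-space point concrete, here is a small pthreads sketch (our illustration, not part of the original answer): the worker threads hand results back to main() just by writing into a shared array, with no explicit message passing or kernel-mediated IPC.

    ```c
    #include <pthread.h>
    #include <stdio.h>

    #define NTHREADS 4
    static int results[NTHREADS];    /* visible to every thread: one address space */

    static void *worker(void *arg) {
        int id = (int)(long)arg;
        results[id] = id * id;       /* "communicate" by plain memory write */
        return NULL;
    }

    int main(void) {
        pthread_t tid[NTHREADS];
        int sum = 0;
        for (long i = 0; i < NTHREADS; i++)
            pthread_create(&tid[i], NULL, worker, (void *)i);
        for (int i = 0; i < NTHREADS; i++)
            pthread_join(tid[i], NULL);   /* wait for all workers */
        for (int i = 0; i < NTHREADS; i++)
            sum += results[i];
        printf("sum = %d\n", sum);        /* 0 + 1 + 4 + 9 */
        return 0;
    }
    ```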

  5. Threads require their own PCs, register sets, and stacks. Processes require all of those things, plus their own address spaces, open files, and signal handlers.

  6. To switch contexts between two threads, only the PC, the stack pointer, and the registers need to be updated. A process context switch requires all of those things, plus OS intervention for scheduling: placing the old process on the ready (or waiting) queue, selecting the next process to run, and possibly remapping some memory.

  7. User-level threads are managed by the user process and are invisible to the kernel. However, they must either use asynchronous I/O or all block if any thread needs to do I/O. Kernel threads are managed by the OS. They take a lot longer to switch between, but they don't suffer the I/O problems of user threads.

  8. Producer (assume last_to_go is initialized to consumer, indicating an empty buffer):

    repeat
        ...
        produce an item in nextp
        ...
        while (in = out) and (last_to_go = producer) do no-op;
        buffer[in] := nextp;
        last_to_go := producer;
        in := (in + 1) mod n;
    until false;

    Consumer:

    repeat
        while (in = out) and (last_to_go = consumer) do no-op;
        nextc := buffer[out];
        last_to_go := consumer;
        out := (out + 1) mod n;
        ...
        consume the item in nextc
        ...
    until false;
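    As a sanity check, the same last_to_go trick can be sketched in runnable C. This is our illustration, not the book's: the busy-wait loops are replaced by a single-threaded fill-then-drain demonstration, which shows that all n slots of the buffer get used (without the flag, in = out is ambiguous and only n-1 slots are usable).

    ```c
    #include <stdio.h>

    #define N 4
    enum who { PRODUCER, CONSUMER };

    static int buffer[N];
    static int in = 0, out = 0;
    static enum who last_to_go = CONSUMER;   /* buffer starts out empty */

    /* Full when in == out and the last operation was a produce;
       empty when in == out and the last operation was a consume. */
    static int full(void)  { return in == out && last_to_go == PRODUCER; }
    static int empty(void) { return in == out && last_to_go == CONSUMER; }

    static int produce(int item) {
        if (full()) return 0;                /* a real producer would spin here */
        buffer[in] = item;
        last_to_go = PRODUCER;
        in = (in + 1) % N;
        return 1;
    }

    static int consume(int *item) {
        if (empty()) return 0;               /* a real consumer would spin here */
        *item = buffer[out];
        last_to_go = CONSUMER;
        out = (out + 1) % N;
        return 1;
    }

    int main(void) {
        int produced = 0, consumed = 0, item;
        while (produce(produced)) produced++;   /* fills every slot, not n-1 */
        while (consume(&item)) consumed++;      /* drains every slot */
        printf("produced %d, consumed %d\n", produced, consumed);
        return 0;
    }
    ```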

  9. Consider the mailbox IPC scheme.
  10. IPC, or interprocess communication, is handy when you want multiple processes on the same machine to work together. For instance, you may want to connect up some processes with a pipe on the command line: "grep foobar *.h | less". Or, in perhaps a more familiar example, you may want to drag a piece of a spreadsheet in Excel into your Word document. These are examples of IPC.
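    Here is a minimal sketch of pipe-based IPC (our example, assuming a POSIX system): the parent writes a message down a pipe and the child reads it, which is essentially the plumbing the shell sets up for "grep foobar *.h | less".

    ```c
    #include <stdio.h>
    #include <unistd.h>
    #include <sys/wait.h>

    int main(void) {
        int fd[2];
        char buf[64] = {0};
        pipe(fd);                         /* fd[0] = read end, fd[1] = write end */
        if (fork() == 0) {                /* child: plays the role of "less" */
            close(fd[1]);
            read(fd[0], buf, sizeof buf - 1);
            printf("child got: %s\n", buf);
            return 0;
        }
        close(fd[0]);                     /* parent: plays the role of "grep" */
        write(fd[1], "foobar", 6);
        close(fd[1]);
        wait(NULL);                       /* reap the child */
        return 0;
    }
    ```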

    RPC, or remote procedure call, allows for a local process to request that some work be done by a process on some other machine. RPC can "make it look like" a remote procedure call is just a regular procedure call, except maybe not quite as reliable. Inside the guts of an RPC call, the arguments to the remote procedure are "marshalled," or bundled into a safe form for transmission over a network, and sent as a message to some remote process. The remote process (or the RPC handling layer under it) unmarshalls them, executes the procedure, and then sends back the result. This makes some client-server operations much less painful. There may be more on this if we make it to the distributed systems chapter.
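    To make "marshalling" concrete, here is a toy sketch (ours; the add procedure and the buffer layout are invented for illustration, and the network is replaced by a local byte buffer): the client stub packs the arguments into a flat message, and the server side unpacks and executes them.

    ```c
    #include <stdio.h>
    #include <string.h>
    #include <stdint.h>

    /* Client stub: marshal the arguments of a hypothetical remote
       call add(a, b) into a flat byte buffer for transmission. */
    static size_t marshal_add(uint32_t a, uint32_t b, unsigned char *buf) {
        memcpy(buf, &a, sizeof a);
        memcpy(buf + sizeof a, &b, sizeof b);
        return sizeof a + sizeof b;
    }

    /* "Server" side: unmarshal the arguments, execute, return the result. */
    static uint32_t serve_add(const unsigned char *buf) {
        uint32_t a, b;
        memcpy(&a, buf, sizeof a);
        memcpy(&b, buf + sizeof a, sizeof b);
        return a + b;
    }

    int main(void) {
        unsigned char msg[8];
        marshal_add(19, 23, msg);                 /* pack the arguments */
        printf("result = %u\n", serve_add(msg));  /* "send" is just a buffer here */
        return 0;
    }
    ```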