Lecture: virtual memory applications
preparation
  - read OSPP §9, Caching and Virtual Memory, and §10, Applications of Memory Management
  - take a look at the lazy allocation exercise
lab 2 questions
  - boot_alloc(): what should it “allocate” and return?
  - pgdir_walk(): why allocate “a new page table page”?
  - make sure you understand exercise 8, lab 1
      - it’s easy to “understand” the high-level concepts; implementing them is another story
  - go to this week’s sections and office hours, or schedule extra office hours
 
virtual memory recap
  - the CPU expects the OS to set up a data structure (the page table) for VA → PA translation
  - isolation: each process has its own address space
      - per-process page table; flags (P/W/U/…)
      - switch page tables when switching processes
      - xv6
          - struct proc in proc.h
          - scheduler() → switchuvm(p) → lcr3(v2p(p->pgdir))
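      (aside: roughly what that call chain boils down to, paraphrased from memory rather than quoted from the xv6 source; switchuvm() also does TSS/segment bookkeeping, elided here)

        // paraphrase of xv6 (x86) switchuvm() in vm.c -- check the real source
        void
        switchuvm(struct proc *p)
        {
          // ... TSS / kernel-stack setup elided ...
          lcr3(v2p(p->pgdir));   // CR3 <- physical address of p's page directory;
                                 // this CPU now translates VAs through p's page table
        }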
 
 
  - exercise: re-read chapter 2 of the xv6 book
      - what’s the initial setup in entrypgdir?
      - focus on kinit1() and kvmalloc() in main(), main.c
      - a few questions to help you understand the code:
          - how does xv6 allocate/free physical memory?
          - how does the free list work?
          - in walkpgdir(), are the permissions PTE_P | PTE_W | PTE_U correct (overly generous)? see the sketch after this list
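      (for reference, a paraphrase from memory of xv6’s walkpgdir(); check the real vm.c for the exact code. PDX, PTX, PTE_ADDR, kalloc(), and p2v()/v2p() are xv6’s own macros and functions)

        // return a pointer to the PTE for virtual address va in page directory
        // pgdir, allocating a new page-table page if alloc != 0
        static pte_t *
        walkpgdir(pde_t *pgdir, const void *va, int alloc)
        {
          pde_t *pde = &pgdir[PDX(va)];   // top 10 bits of va index the page directory
          pte_t *pgtab;

          if (*pde & PTE_P) {
            pgtab = (pte_t *)p2v(PTE_ADDR(*pde));
          } else {
            if (!alloc || (pgtab = (pte_t *)kalloc()) == 0)
              return 0;
            memset(pgtab, 0, PGSIZE);
            // the line the question above asks about: the PDE gets PTE_P | PTE_W | PTE_U,
            // leaving per-page protection to the flags in the individual PTEs
            *pde = v2p(pgtab) | PTE_P | PTE_W | PTE_U;
          }
          return &pgtab[PTX(va)];         // next 10 bits index within the page table
        }

      (on x86 a user access must be allowed by both the PDE and the PTE, so a permissive PDE still leaves the PTE flags in control)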
 
 
example: page fault
volatile char *p = (volatile char *)0xcafebeef;
cprintf("XXX: %x\n", *p);
  - add the above code to i386_init() and see what happens
  - invalid read/write
      - add two lines of code in JOS to read an invalid address
      - with unpatched QEMU: reboot
      - with patched QEMU: stop & print registers
          - useful for debugging
          - what are the values of CR2, CR3, EIP?
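      (to connect the register dump to the hardware: CR2 holds the faulting virtual address, CR3 the physical address of the current page directory, and EIP the faulting instruction. A hedged sketch of how the lab 3 trap code will read CR2; rcr2() is declared in inc/x86.h, and the printed fields here are just illustrative)

        // sketch of a later-lab page-fault handler reading the fault address
        void
        page_fault_handler(struct Trapframe *tf)
        {
            uint32_t fault_va = rcr2();   // CR2 = virtual address that caused the fault
            cprintf("page fault at va %08x, eip %08x\n", fault_va, tf->tf_eip);
            // ... then decide whether the fault can be handled or the environment must die ...
        }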
 
 
example: isolation
  - read address 0 vs. read KERNBASE (0xf0000000), from lab 1, exercise 8
      - the same value - why?
      - how about writes? what makes the difference? try it yourself (see the snippet after this item)
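      (illustrative snippet, assuming it runs under the entry_pgdir mapping from lab 1, i.e. before lab 2’s mem_init() installs kern_pgdir; under that mapping both VAs translate to physical address 0, so the two prints should match. The variable names are arbitrary)

        uint32_t *lo = (uint32_t *)0x00000000;          // low virtual address
        uint32_t *hi = (uint32_t *)KERNBASE;            // 0xf0000000
        cprintf("*lo = %08x, *hi = %08x\n", *lo, *hi);  // same value: both map to PA 0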
 
  - the kernel runs at high virtual addresses
      - the kernel is mapped into every process’s address space - why?
      - commonly done by today’s OSes - why high VAs for the kernel?
      - how does the kernel set this up? see the comments in kern/entry.S in JOS
 
examples: protection, virtualization, lazy allocation
  - protect against stack overflow
      - see Michael Barr’s Bookout v. Toyota testimony, “Toyota’s major stack mistakes”
      - trick: leave an unmapped guard page right below the user stack (see the sketch after this list)
      - JOS: inc/memlayout.h
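      (hypothetical sketch of how a page-fault handler could report this; USTACKTOP and PGSIZE are real JOS constants from inc/memlayout.h, but the one-page stack size, the guard-page placement, and the fault_va variable are assumptions for illustration)

        // in a page-fault handler; fault_va comes from rcr2()
        if (USTACKTOP - 2*PGSIZE <= fault_va && fault_va < USTACKTOP - PGSIZE) {
            // the fault landed in the unmapped guard page just below the stack:
            // report a stack overflow and kill the environment rather than let it
            // silently scribble over whatever lies below
            cprintf("stack overflow: fault at va %08x\n", fault_va);
        }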
 
  - implement a null-pointer-dereference exception
      - how would you implement this for Java, say obj.field?
      - trick: leave the page at VA zero unmapped (a user-space demo follows this list)
          - useful for catching program bugs
          - limitations?
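      (a user-space demo of the same idea in POSIX C rather than JOS or Java: because the page at VA 0 is unmapped, the hardware fault can be caught and turned into a language-level “exception”. This is a sketch of how a runtime could do it, not how any particular JVM actually does)

        #define _POSIX_C_SOURCE 200809L
        #include <setjmp.h>
        #include <signal.h>
        #include <stdio.h>
        #include <string.h>

        static sigjmp_buf recover;

        static void on_segv(int sig, siginfo_t *si, void *ctx)
        {
            (void)sig; (void)si; (void)ctx;   // si->si_addr is the faulting address (0 here)
            siglongjmp(recover, 1);           // "throw": unwind back to the guarded region
        }

        int main(void)
        {
            struct sigaction sa;
            memset(&sa, 0, sizeof sa);
            sa.sa_sigaction = on_segv;        // catch the page fault delivered as SIGSEGV
            sa.sa_flags = SA_SIGINFO;
            sigaction(SIGSEGV, &sa, NULL);

            volatile int *p = NULL;
            if (sigsetjmp(recover, 1) == 0)
                printf("%d\n", *p);           // faults: VA 0 is not mapped
            else
                printf("caught null pointer dereference\n");
            return 0;
        }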
 
 
  - limited physical memory
      - applications need more memory than is physically available
          - early days: two floppy drives
          - strawman: applications store part of their state to disk and load it back later
          - this makes applications hard to write
      - virtual memory: offer the illusion of a large, contiguous memory
          - swap space: the OS transparently pages some pages out to disk
          - distributed shared memory: access other machines’ memory across the network
 
 
  - memory-mapped files
      - mmap(): map a file into the address space, then read/write it like memory (example after this list)
      - simple programming interface
      - when to page in/page out content?
      - avoid data copying: send an mmap()ed file out to the network
          - compare to using read()/write()
          - no copy of the data from the kernel’s page cache into a user buffer
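      (minimal user-space example of the interface: POSIX mmap(), nothing JOS-specific; the file name is just a placeholder)

        #include <fcntl.h>
        #include <stdio.h>
        #include <sys/mman.h>
        #include <sys/stat.h>
        #include <unistd.h>

        int main(void)
        {
            int fd = open("/etc/passwd", O_RDONLY);         // placeholder file
            if (fd < 0) { perror("open"); return 1; }

            struct stat st;
            if (fstat(fd, &st) < 0) { perror("fstat"); return 1; }

            // map the whole file; pages are faulted in lazily on first access
            char *data = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
            if (data == MAP_FAILED) { perror("mmap"); return 1; }

            fwrite(data, 1, st.st_size, stdout);            // "read" the file via memory loads

            munmap(data, st.st_size);
            close(fd);
            return 0;
        }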
 
 
  - copy-on-write fork
      - strawman fork: copy all pages from parent to child
      - observation: child and parent share most of their data
          - so mark the shared pages copy-on-write
          - and make a private copy only on a write page fault (conceptual sketch after this list)
      - lab 4: you will implement a user-level copy-on-write fork
 
      - other sharing
          - multiple guest OSes running inside the same hypervisor
          - shared objects: .so/.dll files
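      (conceptual sketch of the fault-handling half of copy-on-write: generic pseudo-C, not JOS’s or xv6’s code and deliberately not the lab 4 solution; helpers such as pte_lookup(), pte_page(), page_refcount(), alloc_page(), flush_tlb_entry(), and the pte_* bit-twiddling functions are made up for illustration)

        // on a write fault at va whose PTE is marked copy-on-write
        void handle_cow_fault(struct addr_space *as, uintptr_t va)
        {
            pte_t *pte = pte_lookup(as, va);
            void *old = pte_page(*pte);          // the physical page currently shared

            if (page_refcount(old) == 1) {
                // last user of the page: just make it writable again
                *pte = pte_make_writable(pte_clear_cow(*pte));
            } else {
                // still shared: give this process its own private copy
                void *copy = alloc_page();
                memcpy(copy, old, PGSIZE);       // via the kernel's mapping of both pages
                *pte = pte_make(copy, PTE_P | PTE_W | PTE_U);
                page_refcount_dec(old);
            }
            flush_tlb_entry(as, va);             // drop the stale, read-only translation
        }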
 
 
  - grow stack on demand: see the next in-class exercise