Lecture: file systems
preparation
administrivia
- lab 6 is out
- lab X: browse the challenge problems in labs 1–6 and the project ideas, then talk to us
- check SETGATE for possible races
overview
- file system goals
- data persistence across reboots
- resource naming & sharing
- user-space programming interfaces
- review CSE 333 on low-level I/O
- file-system syscalls: open/close/read/write/fsync/link/unlink/… (sketched below)
- mmap syscalls: memory read/write via mmap/munmap/msync (sketched below)
- others: async I/O (e.g., Windows)
- questions
- why file descriptors? why not syscalls that take file names only?
- given a fd, can you get the corresponding file name?
- can multiple directories “contain” the same file?
- what happens on disk after running the following code? (you should be able to describe the details after this week)
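
For concreteness, here is the kind of snippet the question has in mind: a minimal sketch using the syscalls listed above. The file names and contents are made up for illustration, and error checking is omitted.

```c
#include <fcntl.h>
#include <unistd.h>

int main(void)
{
    // create a file and write a few bytes (error checking omitted)
    int fd = open("grades.txt", O_CREAT | O_WRONLY | O_TRUNC, 0644);
    write(fd, "4.0\n", 4);
    fsync(fd);                      // force the data down to the disk
    close(fd);

    // give the same file a second name, then remove the original name
    link("grades.txt", "backup.txt");
    unlink("grades.txt");
    return 0;
}
```

Which on-disk structures change, and in what order: the file's inode, its data blocks, the directory's entries, the free-block bitmap? That is what the rest of the week fills in.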
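
And a corresponding sketch of the mmap route to the same bytes, again with made-up names and no error checking:

```c
#include <fcntl.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    int fd = open("backup.txt", O_RDWR);

    // map the first page of the file; loads/stores replace read()/write()
    char *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    p[0] = '3';                     // plain memory write, no syscall per access
    msync(p, 4096, MS_SYNC);        // push the dirty page back to the file
    munmap(p, 4096);
    close(fd);
    return 0;
}
```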
I/O stacks
- xv6
- per-process fd → struct file → fs layers (fs.c) → block I/O (bio.c) → disk driver (ide.c) (sketched below)
- real-world I/O stack: Linux
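
To make the xv6 layering above concrete, here is a heavily simplified, self-contained sketch of the read path. struct buf, struct inode, struct file, bread, brelse, readi, and fileread mirror the roles of their xv6 namesakes, but the bodies (including the fake in-memory disk standing in for bio.c/ide.c) are made up for illustration, not the actual xv6 code.

```c
#include <stdio.h>
#include <string.h>

#define BSIZE 512

// bio.c/ide.c layer, stubbed: a "buffer cache" over a fake in-memory disk
static unsigned char disk[64][BSIZE];

struct buf { unsigned char data[BSIZE]; unsigned int blockno; };

static struct buf *bread(unsigned int blockno)
{
    static struct buf b;                 // one-entry "cache", just for the sketch
    memcpy(b.data, disk[blockno], BSIZE);
    b.blockno = blockno;
    return &b;
}

static void brelse(struct buf *b) { (void)b; }   // nothing to release here

// fs.c layer: an inode maps file offsets to block numbers
struct inode { unsigned int addrs[12]; unsigned int size; };   // direct blocks only

static int readi(struct inode *ip, char *dst, unsigned int off, unsigned int n)
{
    if (off >= ip->size) return 0;
    if (off + n > ip->size) n = ip->size - off;
    struct buf *b = bread(ip->addrs[off / BSIZE]);
    unsigned int m = n < BSIZE - off % BSIZE ? n : BSIZE - off % BSIZE;
    memcpy(dst, b->data + off % BSIZE, m);
    brelse(b);
    return (int)m;
}

// file.c layer: the per-process fd table maps an fd to a struct file
struct file { struct inode *ip; unsigned int off; };

static int fileread(struct file *f, char *dst, int n)
{
    int r = readi(f->ip, dst, f->off, (unsigned int)n);
    if (r > 0) f->off += (unsigned int)r;
    return r;
}

int main(void)
{
    // set up a tiny one-block file on the fake disk
    memcpy(disk[3], "hello, fs\n", 10);
    struct inode ino = { .addrs = {3}, .size = 10 };
    struct file  f   = { .ip = &ino, .off = 0 };

    char buf[16] = {0};
    int n = fileread(&f, buf, (int)sizeof buf - 1);
    printf("read %d bytes: %s", n, buf);   // -> read 10 bytes: hello, fs
    return 0;
}
```

The point is the layering: fileread knows only about struct file and offsets, readi knows how an inode maps offsets to block numbers, and only bread/brelse touch disk blocks.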
block devices
- storage medium
- disk controller
- connects storage devices to the bus
- may perform virtual-to-physical translation (e.g., Flash Translation Layer)
- examples
- NVM Express: driver in JOS lab 5
- PATA (IDE): driver in bootloader & xv6
- SATA/AHCI
- others: VirtIO, SCSI (consider for lab X)
- read specs & write drivers: see JOS's kern/nvmereg.h
- device abstraction: block device
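
One common way to express the block-device abstraction in a kernel is an operations table that each driver (NVMe, IDE/PATA, AHCI, VirtIO, ...) fills in. The sketch below uses hypothetical names (struct blkdev, blkdev_ops, blk_read); it is not JOS's or Linux's actual interface, and the RAM-disk "driver" is a toy stand-in for a real one.

```c
#include <stdint.h>
#include <stddef.h>
#include <stdio.h>
#include <string.h>

#define SECTOR_SIZE 512

struct blkdev;

// the abstraction: anything that can read/write fixed-size sectors by number
struct blkdev_ops {
    int (*read)(struct blkdev *dev, uint64_t sector, void *buf, size_t nsectors);
    int (*write)(struct blkdev *dev, uint64_t sector, const void *buf, size_t nsectors);
};

struct blkdev {
    const char              *name;          // e.g., "nvme0", "ide0"
    uint64_t                 nsectors;      // capacity in sectors
    const struct blkdev_ops *ops;           // filled in by the driver
    void                    *private_data;  // driver-specific state (queues, registers, ...)
};

// upper layers (buffer cache, file system) call only this, never a specific driver
static int blk_read(struct blkdev *dev, uint64_t sector, void *buf, size_t n)
{
    if (sector + n > dev->nsectors) return -1;
    return dev->ops->read(dev, sector, buf, n);
}

// a toy "driver": a RAM disk standing in for a real nvme/ide/ahci driver
static uint8_t ramdisk_data[8][SECTOR_SIZE];

static int ramdisk_read(struct blkdev *dev, uint64_t sector, void *buf, size_t n)
{
    (void)dev;
    memcpy(buf, ramdisk_data[sector], n * SECTOR_SIZE);
    return 0;
}

static int ramdisk_write(struct blkdev *dev, uint64_t sector, const void *buf, size_t n)
{
    (void)dev;
    memcpy(ramdisk_data[sector], buf, n * SECTOR_SIZE);
    return 0;
}

static const struct blkdev_ops ramdisk_ops = { ramdisk_read, ramdisk_write };

int main(void)
{
    struct blkdev rd = { "ram0", 8, &ramdisk_ops, NULL };

    uint8_t in[SECTOR_SIZE] = "superblock goes here", out[SECTOR_SIZE];
    rd.ops->write(&rd, 1, in, 1);    // what a driver-level write looks like
    blk_read(&rd, 1, out, 1);        // what upper layers see: just sector numbers
    printf("%s\n", (char *)out);
    return 0;
}
```

The design point: the buffer cache and file system above only see numbered sectors through blk_read and the write counterpart, so swapping the IDE driver for an NVMe driver does not change any file-system code.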