CSE 451, Introduction to Operating Systems, Spring 2013

VMware Virtual Machine Information

Installing

In this course, we will be using a specially-prepared virtual machine image for development work on some of the projects. The VM is standard as far as VMs go: if you've used VMware, VirtualBox, or similar before, there are no surprises. The documentation here should be enough to get you up and running, but if there are any questions just come to office hours. Better to get any issues with your VM ironed out sooner rather than later!

The VM is already installed on the lab Windows machines. Start VMware Player and open the Fedora Core virtual machine located in C:\VM. The first time the machine is opened, it will ask whether it was moved or copied. Choose copied. Similarly, go ahead and install the VMware Tools for Linux if prompted. The lab VM image is reset whenever you log out.

If you're working on your own computer, the VM image is available from attu and the other CSE servers at /cse/courses/cse451/13sp/FC18-CSE451-2013.tar.gz. This is a symlink to the most recent version of the VM image. You can grab it with scp. On Windows, if you don't have an SSH program installed, you can either

  • use something like Filezilla that has a full GUI,
  • or download pscp.exe, the PuTTY command-line program.
On Linux (and Mac) you can use scp from the terminal. Mac probably has something prettier, but I haven't checked; ask on the message board if you like.

To use this virtual machine image, you have two options: either use a lab machine with VMware Player installed (one of the Windows ones) or your own machine. For a personal computer, after downloading, just extract FC18-CSE451-2013.tar.gz into a new directory. The uncompressed size is about 20GB. Then, from VMware, choose File->Open and select the .vmx file. For a lab machine, open C:\VM\FC18-CSE451-2013-3-19\Fedora 64-bit.vmx.
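
On your own Linux or Mac machine, the download-and-extract steps might look like the following sketch (substitute your own CSE username for USER; the hostname and course path are as given above):

```shell
# Download the image from attu (substitute your CSE username)
scp USER@attu.cs.washington.edu:/cse/courses/cse451/13sp/FC18-CSE451-2013.tar.gz .

# Extract into a new directory; make sure you have ~20GB free
mkdir FC18-CSE451-2013
tar -xzf FC18-CSE451-2013.tar.gz -C FC18-CSE451-2013
```

Then point VMware's File->Open at the .vmx file inside the new directory.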

Next steps

Logging in: The username is 451user and the password is 451userpassword. Each lab machine has a local copy of the VM, and you can get root access with sudo.

Customizing the VM: Feel free to edit the virtual machine definition from within VMware so that it has more processors or memory or whatever. NOTE: the VM file is reset whenever you log out of a lab machine; no state is saved. Saving your files is your responsibility.

Getting/preparing the kernel: You can

  1. Use the VM on Windows. Due to a mistake with the image that was loaded, you first need to delete ~/rpmbuild to free up disk space. Then git clone (or copy) your edited source into the VM. This will have to be done each time you log into Windows, unfortunately.
  2. Use forkbomb as described in Project 1
  3. Use your own computer running 64-bit Linux (or Mac if you like pain).
In any event, once you have the kernel built, it can be copied into the FC18 VM using ssh or by setting up shared folders. In VMware, shared folders are enabled by clicking "Edit virtual machine settings" when the machine is powered off, then the Options tab -> Shared Folders. You can, for instance, share your Z:\ drive and it will show up in /mnt/hgfs. NOTE: you need the most recent version of VMware Tools installed to do this. You can also use sshfs to mount your CSE home folder in a location of your choosing. Come to office hours if there's any confusion about it.
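
As a sketch of the sshfs route (hostname is attu as above; the mount point is just an example):

```shell
# Mount your CSE home directory at ~/csehome (substitute your username)
mkdir -p ~/csehome
sshfs USER@attu.cs.washington.edu: ~/csehome

# ... copy files, build, etc. ...

# Unmount when you're done
fusermount -u ~/csehome
```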

Running your kernel on the FC18 VM: There is a nicer way, explained below. Remember, we are using specifically kernel version 3.8.3-201. There is already a bootloader entry for the kernel vmlinuz-3.8.3-201.cse451custom.fc18.x86_64, so all you have to do is overwrite (using sudo) that file with your own kernel. Then reboot, and if it is not already selected in grub (the bootloader), just select it with the arrow keys.
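
Concretely, assuming your freshly built image is at LDIR/arch/x86/boot/bzImage and the bootloader entry above lives under /boot (a reasonable assumption for a stock Fedora install), the overwrite step might look like:

```shell
# Replace the custom kernel with your own build, then reboot
sudo cp LDIR/arch/x86/boot/bzImage \
    /boot/vmlinuz-3.8.3-201.cse451custom.fc18.x86_64
sudo reboot
```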

If for some reason you introduce such horrific bugs in the kernel that it won't boot, there are backup kernels selectable from grub that will. Feel free to go wild. At the very worst just log out and back in to reset it.

(Advanced) Running your kernel on Qemu:

This is the nicer way. It is more work to set up, but much more convenient to work with once you do. Qemu runs a Debian VM in console mode and takes less than 5 seconds to boot to a shell. AND, you can use GDB on the host to debug the kernel; instructions are below.

The virtual machine image provided here is based on Debian Wheezy (the latest Debian version) and includes gcc, ssh, and common utilities. It is a standard Debian installation with an internet connection to the outside world.

Note for the following that the root directory of your Linux source will be called LDIR. This would be something like $HOME/cse451/linux-3.8.3-201.cse451custom.fc18.x86_64, i.e., wherever you can run make menuconfig.

Qemu Installation

Debian / Ubuntu apt-get install qemu-system
Fedora yum install qemu
Attu / Forkbomb I have compiled Qemu and it is available by adding /cse/courses/cse451/13sp/local/bin to your PATH in $HOME/.bashrc
Source You can download and compile Qemu yourself. Any recent version (later than about 1.2) should be fine.
Windows There are precompiled Windows binaries available, but I haven't done any testing with them.
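
For the attu/forkbomb case, the line to add to $HOME/.bashrc is just a PATH export:

```shell
# Add the course Qemu build to your PATH (attu/forkbomb only)
export PATH=/cse/courses/cse451/13sp/local/bin:$PATH
```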

The best way to run Qemu is directly from a Linux distribution. If it's not possible to use KVM (see below) on the lab Linux machines, let us know. It is also possible to run from the Fedora VM on a Windows machine; there are some tricks that may allow using KVM from within a guest. Ask if you're curious.

Kernel Setup

Your stock kernel will work on Qemu, but to enable networking and a few other bells and whistles, some of the drivers need to be changed from modules to built-in. You can download qemu-config and rename it to .config in the root kernel directory (don't forget to recompile!). Or, run make menuconfig in LDIR and make the following changes:

Device Drivers
  -> Network device support
    -> Virtio network driver  # this enables the virtual network card
    -> Ethernet driver support
      -> Intel PRO/1000*

File systems
  -> Second extended fs support
  -> Network File Systems
    -> NFS client support
       -> ALL suboptions
    -> Root file system on NFS  # for so-called diskless / thin client setups
    -> NFS server support
      -> ALL suboptions

Networking support
  -> Networking options
    -> IP: kernel level autoconfiguration # for more advanced setups, e.g.
      -> IP: DHCP support                 # providing the kernel over the network
      -> IP: BOOTP support
      -> IP: RARP support 
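
Whichever route you take, the rebuild afterwards is the usual one. A sketch, assuming qemu-config was downloaded to your home directory and LDIR is your kernel source root as defined above:

```shell
# Install the provided config (or make your changes via make menuconfig)
cp ~/qemu-config LDIR/.config

cd LDIR
make oldconfig   # answer any prompts for newly-enabled options
make -j4         # recompile; adjust -j to your core count
```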

VM Installation

Start by downloading the

  • VM image at /cse/courses/cse451/13sp/wheezy.raw.bz2
  • VM start script at /cse/courses/cse451/13sp/qemu-start.sh
The VM image will be 400MB uncompressed with a few hundred MB of free space left on it. 400MB is small enough to run inside the Fedora VMware image if you like. There is sufficient space to install small packages via apt-get install, but for running bigger programs you'll need to create a new Qemu disk and copy the contents over. See below.

Here are the steps to get Qemu running.

  1. Uncompress the wheezy.raw.bz2 file somewhere.
  2. Edit qemu-start.sh and change the KERNELDIR variable to point to your bzImage (i.e. LDIR/arch/x86/boot/bzImage).
  3. Assuming that qemu-start.sh and wheezy.raw are in the same directory, start up with
    $ bash qemu-start.sh
    
  4. Modify your kernel, shut down the guest with shutdown -h now, recompile, and repeat from step 3.
Note that you don't have to copy the kernel image anywhere; Qemu is happy to use the one in the Linux source tree.

Debugging with GDB

By default, qemu-start.sh will start Qemu listening for a GDB connection. There is a quirk in GDB such that it has trouble switching to 64-bit mode from 32-bit mode, which is what happens when a processor boots into 64-bit Linux. This means that you shouldn't connect GDB until after the kernel has booted. That is, wait till after the message Freeing unused kernel memory: ....

To connect, you need to let GDB know about the symbols available in the kernel. That information is in a binary file, LDIR/vmlinux. There is another vmlinux file, LDIR/arch/x86/boot/vmlinux. That is not the one you want. So, assuming that Qemu has been started by the qemu-start.sh script, you can connect and do normal GDB things with

$ gdb linux-3.8.3/vmlinux
GNU gdb (GDB) 7.4.1-debian
...more startup messages...
Reading symbols from LDIR/vmlinux...done.
(gdb) target remote localhost:1234
Remote debugging using :1234
native_safe_halt () at LDIR/arch/x86/include/asm/irqflags.h:50
50	}
(gdb) break sys_open
Breakpoint 1 at 0xffffffff81188850: file fs/open.c, line 971.
(gdb) cont
You must execute the cont command in GDB for your VM to become responsive. Now, try any program that uses the open syscall and watch GDB hit your breakpoint. Don't forget, cont in GDB to keep going in Qemu.

It is ok to exit out of GDB at any time. Qemu just keeps on trucking. You can reconnect with GDB at leisure, as well.

Debugging kernel modules with GDB

For this short tutorial I will assume that we are debugging the ext2undelete kernel module from Project 3, but the same process applies to debugging any kernel module. First, make sure that your kernel module is compiled with a low level of optimization to facilitate debugging (revert this change when you're done). Somewhere within the Makefile that was provided with the starter code, add the following line:

ccflags-y := -O0

After rebuilding, copy the kernel module into Qemu and then load it:

> insmod ext2undelete.ko 
[    7.229942] Disabling lock debugging due to kernel taint
[    7.236444] [undelete_init] Loading module undelete
[    7.240665] [build_super_block_map_iter_fn] Adding mapping 1 -> sda
[    7.243472] [build_super_block_map] Found 1 ext2 filesystems
[    7.245445] [undelete_init] Allocated 1 chrdevs, major=248, first minor=0
[    7.247507] [undelete_init] Created a node for file system sda under /dev/undelete/sda

Next, look up the addresses of the sections within the loaded kernel module. First navigate to /sys/module/ext2undelete/sections/:

> cd /sys/module/ext2undelete/sections/
> ls -A
.bss                       .init.text          .smp_locks  .text
.exit.text                 .note.gnu.build-id  .strtab     __mcount_loc
.gnu.linkonce.this_module  .rodata             .symtab

This directory contains a file for each section of the kernel module. For this example we'll use .text, .rodata, and .bss, but you might also care about symbols or code that are located elsewhere. To see the addresses of these sections, simply cat the corresponding files:

> cat .text .rodata .bss
0xffffffffa0000000
0xffffffffa0001030
0xffffffffa0002260

Now after following the instructions above to connect to a running kernel using GDB, you can add the symbols from the ext2undelete.ko file given these offsets:

(gdb) add-symbol-file 451repo/project3/ext2undelete.ko 0xffffffffa0000000 \
      -s .rodata 0xffffffffa0001030 -s .bss 0xffffffffa0002260
add symbol table from file "451repo/project3/ext2undelete.ko" at
.text_addr = 0xffffffffa0000000
.rodata_addr = 0xffffffffa0001030
.bss_addr = 0xffffffffa0002260
(y or n) y
Reading symbols from 451repo/project3/ext2undelete.ko...done.
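
Typing those addresses by hand is error-prone. A small helper function (hypothetical, not part of the course tools) can assemble the add-symbol-file command from a module's sections directory:

```shell
# Print a GDB add-symbol-file command for a module, reading the
# section load addresses from a sysfs-style sections directory.
make_symcmd() {
    local ko="$1" secdir="$2"
    echo "add-symbol-file $ko $(cat "$secdir/.text")" \
         "-s .rodata $(cat "$secdir/.rodata")" \
         "-s .bss $(cat "$secdir/.bss")"
}
```

Run it inside the guest, e.g. make_symcmd ext2undelete.ko /sys/module/ext2undelete/sections, and paste the printed command into GDB on the host.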

Finally, the fun part! You can now set breakpoints within the kernel module, poke at its variables, etc. Note that inside Qemu I execute cat /dev/undelete/sda after continuing within GDB in the example below.

(gdb) b undelete_read
Breakpoint 2 at 0xffffffffa0000a11: file 451repo/project3/undelete.c, line 348.
(gdb) c
Continuing.

Breakpoint 2, undelete_read (filp=0x8000, buf=0xffff8800065f3600 "", 
    count=18446612132421121552, offp=0x20000)
    at 451repo/project3/undelete.c:348
348                             size_t count, loff_t *offp) {
(gdb) n
350    struct undelete_dev *dev = (struct undelete_dev *)filp->private_data;
(gdb) 
353    DBG("%s called on dev %d, %d\n", __func__,
(gdb) 
356    *offp = 0;                  // cuz we can't really seek...
(gdb) p dev->super_block->s_id
$5 = "sda", '\000' <repeats 28 times>
(gdb) p num_super_blocks
$6 = 1

More Stuff

Speeding up boot: Debian normally executes a bunch of scripts when it starts up. If you don't need to use ssh, or connect to the internet, or anything fancy, and just want to run a test program, you can change this. There are two ways.

  1. Change the guest's "init" program. Edit qemu-start.sh and add init=/yourinitprogram to BOOTOPTS. Suggestions include just straight up /bin/bash or hey, even your amazing fsh. You may also need to set CONSOLE="" because the default I/O device for Qemu is a virtual screen, not the serial line that the console represents.
  2. Change the guest's /etc/inittab. The program "init" reads a file called /etc/inittab when it starts up that tells it what to do. Its syntax is a bit esoteric. I'm still trying to get this working in a convenient way that doesn't depend on a lot of steps. If anyone figures it out, let us know...
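
For option 1, the edit to qemu-start.sh might look like this (variable names as described above; /bin/bash is just one choice of init program):

```shell
# in qemu-start.sh
BOOTOPTS="$BOOTOPTS init=/bin/bash"   # skip Debian's startup scripts entirely
CONSOLE=""                            # default I/O is Qemu's virtual screen
```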

Mounting: wheezy.raw is an ext2 partition in a file. You can mount it directly via

$ mkdir mount_dir
$ sudo mount -o loop wheezy.raw mount_dir

# cd into the mount_dir
# make some changes?  look around.  have a good time.

$ sudo umount mount_dir
Just make sure to unmount it BEFORE running it in Qemu...

Expanding the disk size: To do this you need to create a new image file and then sudo cp everything from the old one. Like so (for a 10GB image):

# Create a new ext2 file system in a file
$ qemu-img create -f raw newImage.raw 10G
$ mkfs.ext2 newImage.raw # answer 'y' when asked to proceed (it's a file, not a block device)

# Make the file systems accessible from the host
$ mkdir oldImage
$ mkdir newImage
$ sudo mount -o loop wheezy.raw oldImage
$ sudo mount -o loop newImage.raw newImage

# Copy ALL files, including special files like the device nodes in /dev
$ sudo cp -avr oldImage/* newImage/

# unmount to avoid corruption when running Qemu
$ sudo umount oldImage
$ sudo umount newImage

# overwrite
$ mv newImage.raw wheezy.raw
Of course, you need to have root power to do this. That's the case in the Fedora VM and on your own computers.

Transferring files: You can use scp/sshfs as described elsewhere, NFS if you have your own server, or use the mounting trick.

Final notes

  • Running without KVM is a pain. Expect to spend 6 or more minutes booting into the stock Debian environment. If you speed up booting as described above, then it's back to a few seconds for the kernel itself. That's fine just to test out your syscall or library, but I wouldn't recommend doing much more.
  • If you've come this far, congrats! Qemu is a great tool. You can do embedded ARM development in a Qemu virtual machine, from the comfort of your living room. You can experiment with cross-platform, cross-cpu compatibility. You can make entire virtual networks to test your newest routing protocol.