Project 2 Report: Autostitch

CSE576, Spring 2008

Xiaoyu Chen

April 29, 2008

In this project, I developed a system that automatically stitches a sequence of images into a panorama. First, applying the feature detector developed in the first project, the system detects discriminative features in each image and identifies matching features between pairs of images. Second, the system aligns each pair of images by computing their relative displacement from the best feature matches. Finally, the system stitches the aligned images together and blends them into a seamless panorama.

Methods

Warp images

Given an image, its planar coordinates need to be converted into spherical coordinates. The coordinate-mapping equations were given in Dr. Szeliski's lecture notes "Image Stitching" (p. 46). Moreover, the radial distortion present in the planar coordinates should be removed as part of this mapping. The equations modeling radial distortion were given in Dr. Seitz's lecture notes "Projection" (p. 28). For this step, the camera parameters, including the focal length and the radial distortion coefficients (i.e., K1 and K2), are provided.
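The warp above can be sketched as an inverse mapping: for each output (spherical) pixel, compute the corresponding planar point, apply the radial distortion model, and sample the input image there. The following is a minimal NumPy sketch under my own assumptions (grayscale input, nearest-neighbor sampling, and a `warp_spherical` function name of my choosing); it is not the course skeleton code.

```python
import numpy as np

def warp_spherical(image, f, k1=0.0, k2=0.0):
    """Inverse-map each output (spherical) pixel back into the planar
    input image, applying the radial distortion model on the way."""
    h, w = image.shape[:2]
    xc, yc = w / 2.0, h / 2.0
    out = np.zeros_like(image)
    ys, xs = np.mgrid[0:h, 0:w]
    # Spherical angles for each output pixel.
    theta = (xs - xc) / f
    phi = (ys - yc) / f
    # Point on the unit sphere, then projection onto the z = 1 plane.
    xhat = np.sin(theta) * np.cos(phi)
    yhat = np.sin(phi)
    zhat = np.cos(theta) * np.cos(phi)
    xn = xhat / zhat
    yn = yhat / zhat
    # Radial distortion model: scale by (1 + k1*r^2 + k2*r^4).
    r2 = xn ** 2 + yn ** 2
    d = 1.0 + k1 * r2 + k2 * r2 ** 2
    xd = xn * d
    yd = yn * d
    # Back to pixel coordinates in the input image (nearest neighbor).
    xi = np.round(f * xd + xc).astype(int)
    yi = np.round(f * yd + yc).astype(int)
    valid = (xi >= 0) & (xi < w) & (yi >= 0) & (yi < h)
    out[valid] = image[yi[valid], xi[valid]]
    return out
```

Doing the mapping in the inverse direction avoids holes in the output that a forward warp would leave.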

Align pair-wise images

The matching features of a pair of images are identified using the feature matcher from the first project. Based on those matches, we can estimate a motion model for the two images; in this project, we assume the motion is translation only. In particular, RANSAC is used to select a consistent set of feature matches for estimating the translation.

Stitch the aligned images

Given the warped images and their pair-wise translations, the final step is to stitch the images into a seamless panorama.
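One common way to make the result seamless is feathered blending: each image is accumulated into the panorama with a weight that falls off toward its edges, so overlapping regions are averaged smoothly. This is a minimal sketch under my own assumptions (grayscale images, integer translations, and accumulator arrays managed by the caller), not necessarily the exact scheme used in the system:

```python
import numpy as np

def blend_pair(pano, weight, image, dx, dy):
    """Accumulate `image` into the running panorama with a horizontal
    feather weight: 0 at the left/right borders, 1 in the middle.
    `pano` and `weight` are accumulators; divide at the end."""
    h, w = image.shape[:2]
    # Linear feather ramp across the image width.
    ramp = np.minimum(np.arange(w) + 1, w - np.arange(w)).astype(np.float64)
    ramp /= ramp.max()
    alpha = np.tile(ramp, (h, 1))
    pano[dy:dy + h, dx:dx + w] += alpha * image
    weight[dy:dy + h, dx:dx + w] += alpha
```

After accumulating every image, the panorama is recovered as `pano / np.maximum(weight, 1e-8)`; the small constant avoids division by zero in uncovered regions.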

Results

The test sequence

This is the provided test sequence.


A sequence taken with Kaidan head

Sequence I was taken in the CSE Atrium using a Kaidan head.


Another sequence taken with Kaidan head

Sequence II was taken on campus using a Kaidan head.


A hand-held sequence

Sequence III was taken on campus hand-held, without using a Kaidan head.


Discussion

The system succeeded in automatically stitching four different image sequences. In the panorama of Sequence I, the upper right corner of the LED wall is slightly blurred. This is because the lights on the wall were changing while the pictures were taken, which could introduce errors when matching features along the wall. In the panorama of Sequence II, the wall near the right end is not aligned very well. This highlights the importance of the feature detector in the first place: given the uniform color of the wall, it may be difficult for the detector to find correct feature matches there. In the panorama of Sequence III, there is slight ghosting at a couple of places along the baluster. Because this is a hand-held sequence, the camera was probably rotated slightly between shots, so our simple translation-only motion model may not capture the motion completely.