Peter Henry
CSE576 Computer Vision
Project 2

Panoramic Image Stitcher

Test Images:


Viewer Link

Kaidon Tripod Images:


Viewer Link

Hand Held Images:


Viewer Link

Description

I followed the project description. The feature matcher found good matches using 200 RANSAC iterations and an inlier threshold of 1 pixel. All three image sets obtained good overall translational matches. The feathering algorithm produces decent results, with some blurriness due to camera rotation and translation between shots. Also, for the Kaidon tripod shots, significant time passed between successive images because I was waiting for the shot to be clear of moving objects; this caused some clouds to move and the scene illumination to vary.
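The translational RANSAC step described above can be sketched as follows. This is a minimal illustration, not the project's actual code: the function name `ransac_translation` and the match representation (pairs of (x, y) points) are assumptions, but the settings match the report (200 iterations, 1-pixel threshold).

```python
import random

def ransac_translation(matches, iterations=200, threshold=1.0):
    """Estimate a 2-D translation from point matches ((x1, y1), (x2, y2))
    with RANSAC: hypothesize a translation from one random match, count
    matches that agree within the threshold, keep the largest inlier set."""
    best_inliers = []
    for _ in range(iterations):
        (x1, y1), (x2, y2) = random.choice(matches)
        dx, dy = x2 - x1, y2 - y1  # candidate translation from one match
        inliers = [m for m in matches
                   if ((m[1][0] - m[0][0] - dx) ** 2 +
                       (m[1][1] - m[0][1] - dy) ** 2) <= threshold ** 2]
        if len(inliers) > len(best_inliers):
            best_inliers = inliers
    # refine the translation by averaging over all inliers
    n = len(best_inliers)
    dx = sum(b[0] - a[0] for a, b in best_inliers) / n
    dy = sum(b[1] - a[1] for a, b in best_inliers) / n
    return dx, dy, best_inliers
```

With only a translation to estimate, a single match fully determines the model, which is why even 200 iterations are more than enough to find a consistent inlier set.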

The only feature I added beyond the basic project description was an optional offset parameter for the blendPair command, specifying a vertical shift so that as much of the captured images as possible remains in view after the automatic shear transformation used to correct for vertical drift. I also wrote Python scripts to automate the entire process, allowing for more efficient experimentation.
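The drift-correcting shear plus the vertical offset can be written as a single per-pixel mapping. This is a hedged sketch of the idea rather than the project's implementation: the function name `shear_for_drift` and its parameterization (total drift over the panorama width) are assumptions.

```python
def shear_for_drift(x, y, drift, width, offset=0.0):
    """Map a pixel (x, y) through a vertical-drift shear: a point at
    horizontal position x is shifted by drift * x / width, so the
    accumulated vertical drift is cancelled at the right edge of the
    panorama.  The optional offset (the extra blendPair parameter
    described above) shifts the result vertically so the image content
    stays in view after the shear."""
    return x, y - drift * (x / width) + offset
```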

I was impressed with the effectiveness of the feature matching, which correctly registered all image sets on the first try. Even the successive hand-held shots aligned well, though the output is suboptimal due to inconsistent camera location and view direction.

Possible Improvements

A good improvement would be to use full 3D rotations as the camera model to correct for camera rotation between shots. Other blending strategies, such as graph cuts or pyramid blending, would also be worth testing. It would also be fun to add multiple rows of images to take full advantage of the spherical warping.
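As a sketch of why multi-band blending could help, here is a minimal two-band blend on 1-D signals (NumPy only, not the project's code; the function name and band split are assumptions): low frequencies are feathered with a smooth weight, while high frequencies switch at the seam, which reduces the ghosting of fine detail that pure feathering produces.

```python
import numpy as np

def two_band_blend(a, b, alpha, k=9):
    """Blend 1-D signals a and b with per-sample weight alpha in [0, 1].
    Low frequencies (box-blurred) are mixed smoothly; high frequencies
    (the residual) are taken from whichever image dominates (alpha > 0.5),
    so fine detail is not doubled up across the overlap."""
    kern = np.ones(k) / k
    low_a = np.convolve(a, kern, mode="same")
    low_b = np.convolve(b, kern, mode="same")
    high_a, high_b = a - low_a, b - low_b
    low = alpha * low_a + (1 - alpha) * low_b   # feathered coarse band
    high = np.where(alpha > 0.5, high_a, high_b)  # hard seam for detail
    return low + high
```

A full Laplacian-pyramid blend repeats this split over several octaves, but the two-band case already shows the key idea.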