CSE576 Project 2:
Panoramic Mosaic Stitching

Noah Snavely --- 04/28/2005

 


What I Did

I implemented a panoramic image stitcher that uses my features from Project 1 to register pairs of images. It assumes that all the input images belong to a single panorama and are given in order. The main extension I implemented was to use absolute orientation to solve for a rigid transform (rotation plus translation) between images, rather than a pure translation. This helped in several cases.
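Since the rigid-transform fit is the core extension, here is a rough sketch of the kind of least-squares absolute-orientation solve involved. This is a generic 2-D Procrustes solution on matched feature coordinates, not the actual project code; the function name and interface are illustrative:

```python
import numpy as np

def rigid_transform_2d(src, dst):
    """Estimate rotation R and translation t so that dst ~= R @ src + t.

    src, dst: (N, 2) arrays of matched feature coordinates.
    Least-squares solution via SVD (absolute orientation / Procrustes).
    """
    src = np.asarray(src, dtype=float)
    dst = np.asarray(dst, dtype=float)
    src_c = src - src.mean(axis=0)   # center both point sets
    dst_c = dst - dst.mean(axis=0)
    U, _, Vt = np.linalg.svd(src_c.T @ dst_c)
    # Force a proper rotation (det = +1), not a reflection.
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, d]) @ U.T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t
```

Forcing det(R) = +1 matters: without that correction, noisy matches can yield a reflection instead of a rotation.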

For the blending routine, I used a simple alpha weighting scheme where pixels towards the center of the image were given a higher weight than pixels towards the edge. Essentially, this is a feathering method where the blend width extends over the entire image.
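The center-weighted feathering described above can be sketched as follows, for single-channel images on an axis-aligned canvas; the helper names and layout are mine, not the project's:

```python
import numpy as np

def feather_weights(height, width):
    """Per-pixel alpha that is 1 at the image center and falls off
    linearly to 0 at the edges (blend width = the whole image)."""
    y = np.minimum(np.arange(height), np.arange(height)[::-1]).astype(float)
    x = np.minimum(np.arange(width), np.arange(width)[::-1]).astype(float)
    y /= y.max() if y.max() > 0 else 1.0   # normalize each axis to [0, 1]
    x /= x.max() if x.max() > 0 else 1.0
    # Weight = normalized distance to the nearest edge.
    return np.minimum.outer(y, x)

def blend(images, offsets, canvas_shape):
    """Composite grayscale images at integer (row, col) offsets,
    accumulating alpha-weighted pixels and normalizing by total weight."""
    acc = np.zeros(canvas_shape)
    wsum = np.zeros(canvas_shape)
    for img, (r, c) in zip(images, offsets):
        w = feather_weights(*img.shape)
        acc[r:r + img.shape[0], c:c + img.shape[1]] += w * img
        wsum[r:r + img.shape[0], c:c + img.shape[1]] += w
    return acc / np.maximum(wsum, 1e-12)   # avoid divide-by-zero off-image
```

Dividing the accumulated weighted sum by the total weight makes overlapping regions blend smoothly, with each image dominating near its own center.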



Results!

Here is a panorama of the test sequence:



Click here to view the QuickTime VR version.

As you can see, I didn't attempt to compensate for exposure differences between the images.
Here's a panorama of everyone's favorite stadium, Husky Stadium, shot with a Kaidan head. Luckily, no Huskies were in the stadium at the time. See if you can guess what time these images were taken!



Click here to view the QuickTime VR version.

This was a tricky panorama to create, because we tilted the camera upward so that the entire stadium was in the field of view, but we didn't measure the tilt angle. It turns out that under these circumstances you can't explain the motion of the images with a pure translation, so I estimated a translation plus rotation using absolute orientation, and got this image:



Estimating the radius and angular extent of this circular segment, and using the known focal length, I recovered the tilt angle geometrically. I then rewarped the images, rotating the camera by the recovered angle, and stitched those to create the "flat" panorama shown above. The result isn't perfect (there is a bit of blurriness not present in the curved image), since the angle wasn't recovered exactly. There is also a seam where the clouds aren't perfectly aligned: they moved a bit while we were resting after shooting half of the panorama.
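One way to make the tilt-recovery geometry concrete (this is my reconstruction of the argument, not the report's actual computation): under a pure pan with the camera tilted up by an angle theta, the arc's center is the vanishing point of the pan axis, which lies at distance f / tan(theta) from the image center, so theta = arctan(f / r), with focal length f and arc radius r both in pixels:

```python
import math

def tilt_from_arc(radius_px, focal_px):
    """Recover the camera tilt angle (radians) from the radius of the
    circular arc traced by a tilted pan in a translational mosaic.

    Assumed geometry: the arc's center is the vanishing point of the
    pan axis, at distance f / tan(theta) from the image center, so
    theta = atan(f / r).
    """
    return math.atan2(focal_px, radius_px)
```

As a sanity check, the relation behaves as expected: a flatter arc (larger radius) implies a smaller tilt, and when the radius equals the focal length the recovered tilt is 45 degrees.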
Here's another panorama shot with a Kaidan head, this time in the basement of the HUB. After taking all those pictures, I was ready for a break and went bowling.



Click here to view the QuickTime VR version.

Because there was so much motion in the scene, there are several ghosts in this panorama. I didn't try to get rid of them.
Finally, here is the result of stitching a handheld sequence (using only a translational motion model):





Click here to view the QuickTime VR version.

As expected, there is a lot of ghosting in these images. However, not all of it is due to parallax. When I used a rigid motion model, I got better results, and I could see that the panorama was curved, as with the stadium shots.

This suggests that I wasn't holding the camera level, but was tilting it slightly downward. Of course, even with this correction, objects (especially those nearest to the camera) exhibit ghosting.


