Panorama-Rama

My program produced results that closely matched the solution's, both for the test images and my own. Some slight distortion is noticeable, but it is hard to say which result is better. I could not get the viewer to work, perhaps because I am not on a CS server. Everything was done in a fairly standard way, except that the blending uses a trapezoidal alpha profile: within a blendWidth-sized strip on each side of the image, the alpha value increases or decreases linearly.
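The trapezoidal alpha profile can be sketched roughly as below. This is a minimal illustration, not the actual assignment code; `trapezoid_alpha` is a hypothetical helper name, and only the per-column weight is shown (blending would multiply each image by its weight and normalize by the weight sum in the overlap).

```python
import numpy as np

def trapezoid_alpha(width, blend_width):
    """Per-column alpha for one image strip: ramps 0 -> 1 over the first
    blend_width columns, stays at 1 in the middle, then ramps 1 -> 0 over
    the last blend_width columns (a trapezoid profile)."""
    alpha = np.ones(width)
    ramp = np.linspace(0.0, 1.0, blend_width)
    alpha[:blend_width] = ramp       # left edge fades in
    alpha[-blend_width:] = ramp[::-1]  # right edge fades out
    return alpha
```

In the overlap between two neighboring images, one image's alpha is fading out while the next one's is fading in, which hides the seam.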

The test image sequence, containing 4 images:

full-size panorama

Here's a panorama of Red Square that turned out ok using the Kaidan head:

full-size panorama

viewer

Here's a panorama from the deck of the Allen Center, once again with the Kaidan head:

full-size panorama

viewer

Here's an experimental vertical panorama we took inside the Allen Center with the Kaidan tripod lying on one of the tables:

full-size panorama

viewer

Here's a panorama of my driveway taken without the Kaidan head, on an Olympus C5050 camera. I have guessed the focal length to be around 700 pixels, but that is a rough estimate. The image is definitely not as sharp as it could be with a Kaidan head, but still decent; the blurring may be due to an incorrect estimate of the focal length. This particular panorama contains 17 images:

full-size panorama

viewer
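Where the focal length enters is the cylindrical warp applied to each image before alignment. The following is a sketch of the standard inverse mapping, assuming the usual cylindrical projection (the function name and signature are mine, not from the assignment code); an f that is off by even a modest amount changes the warp and can leave residual misalignment that shows up as blur.

```python
import numpy as np

def cylindrical_coords(x, y, f, cx, cy):
    """Inverse map for cylindrical warping: given an output (cylindrical)
    pixel (x, y), return the planar image coordinates to sample from.
    f is the focal length in pixels; (cx, cy) is the image center."""
    theta = (x - cx) / f       # angle around the cylinder axis
    h = (y - cy) / f           # height on the cylinder
    xp = np.tan(theta)         # project back onto the image plane
    yp = h / np.cos(theta)
    return f * xp + cx, f * yp + cy
```

With f ≈ 700 for this camera, each warped image is resampled through this map and the warped strips are then aligned by translation.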

The above panorama originally contained 33 images, but that resulted in a more ambiguous image:

full-size panorama

Images of objects too close to the camera did not work well, as the motion between frames is too large for Lucas-Kanade to converge; this can really be seen in the Allen Center image, near the floor.
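The reason large motion breaks things is visible in the math: translational Lucas-Kanade linearizes the image around the current estimate, so one Gauss-Newton step looks roughly like the sketch below (my own minimal version, without the pyramid or iteration a real implementation would use). The linearization is only valid for displacements on the order of a pixel, which is why nearby objects with large parallax fail.

```python
import numpy as np

def lk_translation_step(img1, img2):
    """One Gauss-Newton step of translational Lucas-Kanade: solve the
    2x2 normal equations built from the spatial gradients and the
    temporal difference. Returns the estimated (dx, dy) such that
    img2(x) ~= img1(x - d)."""
    Ix = np.gradient(img2, axis=1)   # horizontal gradient
    Iy = np.gradient(img2, axis=0)   # vertical gradient
    It = img2 - img1                 # temporal difference
    A = np.array([[np.sum(Ix * Ix), np.sum(Ix * Iy)],
                  [np.sum(Ix * Iy), np.sum(Iy * Iy)]])
    b = -np.array([np.sum(Ix * It), np.sum(Iy * It)])
    return np.linalg.solve(A, b)
```

A coarse-to-fine pyramid extends the usable motion range, but close-up scenes can still exceed it.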

Being lazy with the handheld camera did not work well either; the following panorama contains only seven images:

full-size panorama

Unfortunately I have lost the images, but panoramas of indoor areas with lots of furniture or machinery (such as inside More Hall) work well, as there are many easily distinguishable lines that can be matched.

I have also tried my program on images that I took this break, before knowing how to take decent panoramas:

This does not work as well as autostitch, as I was not consistent with the x-translation between shots, yet ran a script that assumed identical displacement estimates for every pair. A slightly better image can be obtained by estimating the displacements pairwise.

Overall, I have noticed that playing with the X-displacement values in Lucas-Kanade affects the results much more than the focal length does, while the k values are probably the last thing to tune. Finding the correct Y-displacement formula for the affine transform was also somewhat tricky, as the test sequence seems to work even when other sequences are significantly off.
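For context on that Y-displacement affine: a common way to handle it (my sketch of the standard idea, not necessarily the assignment's exact formula) is a shear that removes the vertical drift accumulated across the full panorama width, so the two ends of a 360° strip line up.

```python
import numpy as np

def drift_correction(total_width, total_drift):
    """Affine (homogeneous 3x3) that shears the stitched strip so the
    accumulated vertical drift across the full panorama width is removed:
    y' = y - x * (total_drift / total_width)."""
    a = total_drift / total_width
    return np.array([[1.0, 0.0, 0.0],
                     [-a,  1.0, 0.0],
                     [0.0, 0.0, 1.0]])
```

A small error in this term is invisible on a short test sequence but accumulates across a long one, which would explain why the test sequence worked while other sequences drifted noticeably.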