Matthew Kerner's Panorama Project
To view the panoramas you will need Java installed. I have added comments on the project and extra-credit notes below the images.
Carpool Panorama (Kaidan)
Created with a Canon PowerShot S10 and a Kaidan head

Panorama
Office Panorama (Kaidan)
Created with a Canon PowerShot S10 and a Kaidan head

Panorama
Office Panorama (free)
Created with a Canon PowerShot S10

Panorama
Tutoring Panorama (Kaidan)
Created with a Canon PowerShot A10 (CS30012716) and a Kaidan head

Panorama
Tutoring Panorama (free)
Created with a Canon PowerShot A10 (CS30012716)

Panorama
Sample Image
Created from the sample images provided for the project

Panorama
I created these panoramas using cylindrical warping, Lucas-Kanade motion estimation, and feathered blending. Several lessons were learned:
- Observations on what worked well and what didn't
- The Kaidan head makes a huge difference; I am evidently very bad at taking panoramas without a tripod.
- Stitch assist does help somewhat.
- The Kaidan head makes it possible to estimate a displacement between one pair of adjacent images and reuse it as the Lucas-Kanade seed for all image pairs in the sequence.
- Without the Kaidan head, it is necessary to manually fine-tune the seed and to reduce the Lucas-Kanade levels and steps to minimize the displacement added on top of the seed. Interactive manual adjustment would be a useful feature for these sequences.
- For free-hand sequences, a rotation-based correction in the software would be useful. There were several image pairs that I couldn't line up properly because the transformation was not a simple (u,v) displacement. With SIFT this would not be an issue: I could compute the rotation, or a general affine warp, to align the images properly.
- Several adjacent image pairs had very few contrasting pixels in their overlap, and these required additional manual alignment. For example, the tutoring panoramas contained image pairs that met just to the left of the door to the room; the only contrasting pixels were around the recycling box near the bottom of the image. I had to manually tweak the initialization parameters before the parts of the box would line up properly. SIFT, or a brute-force search with a least-squares error function, would be a good way to discover an optimal seed without manual intervention.
- In the tutoring free-hand sequence, two pairs of adjacent images were taken such that they didn't overlap at all. I had to reduce Lucas-Kanade to one level with one iteration and manually align the images as closely as possible. This is where stitch assist makes a big difference.
  - You can see one just to the left of the first computer near the door.
  - You can see the other just to the left of the left-hand computer below the windows.
- Vertical drift is an issue regardless of whether the photos were taken with the Kaidan head.
- The camera parameters (focal length, k1, k2) are important. I was unable to line up the images taken with my own digital camera exactly, because I didn't have those parameters and didn't have a chance to use the camera calibration kit to find them.
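To illustrate why these parameters matter, here is a minimal sketch of how the focal length and radial distortion coefficients enter the cylindrical warp. This is a hypothetical Python/NumPy re-creation, not the project's actual code; it assumes a pinhole model with focal length `f` in pixels, distortion coefficients `k1`/`k2`, and nearest-neighbor sampling.

```python
import numpy as np

def cylindrical_warp(img, f, k1=0.0, k2=0.0):
    """Inverse-map each pixel of the cylindrical image back into the
    source image (pinhole model, optional radial distortion)."""
    h, w = img.shape[:2]
    yc, xc = (h - 1) / 2.0, (w - 1) / 2.0
    ys, xs = np.indices((h, w)).astype(np.float64)
    theta = (xs - xc) / f              # angle around the cylinder
    hgt = (ys - yc) / f                # height on the cylinder
    # unproject from the cylinder to normalized image-plane coordinates
    x = np.tan(theta)
    y = hgt / np.cos(theta)
    # apply radial distortion (this is where k1 and k2 enter)
    r2 = x * x + y * y
    d = 1.0 + k1 * r2 + k2 * r2 * r2
    xi = np.round(f * x * d + xc).astype(int)
    yi = np.round(f * y * d + yc).astype(int)
    valid = (xi >= 0) & (xi < w) & (yi >= 0) & (yi < h)
    out = np.zeros_like(img)
    out[valid] = img[yi[valid], xi[valid]]
    return out
```

A wrong `f` stretches the warped images, so adjacent pairs can never line up with a pure (u,v) displacement, which matches the alignment failures described above.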
- Extra Credit/Non-Standard Implementation
- Rather than just cropping the image on the x-axis, I also cropped it on the y-axis to eliminate the areas left empty by vertical drift. Some of my image sequences went clockwise and others counter-clockwise, so when cropping the vertical drift I had to adjust the relative locations of the images to account for the direction in which the panorama was stitched. That is, I adjusted the signs of the x- and y-deltas to account for the camera's direction of rotation.
- I noticed that the sample solution made intelligent decisions about when to crop based on the slope of the vertical drift across the images. I implemented similar logic: crop vertically only when the slope was negative (leaving a croppable area below the bottom row of the lowest image), and leave the image uncropped when the slope was positive. This logic worked well across all of my images; only the tutoring images had positive slope.
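The crop decision described above can be sketched roughly as follows. This is a hypothetical reconstruction (the helper name `vertical_crop_bounds` and the convention of half-open row bounds relative to the canvas top are my own assumptions, not the project's code): given each image's vertical offset, estimate the drift slope and, when it is negative, keep only the horizontal band covered by every image.

```python
def vertical_crop_bounds(y_offsets, image_height):
    """Return (top, bottom) crop rows, relative to the canvas top,
    cropping only when the vertical-drift slope is negative."""
    lo, hi = min(y_offsets), max(y_offsets)
    canvas_h = hi - lo + image_height
    slope = (y_offsets[-1] - y_offsets[0]) / max(len(y_offsets) - 1, 1)
    if slope < 0:
        # keep only the rows covered by every image in the sequence
        return hi - lo, image_height
    return 0, canvas_h
```

For a sequence drifting downward by 2 pixels per image (`[0, -2, -4]`, height 10), this keeps rows 4 through 9 of the canvas, the band shared by all three images.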
- I did not implement the following features, but I have some thoughts on them:
  - Exposure differences: when adjacent images had different exposures, the brightness fluctuated across the final stitched image. You can see this in the sample image, or in the image taken in my office with the Kaidan head. I had two thoughts on how to remove the fluctuation.
    - The first is to minimize the sum of squared brightness differences across pixels that lie on top of one another after the displacement has been applied, before blending. This can be done by shifting the brightness of the incoming image by a single constant across all pixels before blending it. A good initial guess for that constant is the average of the brightness differences over the overlapping pixels, which can then be refined as necessary. This has the effect of taking a brightness "threshold" at the overlapped area of the first image and adjusting each subsequent image by that threshold plus any accumulated brightness changes across the panorama. Of course, "brightness drift" might accumulate with this method, and a correction similar to the one we applied for vertical drift would be needed (i.e. distribute the brightness error equally across all images). Another downside is that the Lucas-Kanade step does not benefit from the brightness adjustment, since the adjustment is applied afterwards. One fix is to bootstrap the algorithm in a loop: find the initial Lucas-Kanade estimate, correct brightness, and repeat until convergence.
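The constant-offset seed described above is easy to compute; in fact, for a single additive constant, the mean brightness difference over the overlap is exactly the least-squares minimizer, so no further refinement is needed in that restricted case. A minimal sketch (hypothetical helper names, grayscale float images assumed):

```python
import numpy as np

def brightness_offset(base, new, overlap_mask):
    """Mean brightness difference over overlapping pixels; this is the
    least-squares-optimal additive constant for the overlap region."""
    diff = base.astype(np.float64) - new.astype(np.float64)
    return diff[overlap_mask].mean()

def correct(new, offset):
    """Shift the incoming image by the constant before blending."""
    return np.clip(new.astype(np.float64) + offset, 0, 255)
```

Each incoming image would be corrected against the already-blended panorama, carrying the accumulated offset forward as described above.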
    - The second is to automatically normalize object brightness across all images containing the object. Perhaps the target brightness could be the mean brightness of the object across all affected images. Of course this would require object recognition; two options would be (a) human input to identify objects and (b) SIFT to identify common objects across images.
  - Ghosted-object removal: some of my images contained ghosted objects (e.g. the moving heads and the cars seen through the window in the carpool image). One way to remove these objects would be to create a mask (e.g. using the intelligent scissors tool from the previous project) and apply full transparency to the area within that mask in the images containing the ghosts to be removed.
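The masking idea above drops straight into a weighted (feathered) blend: pixels inside the ghost mask simply get zero alpha, so the ghosted object never contributes to the output. A sketch under those assumptions (hypothetical function, grayscale float images, per-image feather weights supplied by the caller):

```python
import numpy as np

def blend_with_ghost_mask(images, alphas, ghost_masks):
    """Alpha-weighted blend in which masked (ghost) pixels are made
    fully transparent before accumulation."""
    num = np.zeros(images[0].shape, dtype=np.float64)
    den = np.zeros(images[0].shape, dtype=np.float64)
    for img, a, m in zip(images, alphas, ghost_masks):
        a = np.where(m, 0.0, a)      # full transparency inside the mask
        num += a * img
        den += a
    return num / np.maximum(den, 1e-9)  # avoid division by zero
```

Where a ghost is masked out of one image, the overlapping image supplies the pixel alone; elsewhere the usual feathered average applies.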
  - AutoStitch: I did not implement AutoStitch, but I did try it out. It is a remarkable program; I was able to stitch a landscape scene of 18 images into an attractive panorama in under 30 seconds. I had two ideas for how this functionality could be used in novel applications.
    - In the mid-20th century, Stirling Moss drove 1,000 miles in 10 hours at an average speed of over 100 miles per hour on small Italian country roads in his Mercedes race car. Find out more here (listen to the last part of the show). The only way he succeeded was by practicing the route in advance, with his navigator taking detailed notes of what would come up along the way. If cameras could be mounted along the sides of the car during the preparatory drive, AutoStitch could assemble two extremely long panoramas, one from each side of the car. The driver could then train for the drive by using the panoramas to learn the course. Similarly, the navigator could use the panoramas on race day to tell the driver what is coming up next.
    - Tourists who travel by train could set up time-lapse cameras pointing out the window. AutoStitch could compose the images into a mosaic, which could then be rendered as a video: one long, slow pan from one side to the other. Then, rather than watching slides in a slide projector, relatives could fall asleep watching videos of the train ride.
See the project's homepage here.