Panorama

CSE 455: Computer Vision

Project 2 Artifact

Joey Khwaja and Ian Johnson

Test sequence

We warped the images, found feature matches, and computed the transformation between images by comparing the x and y coordinates of matched features in one image to their counterparts in the second. We then blended the images by summing each pixel weighted by a blending function with linear falloff at both edges, and divided by the accumulated weight to get the final intensity of each pixel.
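The feathered blend described above can be sketched roughly as follows. This is an illustrative reconstruction, not our actual code; the function names and the integer-offset compositing are assumptions for the sake of a minimal example.

```python
import numpy as np

def feather_weight(width):
    """1-D weight that falls off linearly to zero toward both edges."""
    ramp = np.minimum(np.arange(width) + 1, width - np.arange(width))
    return ramp / ramp.max()

def blend(images, offsets, out_w, out_h):
    """Accumulate weighted pixels from each image, then normalize.

    images: list of (h, w, 3) float arrays; offsets: (ox, oy) integer
    placements in the output canvas (a simplification of real warping).
    """
    acc = np.zeros((out_h, out_w, 3))
    wsum = np.zeros((out_h, out_w))
    for img, (ox, oy) in zip(images, offsets):
        h, w, _ = img.shape
        wgt = np.tile(feather_weight(w), (h, 1))  # linear falloff across columns
        acc[oy:oy + h, ox:ox + w] += img * wgt[..., None]
        wsum[oy:oy + h, ox:ox + w] += wgt
    wsum[wsum == 0] = 1                           # avoid divide-by-zero outside coverage
    return acc / wsum[..., None]
```

In the overlap region, each output pixel is a weighted average of the contributing images, so seams fade smoothly instead of cutting hard.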

Our blending was not fully implemented: the left edges of images often had missing content, which manifested as a gradient to black, so we had to use the solution binaries to produce our artifacts. We believe this happened because we did not multiply the weight by the original alpha value, and therefore did not preserve the original weights.

Aside from this, our implementation followed the standard approach.

"stitch4" by Joey Khwaja and Ian Johnson, CSE 455 Winter 2012 (full size)

Sequence with Kaiden panorama head

Because it was raining, we had to use an umbrella to keep the camera dry while keeping it out of the photographs, which we managed. Using the tripod to keep the camera rotating around its center worked well.

It was difficult to keep the lighting values consistent between shots, so we had to adjust them in Photoshop to produce our final artifact. The panorama also does not line up at the edge because the ground was muddy and not level.

We used an umbrella.

"Grove" by Joey Khwaja and Ian Johnson, CSE 455 Winter 2012 (full size, 360° viewer)

Sequence taken by hand

By shooting in a static, roughly circular area and keeping a steady hand, I was able to get a relatively good panorama without a tripod.

I did have to wait for people to move out of the way between shots, however.

I used an already circular area.

"EE;CSE" by Joey Khwaja and Ian Johnson, CSE 455 Winter 2012 (full size, 360° viewer)

ROC comparison of our code vs. SIFT

These plots are meant to determine the optimal threshold for matching features.

ROC comparison for Yosemite - MOPS + SSD (RED), MOPS + ratio test (GREEN), SIFT + SSD (BLUE), SIFT + ratio test (VIOLET)
ROC comparison for graf - MOPS + SSD (RED), MOPS + ratio test (GREEN), SIFT + SSD (BLUE), SIFT + ratio test (VIOLET)

Both plots show that the ratio test outperforms plain SSD only with SIFT descriptors; with MOPS, the ratio test produces more false positives than SSD.
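The two scoring methods being compared can be sketched as below. This is an illustrative outline, not our project code; `match_features` and the descriptor shapes are assumed for the example.

```python
import numpy as np

def match_features(desc1, desc2, method="ratio"):
    """Match each descriptor in desc1 to its nearest neighbor in desc2.

    method="ssd":   score is the SSD to the best match.
    method="ratio": score is best SSD / second-best SSD (the ratio test).
    Returns a list of (index_into_desc2, score); lower score = better match.
    """
    matches = []
    for d in np.asarray(desc1, dtype=float):
        ssd = np.sum((np.asarray(desc2, dtype=float) - d) ** 2, axis=1)
        order = np.argsort(ssd)
        best, second = ssd[order[0]], ssd[order[1]]
        if method == "ssd":
            score = best
        else:
            # ambiguous matches (best close to second-best) score near 1
            score = best / second if second > 0 else 1.0
        matches.append((int(order[0]), float(score)))
    return matches
```

The ratio test penalizes features whose best and second-best matches are similar, which rejects ambiguous matches that raw SSD would accept.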

Threshold comparison for Yosemite - MOPS + SSD (RED), MOPS + ratio test (GREEN)
Threshold comparison for graf - MOPS + SSD (RED), MOPS + ratio test (GREEN)

The Yosemite images produced a clear ideal threshold, while the graf images produced a less well-defined one.
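The threshold sweep behind these plots can be sketched as follows. This is a minimal illustration, assuming per-match scores (lower = more confident) and ground-truth correct/incorrect labels; the function name is hypothetical.

```python
import numpy as np

def roc_points(scores, labels):
    """Sweep the acceptance threshold over all match scores.

    scores: lower = more confident match; labels: 1 = correct match, 0 = not.
    Returns a list of (false positive rate, true positive rate) pairs,
    one per candidate threshold.
    """
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels)
    n_pos = max(int(np.sum(labels == 1)), 1)
    n_neg = max(int(np.sum(labels == 0)), 1)
    pts = []
    for t in np.sort(scores):
        accepted = scores <= t                       # matches kept at this threshold
        tp = int(np.sum(accepted & (labels == 1)))
        fp = int(np.sum(accepted & (labels == 0)))
        pts.append((fp / n_neg, tp / n_pos))
    return pts
```

A "clear ideal threshold" corresponds to a sharp knee in this curve: some threshold accepts nearly all correct matches before admitting many false positives.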