Panorama of GreenLake

CSE 455: Computer Vision

Project 2 Artifact

Skyler Peterson and Tim Plummer

Test sequence

The test image shown below was generated entirely by our code, without manual adjustments.
Overall, feature matching and blending were accomplished with high accuracy compared
with the given solution image.

There are small translations in the x and y directions, on the order of a pixel,
but the image quality is equal to that of the given image, so we believe the
transformation is consistent, with no apparent negative side effects.

For the test image, nothing was done out of the ordinary.

"Test Image Near the HUB" by Skyler Peterson and Tim Plummer, CSE 455 Winter 2012 (full size, 360° viewer)

Sequence with Kaidan panorama head

Very little worked well as far as the physics of the scene was concerned; the best part was
simply how scenic this location was.

Because this image was taken over a lake on a cloudy day, the turbulent water
and clouds created many random, unmatchable features. About a third of the original images
did not match at all and had to be run through the alignPair function individually, multiple
times, in order to find good starting x, y translations.
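The translation search that alignPair performs can be pictured as a small RANSAC loop over matched feature positions. The sketch below is illustrative only: it assumes a pure-translation model and hypothetical matched point arrays, and the function name and parameters are not the course skeleton's actual signature.

```python
import numpy as np

def align_pair_translation(pts1, pts2, n_iters=2000, tol=2.0, rng=None):
    """RANSAC estimate of a pure (dx, dy) translation from matched points.

    pts1, pts2: (N, 2) arrays of matched feature coordinates (hypothetical).
    Returns the inlier-averaged translation taking pts1 onto pts2.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    best_inliers = np.zeros(len(pts1), dtype=bool)
    for _ in range(n_iters):
        i = rng.integers(len(pts1))          # one match fully determines a translation
        t = pts2[i] - pts1[i]
        err = np.linalg.norm(pts1 + t - pts2, axis=1)
        inliers = err < tol                  # matches consistent with this hypothesis
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # refit on all inliers for the final answer
    return (pts2[best_inliers] - pts1[best_inliers]).mean(axis=0)
```

With mostly random, unmatchable features, the inlier set stays tiny and the loop never finds a translation it can trust, which is why those pairs needed hand-tuned starting values.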

Since this was the artifact we put up for voting, we had more patience individually editing
the pair-list file to get a high-quality image.

"Green Lake on the Docks" by Skyler Peterson and Tim Plummer, CSE 455 Winter 2012 (full size, 360° viewer)

Bonus sequence with Kaidan panorama head!

This was our first image series, and we found that it did very well on its own. The image below
is the first result we got when running the code, and we were happy enough to keep it.

There are some ghosting artifacts, such as on the statue in the foreground of the center frame.

Nothing non-standard was done; this was probably our most "standard" panorama.

"CSE 6th Floor Balcony" by Skyler Peterson and Tim Plummer, CSE 455 Winter 2012 (full size, 360° viewer)

Sequence taken by hand

This image worked surprisingly well considering we simply shot one series and didn't test
it right away. The image shown is exactly the output produced with 2000 RANSAC iterations per image pair.
Its success may be largely due to the high number of features found, higher than in any other panorama.

There are a couple of minor blurs in the image, and one point appears to have suffered from
rotational error.

Nothing non-standard here! Move along, please.

"CSE Courtyard" by Skyler Peterson and Tim Plummer, CSE 455 Winter 2012 (full size, 360° viewer)

Harris images (gamma multiplied by two)

Yosemite Harris operator image
Graf Harris operator image

ROC comparison of our code vs. SIFT


ROC comparison for Yosemite

The graph on the left shows how well our feature-matching implementation does compared to the SIFT implementation on the Yosemite image set. It also shows that the ratio test gives a more optimistic score than the SSD test in both cases, although the difference is more dramatic for the MOPS code.

ROC comparison for graf

This graph is similar to the one above, but for the graf image set. The MOPS code did not work as well on these images as it did on the Yosemite set, but the curve is still well above the diagonal and therefore gives a decent result.
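The two scoring rules behind these ROC curves can be sketched briefly. For each feature descriptor, the SSD test scores the distance to its nearest neighbor, while the ratio test divides that distance by the second-nearest distance; sweeping a threshold over either score traces out an ROC curve. The function below is a hypothetical illustration, not the project's actual matching code.

```python
import numpy as np

def match_scores(desc1, desc2):
    """For each descriptor in desc1, return (SSD score, ratio score).

    SSD score:   squared distance to the nearest descriptor in desc2.
    Ratio score: nearest SSD divided by second-nearest SSD; values well
                 below 1 indicate a distinctive, likely-correct match.
    """
    # pairwise squared distances, shape (len(desc1), len(desc2))
    d = ((desc1[:, None, :] - desc2[None, :, :]) ** 2).sum(axis=2)
    d.sort(axis=1)                      # per row: nearest distance first
    return d[:, 0], d[:, 0] / d[:, 1]
```

The ratio test tends to score better because it suppresses ambiguous matches (nearest and second-nearest almost equally close), which a raw SSD threshold cannot distinguish.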

Extra Credit

An additional computeFeatures option, "dogComputeFeatures", was written; it computes a difference of Gaussians to find good feature locations. This method detects edges in addition to corners, which increases the number of feature points but doesn't always find the best features to match against.
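The difference-of-Gaussians idea can be sketched in a few lines: blur the image at two nearby scales, subtract, and keep pixels with a strong response. The sigma, scale factor, and threshold below are illustrative assumptions, not the values used in dogComputeFeatures.

```python
import numpy as np

def gaussian_blur(img, sigma):
    """Separable Gaussian blur via 1-D convolution along each axis."""
    r = int(3 * sigma)
    x = np.arange(-r, r + 1)
    k = np.exp(-x * x / (2.0 * sigma * sigma))
    k /= k.sum()
    out = np.apply_along_axis(lambda v: np.convolve(v, k, mode='same'), 1, img)
    return np.apply_along_axis(lambda v: np.convolve(v, k, mode='same'), 0, out)

def dog_features(img, sigma=1.0, k=1.6, thresh=0.05):
    """Difference-of-Gaussians response; strong responses mark candidates."""
    dog = gaussian_blur(img, sigma) - gaussian_blur(img, k * sigma)
    return np.abs(dog) > thresh       # boolean mask of candidate feature pixels
```

Because the DoG is a band-pass filter, it fires on blobs and also along edges, which matches the observation above that this detector finds more points but not always the most matchable ones.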