CSE/EE 576 Computer Vision  Spring 2007

Project 2:  Feature Detection and Matching

Date released: Wednesday, April 18 2007

Date due: Friday, May 4 2007 11:59pm
Late policy: 5% off per day late, through Sunday 05/06/2007

Download Indri's slides (results from her project)


Synopsis

In this project, you will write code to detect discriminating features in an image and find the best matching features in other images.  Your features should be reasonably invariant to translation, rotation, and illumination, and (to a lesser degree) scale, and you'll evaluate their performance on a suite of benchmark images.  Scale is less important because it's a lot harder; however, any descriptor that is invariant to the other factors will be slightly scale invariant as well. You're not expected to implement SIFT!

To help you visualize the results and debug your program, we provide a working user interface that displays detected features and best matches in other images.  We also provide sample feature files that were generated using SIFT, the current best of breed technique in the vision community, for comparison.

Description

The project has three parts:  feature detection, description, and matching.

Feature detection

In this step, you will identify points of interest in the image using the Harris corner detection method.  

For each point in the image, consider a window of pixels around that point.  Compute the Harris matrix M for that point, defined as

    M = sum over all (x,y) in w of  w(x,y) * | Ix^2    Ix*Iy |
                                             | Ix*Iy   Iy^2  |

where the summation is over all pixels (x,y) in the window w.   The weights w(x,y) should be chosen to be circularly symmetric (for rotation invariance).  A common choice is to use a 3x3 or 5x5 Gaussian mask. Ix and Iy indicate the partial derivatives in x and y.
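As one way to organize this computation, the sketch below accumulates the three distinct entries of M at a single pixel from precomputed gradient images, using a Gaussian-weighted 5x5 window. The names here (HarrisM, harrisMatrix) are illustrative and not part of the skeleton code; images are assumed to be row-major arrays.

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// M is symmetric, so only three entries are needed: M = [a b; b c].
struct HarrisM { double a, b, c; };

// Accumulate the Harris matrix at pixel (px, py) from gradient images
// Ix, Iy (row-major, width w, height h), weighting the 5x5 window with
// a circularly symmetric Gaussian (sigma = 1.0 here).
HarrisM harrisMatrix(const std::vector<double>& Ix,
                     const std::vector<double>& Iy,
                     int w, int h, int px, int py) {
    HarrisM M{0.0, 0.0, 0.0};
    const double sigma = 1.0;
    for (int dy = -2; dy <= 2; ++dy) {
        for (int dx = -2; dx <= 2; ++dx) {
            int x = px + dx, y = py + dy;
            if (x < 0 || x >= w || y < 0 || y >= h) continue;  // skip off-image pixels
            double g = std::exp(-(dx * dx + dy * dy) / (2.0 * sigma * sigma));
            double ix = Ix[y * w + x], iy = Iy[y * w + x];
            M.a += g * ix * ix;   // sum of w(x,y) * Ix^2
            M.b += g * ix * iy;   // sum of w(x,y) * Ix*Iy
            M.c += g * iy * iy;   // sum of w(x,y) * Iy^2
        }
    }
    return M;
}
```

Note that the Gaussian weights need not be normalized for corner detection, since scaling M scales the response R uniformly across the image.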

To find interest points, first compute the corner response R

    R = det(M) - k * (trace(M))^2

(Try k = 0.05)

Once you've computed R for every point in the image, choose points where R is above a threshold.  You also want it to be a local maximum in at least a 3x3 neighborhood.
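These two steps can be sketched as follows, assuming the symmetric Harris matrix is stored as [a b; b c] and the responses R have already been computed into a row-major image. The function names are illustrative only.

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Corner response R = det(M) - k * trace(M)^2 for M = [a b; b c],
// with the suggested k = 0.05.
double cornerResponse(double a, double b, double c, double k = 0.05) {
    double det = a * c - b * b;
    double trace = a + c;
    return det - k * trace * trace;
}

// Keep (x, y) as an interest point if its response exceeds the threshold
// and is a strict local maximum over its 3x3 neighborhood in the response
// image R (row-major, width w, height h).
bool isInterestPoint(const std::vector<double>& R, int w, int h,
                     int x, int y, double threshold) {
    double r = R[y * w + x];
    if (r <= threshold) return false;
    for (int dy = -1; dy <= 1; ++dy)
        for (int dx = -1; dx <= 1; ++dx) {
            int nx = x + dx, ny = y + dy;
            if ((dx != 0 || dy != 0) &&
                nx >= 0 && nx < w && ny >= 0 && ny < h &&
                R[ny * w + nx] >= r)
                return false;  // a neighbor is at least as strong
        }
    return true;
}
```

For a strong corner (both eigenvalues large, e.g. a = c = 1, b = 0) R is large and positive; along an edge (one eigenvalue near zero) R goes negative, which is why thresholding R suppresses edges.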

Feel free to implement other features as well if you want.

Feature description

Now that you've identified points of interest, the next step is to come up with a descriptor for the feature centered at each interest point.  This descriptor will be the representation you'll use to compare features in different images to see if they match.

For starters, try using a small square window (say 5x5) as the feature descriptor.  This should be very easy to implement and should work well when the images you're comparing are related by a translation.
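A minimal sketch of that baseline descriptor, assuming a row-major grayscale image (the function name is illustrative, not from the skeleton):

```cpp
#include <cassert>
#include <vector>

// Copy the 5x5 intensity window centered at (px, py) into a 25-element
// descriptor vector. Out-of-bounds pixels are filled with 0. This is only
// translation invariant: rotating or rescaling the image scrambles it.
std::vector<double> windowDescriptor(const std::vector<double>& img,
                                     int w, int h, int px, int py) {
    std::vector<double> d;
    d.reserve(25);
    for (int dy = -2; dy <= 2; ++dy)
        for (int dx = -2; dx <= 2; ++dx) {
            int x = px + dx, y = py + dy;
            bool inside = (x >= 0 && x < w && y >= 0 && y < h);
            d.push_back(inside ? img[y * w + x] : 0.0);
        }
    return d;
}
```

A common first improvement toward illumination invariance is to subtract the window's mean and divide by its standard deviation before comparing descriptors.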

Next, try implementing a better feature descriptor.  You can define it however you want, but you should design it to be robust to changes in position, orientation (i.e., rotation), and illumination.  You are welcome to use techniques described in lecture (e.g., detecting dominant orientations, using image pyramids, using a disc instead of a square window), or come up with your own ideas. This is the main challenge of the assignment.

Feature matching

Now that you've detected and described your features, the next step is to write code to match them, i.e., given a feature in one image, find the best matching feature in one or more other images.

The skeleton code provided finds the SSD between all feature descriptors in a pair of images. The code declares a match between each feature and its best match (nearest neighbor) in the second image.

For each feature in the first image, use SSD to find the best match (or no match) in the second image. The idea here is to find a good threshold for deciding if a match exists. There are two methods you might use to solve this problem: 

1.  use a threshold on the match score
2.  compute (score of the best feature match)/(score of the second best feature match), and threshold on that

Testing

Now you're ready to go!  Using the UI and skeleton code that we provide, or your own Matlab code, you can load in a set of images, view the detected features, and visualize the feature matches that your algorithm computes. Matlab users may want to scope out the C++ code for tips on comparing the features.

We are providing a set of benchmark images to be used to test the performance of your algorithm as a function of different types of controlled variation (i.e., rotation, scale, illumination, perspective, blurring).  For each of these images, we know the correct transformation and can therefore measure the accuracy of each of your feature matches.  This is done using a routine that we supply in the skeleton code.

Everybody

  1. Download some image sets: leuven, bikes, graf, wall
    Included with these images are

    • SIFT feature files, with extension .key
    • database files used by the skeleton code, .kdb
    • homography files, containing the correct transformation between each pair of images, named HXtoYp where X and Y are the indices of the images. For example, the transformation between img2.ppm and img3.ppm is found in H2to3p.
  2. Easier translation sequences can be found on
    http://cat.middlebury.edu/stereo/data.html

C++ Users

Matlab Users

You are a little more on your own here; you will probably find it useful to look at the skeleton code. The key element of testing is to take two images, generate lists of feature descriptors for them, and match them. For each feature match, measure how far the matched location is from where the correct transformation would place it.  Look at the routine that applies the transformation matrix (applyHomography in the C++ code).
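For reference, applying a 3x3 homography to a point amounts to a matrix-vector product in homogeneous coordinates followed by a perspective divide. The sketch below shows the idea; the actual signature of applyHomography in the skeleton code may differ.

```cpp
#include <cassert>

// Map the point (x, y) through the 3x3 homography H (row-major) and
// write the result to (ox, oy). The divide by w is what makes perspective
// transformations (not just affine ones) representable.
void applyH(const double H[9], double x, double y, double& ox, double& oy) {
    double xw = H[0] * x + H[1] * y + H[2];
    double yw = H[3] * x + H[4] * y + H[5];
    double w  = H[6] * x + H[7] * y + H[8];
    ox = xw / w;
    oy = yw / w;
}
```

To score a match between img2 and img3, map the feature location in img2 through H2to3p and compare the result against the matched feature's location in img3.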

What to Turn In

Download the .doc template for the report here. You are free to use other text processing tools such as LaTeX; however, make sure that your report has the same sections.
The grading guidelines for project 2 are here.

In addition to your source code and executable, turn in a report describing your approach and results.

The report can be a Word or PDF document.
Email Indri your writeup and source code by Friday, May 4 2007 11:59pm. Zip up your report, source code, and images into a file with your name as the name of the file, e.g. JohnDoe.zip.

Extensions

For those who would like to challenge themselves, here is a list of suggestions for extending the program for a small amount of extra credit. You are encouraged to come up with your own extensions as well!