Assigned: Thursday, January 24, 2008
Due: Wednesday, February 6, 2008 (by 11:59pm)
Demo: Thursday, February 7, 2008
Artifact Due: Friday, February 8, 2008 (by 11:59pm)
Project Head TA: Ryan Kaminsky (send your questions here first!)
Project Secondary TA: Noah Snavely
In this project, you will implement a system to combine a series of photographs into a 360° panorama (see panorama below). Your software will detect discriminating features in the images, find the best matching features in the other images, automatically align the photographs (determine their overlap and relative positions) and then blend the resulting photos into a single seamless panorama. You will then be able to view the resulting panorama inside an interactive Web viewer. To start your project, you will be supplied with some test images and skeleton code you can use as the basis of your project and instructions on how to use the viewer.
Because this project is more extensive, you may work in groups of two. Use this link to register your group on the grouper tool.
This project can be thought of as two major components:
1. Feature Detection and Matching
2. Panorama Mosaic Stitching
In addition to your source code and executables, turn in a web page describing your approach and results. In particular, this portion of the web page should contain the following:
The web-page should be placed in the project2/artifact directory along with all the images in JPEG format. If you are unfamiliar with HTML you can use any web-page editor such as FrontPage, Word, or Visual Studio 7.0 to make your web-page. The KompoZer HTML editor is easy to use and highly recommended. Here are some webpage design tips.
In this component, you will write code to detect discriminating features in an image and find the best matching features in other images. Because features should be reasonably invariant to translation, rotation, illumination, and scale, you'll use the Multi-Scale Oriented Patch (MOPS) descriptor and you'll evaluate its performance on a suite of benchmark images. As part of the extra credit you'll have the option of creating your own feature descriptors. If there are enough entries we'll rank the performance of features that students in the class come up with, and compare them with the current state-of-the-art.
For the second part of the assignment, you will apply your features to automatically stitch images into a panorama.
To help you visualize the results and debug your program, we provide a working user interface that displays detected features and best matches in other images. We also provide sample feature files that were generated using SIFT, the current best of breed technique in the vision community, for comparison.
This component has three parts: feature detection, description, and matching.
In this step, you will identify points of interest in the image using the Harris corner detection method. The steps are as follows (see the lecture slides/readings for more details). For each point in the image, consider a window of pixels around that point. Compute the Harris matrix H for that point, defined as
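(A standard form of this matrix, written with image gradients I_x, I_y and window weights w(p); the exact symbols here are an assumption chosen to match the description that follows:)

$$
H = \sum_{p \in \text{window}} w(p)
\begin{bmatrix}
I_x^2(p) & I_x(p)\, I_y(p) \\
I_x(p)\, I_y(p) & I_y^2(p)
\end{bmatrix}
$$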
where the summation is over all pixels p in the window. The weights should be chosen to be circularly symmetric (for rotation invariance). A common choice is to use a 3x3 or 5x5 Gaussian mask.
Note that H is a 2x2 matrix. To find interest points, first compute the corner strength function
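Two common choices for the corner strength are the harmonic mean of the eigenvalues of H and the classic Harris response; which of these the assignment intends is an assumption here, so check the lecture slides:

$$
c = \frac{\det(H)}{\operatorname{tr}(H)} = \frac{\lambda_1 \lambda_2}{\lambda_1 + \lambda_2}
\qquad\text{or}\qquad
c = \det(H) - k\,\operatorname{tr}(H)^2,\quad k \approx 0.04 \text{ to } 0.06
$$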
Once you've computed c for every point in the image, choose points where c is above a threshold. You also want c to be a local maximum in at least a 3x3 neighborhood.
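As a sketch of this selection step, the loop below thresholds the response and keeps only strict 3x3 local maxima. It is self-contained and does not use the skeleton's ImageLib types; the row-major float array representation and the threshold value are illustrative assumptions.

#include <algorithm>
#include <utility>
#include <vector>

// Select interest points: corner strength above a threshold AND a strict
// local maximum in its 3x3 neighborhood. 'c' is the corner strength image
// stored row-major (width*height floats). Illustrative sketch only; the
// real skeleton uses its own image classes.
std::vector<std::pair<int,int>> selectInterestPoints(
    const std::vector<float>& c, int width, int height, float threshold)
{
    std::vector<std::pair<int,int>> points;
    for (int y = 1; y < height - 1; y++) {
        for (int x = 1; x < width - 1; x++) {
            float v = c[y * width + x];
            if (v < threshold) continue;
            bool isMax = true;
            for (int dy = -1; dy <= 1 && isMax; dy++) {
                for (int dx = -1; dx <= 1; dx++) {
                    if (dx == 0 && dy == 0) continue;
                    if (c[(y + dy) * width + (x + dx)] >= v) { isMax = false; break; }
                }
            }
            if (isMax) points.push_back({x, y});   // (column, row) of the interest point
        }
    }
    return points;
}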
Now that you've identified points of interest, the next step is to come up with a descriptor for the feature centered at each interest point. This descriptor will be the representation you'll use to compare features in different images to see if they match.
For this project you'll use the MOPS descriptor.
At a high level, MOPS samples a small (8x8) grid of pixels from a larger window (roughly 40x40) around the interest point, using a coarse spacing so that the patch sees a blurred, downsampled view of the neighborhood. The patch is sampled relative to the feature's dominant orientation and is then intensity-normalized (zero mean, unit standard deviation) for invariance to affine changes in brightness and contrast. See the lecture slides and the MOPS paper for the full details.
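A much-simplified descriptor sketch is shown below: it ignores the orientation handling and pyramid prefiltering that full MOPS includes, and simply subsamples an 8x8 grid around the point and normalizes it. The array-based image representation and the 5-pixel spacing are assumptions for illustration, not the skeleton's interface.

#include <algorithm>
#include <cmath>
#include <vector>

// Build a simplified 64-element patch descriptor around (cx, cy): sample an
// 8x8 grid with 'spacing' pixels between samples, then normalize to zero mean
// and unit standard deviation. Sketch only: real MOPS also rotates the patch
// to the feature's dominant orientation and samples a blurred/downsampled image.
std::vector<float> describePatch(const std::vector<float>& img,
                                 int width, int height,
                                 int cx, int cy, int spacing = 5)
{
    std::vector<float> d;
    d.reserve(64);
    for (int j = -4; j < 4; j++) {
        for (int i = -4; i < 4; i++) {
            int x = cx + i * spacing, y = cy + j * spacing;
            // Clamp to the image border so the sketch never reads out of bounds.
            x = std::max(0, std::min(width  - 1, x));
            y = std::max(0, std::min(height - 1, y));
            d.push_back(img[y * width + x]);
        }
    }
    float mean = 0.f;
    for (float v : d) mean += v;
    mean /= d.size();
    float var = 0.f;
    for (float v : d) var += (v - mean) * (v - mean);
    float stdev = std::sqrt(var / d.size()) + 1e-6f;   // avoid divide-by-zero on flat patches
    for (float& v : d) v = (v - mean) / stdev;
    return d;
}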
Now that you've detected and described your features, the next step is to write code to match them, i.e., given a feature in one image, find the best matching feature in one or more other images. This part of the feature detection and matching component is mainly designed to help you test out your feature descriptor. You will implement a more sophisticated feature matching mechanism in the second component when you do the actual image alignment for the panorama.
The simplest approach is the following: write a procedure that compares two features and outputs a score saying how well they match. For example, you could simply sum the absolute value of differences between the descriptor elements. Use this to compute the best match between one feature and a set of other features by evaluating the score for every candidate match. You can optionally explore faster matching algorithms for extra credit.
Your routine should return NULL if there is no good match in the other image(s). This requires that you make a binary decision as to whether a match is good or not. Implement two methods to solve this problem:
1. use a threshold on the match score
2. compute (score of the best feature match) / (score of the second best feature match), and threshold that ratio
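The sketch below illustrates both the absolute-difference match score from the preceding paragraph and the two thresholding methods in the list above. The descriptor representation and the threshold values are illustrative assumptions, not the skeleton's actual interfaces.

#include <cmath>
#include <limits>
#include <vector>

// Sum of absolute differences between two descriptors (lower = better match).
float matchScore(const std::vector<float>& a, const std::vector<float>& b)
{
    float s = 0.f;
    for (size_t i = 0; i < a.size(); i++) s += std::fabs(a[i] - b[i]);
    return s;
}

// Find the best match for 'query' among 'candidates'.
// Returns the index of the best candidate, or -1 if the match is rejected.
//   method 1: accept if the best score is below scoreThresh
//   method 2: accept if (best score / second-best score) is below ratioThresh
int bestMatch(const std::vector<float>& query,
              const std::vector<std::vector<float>>& candidates,
              int method, float scoreThresh = 10.f, float ratioThresh = 0.6f)
{
    int bestIdx = -1;
    float best = std::numeric_limits<float>::max(), second = best;
    for (size_t i = 0; i < candidates.size(); i++) {
        float s = matchScore(query, candidates[i]);
        if (s < best)        { second = best; best = s; bestIdx = (int)i; }
        else if (s < second) { second = s; }
    }
    if (bestIdx < 0) return -1;
    if (method == 1) return (best < scoreThresh) ? bestIdx : -1;
    return (second > 0.f && best / second < ratioThresh) ? bestIdx : -1;
}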
Now you're ready to go! Using the UI and skeleton code that we provide, you can load in a set of images, view the detected features, and visualize the feature matches that your algorithm computes.
We are providing a set of benchmark images to be used to test the performance of your algorithm as a function of different types of controlled variation (i.e., rotation, scale, illumination, perspective, blurring). For each of these images, we know the correct transformation and can therefore measure the accuracy of each of your feature matches. This is done using a routine that we supply in the skeleton code.
You should also test the matching against the images you will take for your panorama (described in next component).
Follow these steps to get started quickly:
Install FLTK. If you unzip FLTK to somewhere other than C:\, you'll have to change the project settings to look for the include and library files in the correct location. If you're using Linux, you don't need to download FLTK, since you can just use the libraries in uns/lib/.
Download the skeleton code here.
This code should work under both Windows and Linux.
Download some image sets: graf, leuven, bikes, wall. Included with these images are some SIFT feature files and image database files.
After compiling and linking the skeleton code, you will have an executable cse455. This can be run in several ways:
cse455
with no command-line options starts the GUI. Inside the GUI, you can load a query image and its corresponding feature file, as well as an image database file, and search the database for the image that best matches the query features. You can use the mouse buttons to select a subset of the features to use in the query.
Until you write your feature matching routine, the features are matched by minimizing the Euclidean distance between feature vectors.
cse455 computeFeatures imagefile featurefile [featuretype]
uses your feature detection routine to compute the features for imagefile and writes them to featurefile. featuretype specifies which of your types of features (if you choose to implement another feature for extra credit) to compute.
cse455 testMatch featurefile1 featurefile2 homographyfile [matchtype]
uses your feature matching routine to match the features in featurefile1 with the features in featurefile2. homographyfile contains the correct transformation between the points in the two images, specified by a 3-by-3 matrix. matchtype specifies which of your (at least two) types of matching algorithms to use.
cse455 testSIFTMatch featurefile1 featurefile2 homographyfile [matchtype]
is the same as above, but uses the SIFT file format.
cse455 benchmark imagedir [featuretype matchtype]
tests your feature finding and matching on all of the images in one of the four sets above. imagedir is the directory containing the image (and homography) files. This command will return the average pixel error when matching the first image in the set with each of the other five images.
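For example, a typical sequence of commands might look like the following (the file names here are placeholders, not the actual names shipped with the image sets):

cse455 computeFeatures img1.tga img1.f
cse455 computeFeatures img2.tga img2.f
cse455 testMatch img1.f img2.f img1to2.hom 2
cse455 benchmark graf/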
We have given you a number of classes and methods to help get you started. The only code you need to write is for your feature detection methods and your feature matching methods. Then, you should modify computeFeatures and matchFeatures in the file features.cpp to call the methods you have written.
Here is a list of suggestions for extending the program for extra credit. You are encouraged to come up with your own extensions as well!
Project2.exe is a command line program that requires arguments to work properly. Thus you need to run it from the command line, or from a shortcut to the executable that has the arguments specified in the "Target" field of the shortcut properties.
To run from the command line, click the Windows Start button and select "Run". Then enter "cmd" in the "Run" dialog and click "OK". A command window will pop up where you can type DOS commands. Use the DOS "cd" (change directory) command to navigate to the directory where Project2.exe is located. Then type "project2" followed by your arguments. If you do not supply any arguments, the program will print out information on what arguments it expects.
Another way to pass arguments to a program is to create a shortcut to it. To create a shortcut, right-click on the executable and drag to the location where you wish to place the shortcut. A menu will pop up when you let go of the mouse button. From the menu, select "Create Shortcut Here". Now right-click on the short-cut you've created and select "Properties". In the properties dialog select the "Shortcut" tab and add your arguments after the text in the "Target" field. Your arguments must be outside of the quotation marks and separated with spaces.
You can run the skeleton program from inside Visual Studio, just like you could with the last project. However, you will need to tell Visual Studio what arguments to pass. Here's how:
You will be checking out equipment (camera, tripod, and Kaidan head) in sets of two groups (four individuals total). Each group is responsible for writing all of its own code, but only one artifact needs to be turned in per group. Remember to bring extra batteries with you; these cameras drain batteries quickly.
Skip this step for the test data. Its camera parameters can be found in the sample commands in stitch2.txt, which is provided along with the skeleton code.
Camera | Resolution | Focal length | k1 | k2
Canon Powershot A10, tag CS30012716 | 480x640 | 678.21239 pixels | -0.21001 | 0.26169
Canon Powershot A10, tag CS30012717 | 480x640 | 677.50487 pixels | -0.20406 | 0.23276
Canon Powershot A10, tag CS30012718 | 480x640 | 676.48417 pixels | -0.20845 | 0.25624
Canon Powershot A10, tag CS30012927 | 480x640 | 671.16649 pixels | -0.19270 | 0.14168
Canon Powershot A10, tag CS30012928 | 480x640 | 674.82258 pixels | -0.21528 | 0.30098
Canon Powershot A10, tag CS30012929 | 480x640 | 674.79106 pixels | -0.21483 | 0.32286
test images | 384x512 | 595 pixels | -0.15 | 0.0
(Note: If you are using the skeleton software, save your images in (TrueVision) Targa format (.tga), since this is the only format the skeleton software can currently read. Also make sure the aspect ratio of the image (width vs. height) is either 4:3 or 3:4 (480x640 will do), which is the only aspect ratio supported by the skeleton software.)
Note: The skeleton code includes an image library, ImageLib, that is fairly general and complex. It is NOT necessary for you to peek extensively into this library! We have created some notes for you here.
[TODO] Compute the inverse map to warp the image by filling in the skeleton code in the warpSphericalField routine to:
(Note: You will have to use the focal length f estimates for the half-resolution images provided above (you can either take pictures and save them in small files or save them in large files and reduce them afterwards) . If you use a different image size, do remember to scale f according to the image size.)
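One common way to compute this inverse map is sketched below: for each pixel of the output (spherically warped) image, convert to spherical angles, form the corresponding ray, project the ray back into the original planar image, and apply the radial distortion model with k1 and k2. The image-center convention and the exact distortion form are assumptions; follow the lecture slides and the comments in the skeleton for the definitive version.

#include <cmath>

// For one output pixel (x, y) of the spherically warped image, compute the
// source coordinates (xSrc, ySrc) in the original planar image.
// f is the focal length in pixels; k1, k2 are radial distortion coefficients.
// Sketch only: the centering convention and distortion form are assumptions.
void sphericalToPlanar(float x, float y, int width, int height,
                       float f, float k1, float k2,
                       float& xSrc, float& ySrc)
{
    // Angles on the unit sphere, measured from the image center.
    float theta = (x - 0.5f * width)  / f;
    float phi   = (y - 0.5f * height) / f;

    // Direction of the corresponding ray.
    float X = std::sin(theta) * std::cos(phi);
    float Y = std::sin(phi);
    float Z = std::cos(theta) * std::cos(phi);

    // Perspective projection back onto the planar image (normalized coordinates).
    float xt = X / Z;
    float yt = Y / Z;

    // Radial distortion.
    float r2 = xt * xt + yt * yt;
    float scale = 1.0f + k1 * r2 + k2 * r2 * r2;
    xt *= scale;
    yt *= scale;

    // Back to pixel coordinates.
    xSrc = f * xt + 0.5f * width;
    ySrc = f * yt + 0.5f * height;
}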
To do this, you will have to implement a feature-based translational motion estimation. The skeleton for this code is provided in FeatureAlign.cpp. The main routines that you will be implementing are:
int alignPair(const FeatureSet &f1, const FeatureSet &f2, const vector<FeatureMatch> &matches, MotionModel m, float f, int nRANSAC, double RANSACthresh, CTransform3x3& M);

int countInliers(const FeatureSet &f1, const FeatureSet &f2, const vector<FeatureMatch> &matches, MotionModel m, float f, CTransform3x3 M, double RANSACthresh, vector<int> &inliers);

int leastSquaresFit(const FeatureSet &f1, const FeatureSet &f2, const vector<FeatureMatch> &matches, MotionModel m, float f, const vector<int> &inliers, CTransform3x3& M);
AlignPair takes two feature sets, f1 and f2, the list of feature matches obtained from the feature detection and matching component (described above), and a motion model (described below), and estimates an inter-image transform matrix M. It is therefore similar to the evaluateMatch function in Project 1, except that now the transformation is being computed rather than evaluated. For this project, the enum MotionModel only takes on the value eTranslate.
AlignPair uses RANSAC (RANdom SAmple Consensus) to pull out a minimal set of feature matches (one match for this project), estimates the corresponding motion (alignment), and then invokes countInliers to count how many of the feature matches agree with the current motion estimate. After repeated trials, the motion estimate with the largest number of inliers is used to compute a least squares estimate for the motion, which is then returned in the motion estimate M.
CountInliers is similar to evaluateMatch, except that rather than computing the average Euclidean distance, it counts the number of matches whose distance is below RANSACthresh. It also returns a list of inlier match ids.
LeastSquaresFit computes a least squares estimate for the translation using all of the matches previously estimated as inliers. It returns the resulting translation estimate in the last column of M.
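To make the flow of these three routines concrete, here is a compact, self-contained sketch of RANSAC for a pure translation. The tiny Point, Match, and Translation structs stand in for the skeleton's FeatureSet, FeatureMatch, and CTransform3x3 types and are not the real classes; the iteration count and threshold are illustrative.

#include <cmath>
#include <cstdlib>
#include <vector>

// Minimal stand-ins for the skeleton's types (illustrative only).
struct Point { float x, y; };
struct Match { int id1, id2; };              // indices into the two point lists
struct Translation { float tx = 0, ty = 0; };

// Count matches consistent with translation t, within 'thresh' pixels, and
// record their indices (the role countInliers plays).
static int countInliersSketch(const std::vector<Point>& p1, const std::vector<Point>& p2,
                              const std::vector<Match>& matches, const Translation& t,
                              double thresh, std::vector<int>& inliers)
{
    inliers.clear();
    for (size_t i = 0; i < matches.size(); i++) {
        const Point& a = p1[matches[i].id1];
        const Point& b = p2[matches[i].id2];
        float dx = a.x + t.tx - b.x, dy = a.y + t.ty - b.y;
        if (std::sqrt(dx * dx + dy * dy) < thresh) inliers.push_back((int)i);
    }
    return (int)inliers.size();
}

// Least-squares translation from the inlier matches: for a pure translation
// this is just the average displacement (the role leastSquaresFit plays).
static Translation leastSquaresFitSketch(const std::vector<Point>& p1, const std::vector<Point>& p2,
                                         const std::vector<Match>& matches, const std::vector<int>& inliers)
{
    Translation t;
    for (int i : inliers) {
        t.tx += p2[matches[i].id2].x - p1[matches[i].id1].x;
        t.ty += p2[matches[i].id2].y - p1[matches[i].id1].y;
    }
    if (!inliers.empty()) { t.tx /= inliers.size(); t.ty /= inliers.size(); }
    return t;
}

// RANSAC loop (the role alignPair plays): a single match is a minimal sample
// for a translation; keep the hypothesis with the most inliers, then refit.
Translation alignPairSketch(const std::vector<Point>& p1, const std::vector<Point>& p2,
                            const std::vector<Match>& matches,
                            int nRANSAC = 200, double thresh = 4.0)
{
    if (matches.empty()) return Translation();
    std::vector<int> inliers, bestInliers;
    for (int iter = 0; iter < nRANSAC; iter++) {
        const Match& m = matches[std::rand() % matches.size()];
        Translation t;
        t.tx = p2[m.id2].x - p1[m.id1].x;
        t.ty = p2[m.id2].y - p1[m.id1].y;
        if (countInliersSketch(p1, p2, matches, t, thresh, inliers) > (int)bestInliers.size())
            bestInliers = inliers;
    }
    return leastSquaresFitSketch(p1, p2, matches, bestInliers);
}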
[TODO] You will have to fill in the missing code in alignPair to:
[TODO] Then, resample each image to its final location and blend it with its neighbors (AccumulateBlend, NormalizeBlend). Try a simple feathering function as your weighting function (see mosaics lecture slide on "feathering") (this is a simple 1-D version of the distance map described in [Szeliski & Shum]). For extra credit, you can try other blending functions or figure out some way to compensate for exposure differences. In NormalizeBlend, remember to set the alpha channel of the resultant panorama to opaque!
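As an illustration of simple feathering, the weight below ramps linearly from 0 at an image's left/right edges to 1 at its center; each warped pixel is accumulated as weight-times-color plus the weight itself in the alpha channel, and a final pass divides by the accumulated weight. This is a sketch of the idea under those assumptions, not the skeleton's actual AccumulateBlend/NormalizeBlend signatures.

#include <cmath>

// Horizontal feather weight for column x of an image of width w:
// 0 at the left/right edge, rising linearly to 1 at the center (a 1-D "hat").
inline float featherWeight(int x, int w)
{
    if (w <= 1) return 1.0f;
    float half = 0.5f * (w - 1);
    return 1.0f - std::fabs(x - half) / half;
}

// Accumulate stage (per warped pixel), conceptually:
//   accumR += wgt * r;  accumG += wgt * g;  accumB += wgt * b;  accumA += wgt;
// Normalize stage (per output pixel), conceptually:
//   if (accumA > 0) { r = accumR / accumA; g = accumG / accumA; b = accumB / accumA; }
//   a = 1.0f;   // set the alpha channel of the panorama to opaque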
[TODO] Crop the resulting image to make the left and right edges seam perfectly (BlendImages). The horizontal extent can be computed in the previous blending routine, since the first image occurs at both the left and right end of the stitched sequence (draw the “cut” line halfway through this image). Apply a linear warp to the mosaic to remove any vertical “drift” between the first and last image. This warp, of the form y' = y + ax, should transform the y coordinates of the mosaic so that the first image has the same y-coordinate on both the left and right end. Calculate the value of 'a' needed to perform this transformation.
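For example (a sketch of the calculation, with names that are not the skeleton's): if the first image sits at vertical offset y_left at the left end of the mosaic and its copy at the right end sits at y_right after a horizontal extent of W pixels, then choosing a = (y_left - y_right) / W gives y_right + a*W = y_left, so the two copies line up and the drift is removed. The sign convention depends on how the skeleton measures these offsets.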
You may also refer to the file
stitch2.txt provided along with the skeleton code for the appropriate command line syntax. This command-line interface allows you to debug each stage of the program independently.
You can use the test results included in the images/ folder to check whether your program is running correctly. Comparing your output to that of the sample solution is also a good way of debugging your program.
Here is a list of suggestions for extending the program for extra credit. You are encouraged to come up with your own extensions. We're always interested in seeing new, unanticipated ways to use this program!
Last modified on April 11, 2005