Computer Vision (CSE 576), Spring 2003

Project 3: Eigenfaces

Assigned: Monday, May 5, 2003
Due: Sunday, May 18, 2003 (by 11:59 pm)

Synopsis

In this project you will create a face recognition system. The program reduces each face image to a vector, then uses Principal Component Analysis (PCA) to find the space of faces. This space is spanned by only a few vectors, which means each face can be described by a small set of coefficients weighting those vectors.

You are given a skeleton program that provides a sophisticated command-line interface as well as most of the image and vector classes and operations you will need. Your job will be to fill in the functions that do PCA, projection into face space, determining if a vector represents a face, verifying a user based on a face, finding a face match given a set of user face information, and finding the size and position of a face in an image.

Principal Component Analysis (PCA)

PCA is a technique for reducing the dimensionality of data points. For example, consider the space of all 20 by 30 pixel grayscale images. This is a 600-dimensional space, because 600 values are required to represent the 600 pixels. But suppose we only consider images that are valid faces. Such images may lie in a small subspace of the set of all images. If this subspace is a hyperplane spanned by k vectors, then we can represent any point that lies on this hyperplane with only k coefficients.

To use PCA for pattern recognition, you first take a representative sample of data points and find the subspace of the desired dimensionality that best fits the sample data. This step needs to be done only once. Afterwards, you can determine whether a new point is a valid data point by projecting it onto the subspace and measuring the distance between the point and its projection.
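
For concreteness, here is a minimal, hedged sketch of this projection test using plain std::vector<double> rather than the skeleton's Vector class. It assumes the mean vector and an orthonormal basis for the subspace have already been computed; isFace (described below) applies the same idea to face images.

    #include <cstddef>
    #include <vector>

    // Minimal sketch (not the skeleton's API): measure how far a point lies
    // from a k-dimensional subspace given by a mean vector and k orthonormal
    // basis vectors. Returns the mean squared distance between the point and
    // its projection onto the subspace.
    double distanceToSubspace(const std::vector<double>& point,
                              const std::vector<double>& mean,
                              const std::vector<std::vector<double>>& basis)
    {
        const std::size_t d = point.size();

        // Center the point about the sample mean.
        std::vector<double> centered(d);
        for (std::size_t i = 0; i < d; i++)
            centered[i] = point[i] - mean[i];

        // Coefficients are dot products with each orthonormal basis vector.
        std::vector<double> coeff(basis.size(), 0.0);
        for (std::size_t j = 0; j < basis.size(); j++)
            for (std::size_t i = 0; i < d; i++)
                coeff[j] += centered[i] * basis[j][i];

        // Reconstruct the projection and accumulate the squared error.
        double mse = 0.0;
        for (std::size_t i = 0; i < d; i++) {
            double proj = 0.0;
            for (std::size_t j = 0; j < basis.size(); j++)
                proj += coeff[j] * basis[j][i];
            double diff = centered[i] - proj;
            mse += diff * diff;
        }
        return mse / d;
    }

A small distance means the point is well explained by the subspace; a large distance means it probably is not a valid data point (for faces, not a face).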

To use PCA for face recognition we must represent each face image as a vector of pixel values. To generate this vector the face image must be cropped and scaled, and its intensity must be normalized. The cropping can either be done by hand or by searching images for faces. This second option is only possible once the face recognition algorithm has already been trained with some input faces.

Input Data

The input data you will be using is divided into several sets, included as folders in the zip files below.

Skeleton Code

eigenfaces_skeleton.zip: Skeleton code
files.zip: All the images. You should unzip them and put them with the skeleton code.

The skeleton code is large, but please take the time to get familiar with the classes and methods it provides. There is a lot of useful functionality included, such as vector and image operations, that would take a lot of time to write yourself. The program uses a simple command-line interface, and it is easy to add your own commands in case you want to tackle some extra credit or facilitate your experimentation. At a minimum you will only need to modify two files, faces.cpp and eigfaces.cpp, but you will still need to understand the contents of most of the other classes.

If you are using Linux, see the separate Linux instructions page.

Classes

A separate page describes the classes in the skeleton code. You will need to read it to understand the organization of the code. It also documents numerous methods that you will need to call in your code, so please read it very carefully.

Functions to code

All the required work can be done in eigfaces.cpp and faces.cpp. The functions you must fill in are members of classes Faces and EigFaces. A Faces object stores a set of user faces loaded from images. EigFaces is derived from Faces. An EigFaces object stores a set of eigenfaces as well as the average face.

Most of the methods you will have to implement are called from main.cpp, so you should study main.cpp closely to see how these methods are used.

Each of the items below has an upper bound on the number of lines of code needed, not including comments. If you need many more lines of code for a given step than the upper bound, you may be doing something wrong, or at least making more work for yourself. Be sure to look in vector.h, image.h, and face.h to see what useful functions are already provided for you.

Note that points will not be taken off for making your code longer than necessary, as long as it is correct and well coded. However, you should try your best not to go too far over the upper bound, as it means unnecessary work for you and will make your code more difficult to read and harder to grade.

  1. Implement Faces::eigenFaces. This method uses PCA to generate eigenfaces from the set of user faces. It takes two parameters: n, the number of eigenfaces to compute, and results, an EigFaces reference parameter where the n computed eigenfaces should be stored. Recognition slides 17 to 20 show how this computation is done. Use Jacobi::jacobi() to find the eigenvectors, much as you did in Project 2. This time, instead of looking for just one eigenvector, you are looking for the top n eigenvectors, so you will need to sort the eigenvectors by eigenvalue. The skeleton provides a function sortEigenValues, which sorts the eigenvalues and returns an ordering so you can find the positions of the top eigenvectors in the matrix; however, you can write your own sort routine if you prefer. Once you have computed the ordering, put the top n eigenfaces into results. (A rough sketch of this step appears after this list.)

    Hints: You will need to store the average face in the results by calling setAverage. The matrices and arrays that jacobi uses are indexed starting at one, so you will need to adjust for this when converting between them and the Vector objects used everywhere else in the program.
    Here is the interface to the jacobi routine:
    
    void jacobi(double **a, int n, double d[], double **v, int *nrot);
    
    Computes all eigenvalues and eigenvectors of a real symmetric matrix
    a[1..n][1..n].  On output, elements of a above the diagonal are
    destroyed.  d[1..n] returns the eigenvalues of a. v[1..n][1..n] is a
    matrix whose columns contain, on output, the normalized eigenvectors
    of a.  nrot returns the number of Jacobi rotations that were required.
    

    Lines required: 36
  2. Implement EigFaces::projectFace. This method projects a face onto the face space to generate a vector of k coefficients, one for each of the k eigenfaces. The parameters are the Face to be projected and coefficients, a reference parameter in which you must store the computed coefficients. (A combined sketch of items 2 through 4 appears after this list.)

    Lines required: 5
  3. Implement constructFace. This method constructs a face from a vector of coefficients. The coefficients are given in a parameter and the constructed face should be stored in the reference parameter result. You should use only the first k coefficients to produce the reconstruction.

    Lines required: 6
  4. Implement isFace. This method decides whether an image is a face. It works by projecting the face onto face space and computing the Mean Square Error (MSE) between the projection and the original. The parameter max_reconstructed_mse is the threshold to use in making the decision. Return the computed MSE in the reference parameter mse.

    Lines required: 7
  5. Implement verifyFace (extra credit). This method decides whether a face belongs to a given user, given coefficients computed from a different face image of that user. It works by computing the MSE between the coefficients obtained by projecting the face onto face space and the user's stored coefficients. Return the MSE in the mse reference parameter.

    Lines required: 4
  6. Implement recognizeFace. This method is like verifyFace, but instead sorts the userbase by closeness of the match with the face. The userbase is simply a set of names paired with coefficient vectors. Set the mse for each user in the userbase, then sort it; the best match will then be the first user in the userbase.

    Lines required: 8
  7. Implement findFace. This method searches an image and finds the n best faces. A struct FacePosition is included to help you out. You can use std::list to keep a list of the top face positions. Whenever a position is found that is better than the last position in the list, insert it into the proper place so that the list stays sorted, and remove the last element of the list. The exception is when there is already a face position in the list that overlaps the one you just found; in that case, keep the better of the two faces and throw the other away. (A rough sketch of this bookkeeping appears after this list.)

    If the parameter crop is true, then you should return the image cropped to the best face in result. Note that this result must be the same scale and resolution as the original image. If crop is false, then draw bright green boxes around each found face in the original image and return this in result. You can assume that crop will only be true when n is one.

    The function should search various scales by scaling the input image for every scale between min_scale and max_scale (simply call the resample method to scale an image). The step parameter controls the distance between scale factors searched. For example, with arguments min_scale=0.1, max_scale=0.5, and step=0.05, the image scales searched will be 0.1, 0.15, 0.2, 0.25, 0.3, 0.35, 0.4, 0.45 and 0.5. You should return the best x and y face position and scale in the reference parameters provided.

    Hints: To minimize processing time, only consider faces completely inside the image. Put the scaling in the outermost loop so you only have to resample the image once for every scale factor. You might want to construct and output a debug image that shows the score for every pixel position. Since your scores probably won't happen to be between 0.0 and 255.0, normalize the debug image before you save it.

    You may have to do some special processing to prevent the algorithm from getting fooled by low-texture areas or areas close to face space but far from the face mean. I found that multiplying the MSE by the distance of the face from the face mean and then dividing by the variance of the face works pretty well. Maybe you can come up with something better than this for extra credit.

    A good debugging technique is to find the correct face scale yourself, and then run the search for only that scale. Once you get it working like that, then move on to getting your code working for a range of scales.

    This is a complicated problem, so don't expect to get perfect results. However, you should be able to find the faces in the given group1.tga image; after all, the faces were used to form the eigenspace! Your code will be tested on this.

    Lines required: 78
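
To make the flow of item 1 concrete, the hypothetical sketch below works on plain std::vector<double> face vectors instead of the skeleton's Face, Vector, and EigFaces classes. It assumes the 1-indexed jacobi interface quoted above (Jacobi::jacobi() in the skeleton), builds the full pixel-by-pixel covariance matrix (the lecture slides may compute the eigenvectors from a smaller matrix), and uses std::sort in place of the provided sortEigenValues.

    #include <algorithm>
    #include <numeric>
    #include <vector>

    // 1-indexed routine provided by the skeleton (Numerical Recipes).
    void jacobi(double **a, int n, double d[], double **v, int *nrot);

    // Hypothetical standalone version of the PCA step: computes the average
    // face and the top nEig eigenvectors of the covariance matrix.
    void computeEigenfaces(const std::vector<std::vector<double>>& faces,
                           int nEig,
                           std::vector<double>& average,
                           std::vector<std::vector<double>>& eigenfaces)
    {
        const int m = static_cast<int>(faces.size());     // number of faces
        const int d = static_cast<int>(faces[0].size());  // pixels per face

        // Average face.
        average.assign(d, 0.0);
        for (int f = 0; f < m; f++)
            for (int i = 0; i < d; i++)
                average[i] += faces[f][i] / m;

        // Allocate 1-indexed matrices for jacobi (row/column 0 is unused).
        auto newMatrix = [d]() {
            double **mat = new double*[d + 1];
            for (int i = 0; i <= d; i++) mat[i] = new double[d + 1]();
            return mat;
        };
        double **a = newMatrix();
        double **v = newMatrix();
        std::vector<double> eval(d + 1, 0.0);

        // Covariance matrix of the mean-subtracted faces.
        for (int i = 1; i <= d; i++)
            for (int j = 1; j <= d; j++)
                for (int f = 0; f < m; f++)
                    a[i][j] += (faces[f][i - 1] - average[i - 1]) *
                               (faces[f][j - 1] - average[j - 1]);

        int nrot = 0;
        jacobi(a, d, eval.data(), v, &nrot);

        // Order the eigenvector indices by decreasing eigenvalue.
        std::vector<int> order(d);
        std::iota(order.begin(), order.end(), 1);
        std::sort(order.begin(), order.end(),
                  [&](int p, int q) { return eval[p] > eval[q]; });

        // Copy the top nEig eigenvectors (columns of v) into 0-indexed vectors.
        eigenfaces.assign(nEig, std::vector<double>(d));
        for (int k = 0; k < nEig; k++)
            for (int i = 0; i < d; i++)
                eigenfaces[k][i] = v[i + 1][order[k]];

        for (int i = 0; i <= d; i++) { delete[] a[i]; delete[] v[i]; }
        delete[] a;
        delete[] v;
    }

In your actual implementation, the average face goes into results via setAverage and the eigenfaces are stored in the EigFaces object rather than returned through reference parameters.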
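
Items 2 through 4 are closely related, so here is one combined, hypothetical sketch, again on plain std::vector<double> (the skeleton's methods take Face and Vector objects and different argument lists). The average and eigenfaces arguments correspond to the outputs of the PCA sketch above.

    #include <cstddef>
    #include <vector>

    // Item 2: project a face onto face space, one coefficient per eigenface.
    std::vector<double> projectFace(const std::vector<double>& face,
                                    const std::vector<double>& average,
                                    const std::vector<std::vector<double>>& eigenfaces)
    {
        std::vector<double> coeff(eigenfaces.size(), 0.0);
        for (std::size_t k = 0; k < eigenfaces.size(); k++)
            for (std::size_t i = 0; i < face.size(); i++)
                coeff[k] += (face[i] - average[i]) * eigenfaces[k][i];
        return coeff;
    }

    // Item 3: reconstruct a face from the first k coefficients.
    std::vector<double> constructFace(const std::vector<double>& coeff,
                                      std::size_t k,
                                      const std::vector<double>& average,
                                      const std::vector<std::vector<double>>& eigenfaces)
    {
        std::vector<double> result(average);            // start from the mean face
        for (std::size_t j = 0; j < k && j < coeff.size(); j++)
            for (std::size_t i = 0; i < result.size(); i++)
                result[i] += coeff[j] * eigenfaces[j][i];
        return result;
    }

    // Item 4: decide whether an image is a face by comparing it with its
    // reconstruction from face space.
    bool isFace(const std::vector<double>& face,
                double max_reconstructed_mse, double& mse,
                const std::vector<double>& average,
                const std::vector<std::vector<double>>& eigenfaces)
    {
        std::vector<double> coeff = projectFace(face, average, eigenfaces);
        std::vector<double> rec = constructFace(coeff, coeff.size(),
                                                average, eigenfaces);
        mse = 0.0;
        for (std::size_t i = 0; i < face.size(); i++) {
            double diff = face[i] - rec[i];
            mse += diff * diff;
        }
        mse /= face.size();
        return mse < max_reconstructed_mse;
    }

verifyFace (item 5) is the same MSE computation applied to two coefficient vectors instead of two images, and recognizeFace (item 6) applies it to every user in the userbase before sorting.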
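
Finally, the trickiest bookkeeping in item 7 is the sorted list of candidate face positions. The sketch below shows one hypothetical way to maintain it; the skeleton's FacePosition struct almost certainly has different fields, and the crude center-distance overlap test could be replaced by a proper bounding-box intersection.

    #include <algorithm>
    #include <cmath>
    #include <cstddef>
    #include <list>

    // Hypothetical candidate record; the score could be, for example, the MSE
    // multiplied by the distance from the mean face and divided by the face's
    // variance, as suggested in the hints (lower is better).
    struct Candidate {
        double x, y;      // face position in original-image coordinates
        double scale;     // scale factor at which the face was found
        double score;     // lower is better
    };

    // Two candidates "overlap" here if their centers are closer than one face
    // width in the original image; faceWidth is the eigenface width in pixels.
    static bool overlaps(const Candidate& a, const Candidate& b, double faceWidth)
    {
        double limit = faceWidth / std::min(a.scale, b.scale);
        return std::hypot(a.x - b.x, a.y - b.y) < limit;
    }

    // Keep at most n candidates, sorted best (lowest score) first.
    void insertCandidate(std::list<Candidate>& best, const Candidate& c,
                         std::size_t n, double faceWidth)
    {
        // If the new candidate overlaps an existing one, keep only the better.
        for (std::list<Candidate>::iterator it = best.begin();
             it != best.end(); ++it) {
            if (overlaps(*it, c, faceWidth)) {
                if (c.score < it->score) *it = c;     // replace the worse one
                best.sort([](const Candidate& p, const Candidate& q)
                          { return p.score < q.score; });
                return;
            }
        }
        // Otherwise insert in sorted order and trim the list back to n entries.
        std::list<Candidate>::iterator it = best.begin();
        while (it != best.end() && it->score <= c.score) ++it;
        best.insert(it, c);
        if (best.size() > n) best.pop_back();
    }

In findFace itself you would call something like insertCandidate from inside the scale/x/y search loops, then crop or draw boxes for the surviving candidates.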

Hints, Tips, Requirements, Warnings

Here are general tips, requirements, and things to watch out for that may come in handy as you are coding your project or running your experiments.

Experiments

You must complete all of the following experiments to test your code. The results of each experiment, including the answers to all the questions, should be written in your write-up, along with a discussion of why you think each experiment turned out the way it did, as well as any unexpected results.

Note that some of the experiments may require you to run a command over and over again. One way to accomplish this is to use a batch file. Or, if you wish, you can easily add your own command that takes the place of a string of commands. See the description of class Main above to find out how to do this.

Testing recognition with cropped class images

Procedure

  1. Use the cropped, non-smiling students (in class_nonsmiling_cropped) to compute 10 eigenfaces.  Show the average face and eigenfaces.
  2. Use the same set of images to compute a userbase.
  3. Have the program recognize the cropped, smiling students.
    1. Experiment with the number of eigenfaces used.  Try using the mean face plus 1 through 33 eigenfaces, at a granularity of 2 (this means a lot of experiments!  Writing a little script will make this a lot easier!).
    2. Plot the number of faces correctly recognized versus the number of eigenfaces used.

Questions

  1. Describe the trends you see in your plots.   Discuss the tradeoffs; how many eigenfaces should one use?  Is there a clear answer?  
  2. You likely saw some recognition errors in step 3; show images of a couple of them.  How reasonable were the mistakes?  Did the correct answer at least appear high in the sorted results?


Cropping and finding faces

Procedure

  1. Use the 10 eigenfaces file computed in the previous problem.
  2. Use your program to crop the aseem.tga image. Use min_scale, max_scale, and step parameters of 0.45, 0.55, and 0.01. You should be able to successfully crop the image; if not, work on your code!
  3. Find a digital picture of yourself; if you really don't have one, use any  portrait on the web.  Use your program to crop the picture.
  4. Experiment with min_scale, max_scale, and step parameters to find ones that work robustly and accurately.
  5. Find the faces in two different photos (use the crop=false option):
    1. First, try the group1.tga image given; you should be able to find all 3 faces. If not, work on your code!
    2. Find and try another group photo.  This could be of family and/or friends, or one from the web.  The photo should have at least four faces.

Questions

  1. What min_scale, max_scale, and scale step did you use for each image?  
  2. Did your attempt to find faces result in any false positives and/or false negatives?  Discuss each mistake, and why you think it might have occurred.


Write-up

Your write-up should be an HTML document (here are some tips); the main document should be named index.html and placed at the top level of the artifact turnin directory.   Make sure you include all the images you used for face finding and cropping, and your plots!  Also include the average face and eigenfaces from the first experiment.  Make sure any images that need it are normalized so they can be seen clearly. Finally, make sure to fully document any extra credit on this web page.

Feel free to put your write-up in multiple files if you want, but make sure that they are all accessible via links from the main page.

Extra Credit

Implement morphing: given two images, compute an animation that shows one being transformed continuously into the other, using the eigenfaces representation. Also try transforming an image "away" from another one, to exaggerate the differences between two faces (make a face more male, more female, ...). Make a video from your morphed faces, e.g. using Adobe Premiere.
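
A minimal sketch of the morph, assuming the standalone projectFace and constructFace helpers from the sketches earlier on this page (not the skeleton's exact signatures): project both faces into face space, blend the two coefficient vectors, and reconstruct one frame per time step.

    #include <cstddef>
    #include <vector>

    // Declared in the earlier sketch of items 2-4.
    std::vector<double> projectFace(const std::vector<double>& face,
                                    const std::vector<double>& average,
                                    const std::vector<std::vector<double>>& eigenfaces);
    std::vector<double> constructFace(const std::vector<double>& coeff,
                                      std::size_t k,
                                      const std::vector<double>& average,
                                      const std::vector<std::vector<double>>& eigenfaces);

    // Returns numFrames reconstructed faces morphing faceA into faceB.
    // Evaluating the same blend with t outside [0, 1] pushes a face "away"
    // from the other, exaggerating their differences.
    std::vector<std::vector<double>> morphFrames(
        const std::vector<double>& faceA,
        const std::vector<double>& faceB,
        int numFrames,
        const std::vector<double>& average,
        const std::vector<std::vector<double>>& eigenfaces)
    {
        if (numFrames < 2) numFrames = 2;
        std::vector<double> ca = projectFace(faceA, average, eigenfaces);
        std::vector<double> cb = projectFace(faceB, average, eigenfaces);

        std::vector<std::vector<double>> frames;
        for (int f = 0; f < numFrames; f++) {
            double t = static_cast<double>(f) / (numFrames - 1);
            std::vector<double> c(ca.size());
            for (std::size_t k = 0; k < c.size(); k++)
                c[k] = (1.0 - t) * ca[k] + t * cb[k];   // linear blend of coefficients
            frames.push_back(constructFace(c, c.size(), average, eigenfaces));
        }
        return frames;
    }

Each returned frame would then be written out as an image and assembled into a video.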


Try using eigenfaces to recognize images other than faces or to recognize animal faces.


 Implement the following speedup.  This is a good extra credit item to try early on, as it will reduce the time taken to do the experiments!

Implement verifyFace as described in the list of methods to implement above. Perform the following experiment.


Extend eigenfaces to handle changes in illumination and/or viewpoint. You can find links to additional face databases that may contain such effects here (look towards the bottom) and here.


Try using kernel PCA, or some other non-linear variant of PCA.   A good starting point is the following survey paper. Compare your results with those of standard PCA.


Implement real-time face detection (multiple frames per second). A good reference to begin with is here.