CSE576 Project 3: Eigenfaces
Harlan Hile

Here is the nonsmiling face space with 25x25 images and 10 eigenvectors, starting with the average face, or get the data file.

Here is a graph of recognition results using the nonsmiling eigenvectors to recognize smiling faces, generated with this script that does everything.

Question 1: Increasing the number of eigenvectors used to describe the face space increases recognition ability up to a point, and then it levels off. A reasonable value seems to be about half the number of input images, but this likely depends on the input and on how similar the images to be recognized are to the training images.
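To make the experiment concrete, here is a minimal sketch of the recognition step being measured (the function name and array layouts are my own assumptions, not the project skeleton): each face is projected onto the first k eigenfaces, and a probe counts as recognized when its nearest neighbor in coefficient space is the same person.

```python
import numpy as np

def recognition_rate(eigvecs, mean_face, gallery, probes, k):
    """Fraction of probe (smiling) faces whose nearest neighbor among the
    gallery (nonsmiling) faces, in k-dimensional coefficient space, is the
    same person. eigvecs: (num_vecs, pixels), mean_face: (pixels,),
    gallery/probes: (num_faces, pixels), with matching row order."""
    V = eigvecs[:k]                        # keep only the first k eigenfaces
    g = (gallery - mean_face) @ V.T        # gallery coefficients, shape (n, k)
    p = (probes - mean_face) @ V.T         # probe coefficients, shape (n, k)
    correct = 0
    for i, coeffs in enumerate(p):
        dists = np.linalg.norm(g - coeffs, axis=1)   # distance in face space
        if np.argmin(dists) == i:                    # best match is same person
            correct += 1
    return correct / len(probes)
```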
Question 2: Looking at the graphs, most faces were matched correctly, and nearly all were matched in the top three, but that still leaves a few that were not ranked in the top 3. Running specific examples to see where errors occurred, a few faces were poorly recognized, not even ranked in the top half: song, downey, hahn, and su. Most of the rankings improve as more eigenvectors are used. Taking su as an example, the smiling and non-smiling images look surprisingly similar, but the brighter highlight on the glasses in the non-smiling image causes its normalized version to be darker than the smiling one, which means that in face space the smiling su looks more like a lighter-skinned person than like the non-smiling su. Similarly, downey's extra sparkly teeth make the normalized image darker.
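A toy illustration of this range-normalization effect (assuming simple min-max normalization, which is what "normalize for range" means here): a single bright highlight stretches the range and pushes the rest of the face toward darker values.

```python
import numpy as np

def range_normalize(img):
    """Scale pixel values so the minimum maps to 0 and the maximum maps to 1."""
    lo, hi = img.min(), img.max()
    return (img - lo) / (hi - lo)

# Two toy "faces" with the same skin brightness; the second has a specular
# highlight on the glasses (a single very bright pixel).
face       = np.array([0.4, 0.5, 0.5, 0.6])
face_glare = np.array([0.4, 0.5, 0.5, 0.9])

print(range_normalize(face))        # the 0.5 skin pixels land at 0.5
print(range_normalize(face_glare))  # the same skin pixels land at 0.2,
                                    # i.e. the normalized face is darker
```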

It would be nice if we could normalize for lighting rather than range, but unfortunately lighting changes a face in basically the same way skin tone does. However, looking at the face space, the first two eigenvectors seem to roughly correspond to skin tone, so we can simply ignore those first two coefficients when ranking recognition matches. This matches all but one face in the top 3 for reasonable numbers of eigenvectors, but otherwise behaves similarly to the original (see the graph below). This approach only works because there is some understanding of the projection space, so decisions can be made about what to throw out.
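A minimal sketch of the modified ranking (names and array layouts are assumptions): the distance in coefficient space simply skips the first two coefficients.

```python
import numpy as np

def rank_matches(eigvecs, mean_face, gallery, probe, skip=2):
    """Rank gallery faces against one probe face, ignoring the first `skip`
    coefficients (the eigenvectors that seem to encode skin tone).
    eigvecs: (num_vecs, pixels), gallery: (num_faces, pixels), probe: (pixels,)."""
    g = (gallery - mean_face) @ eigvecs.T   # gallery coefficients
    p = (probe - mean_face) @ eigvecs.T     # probe coefficients
    dists = np.linalg.norm(g[:, skip:] - p[skip:], axis=1)
    return np.argsort(dists)                # gallery indices, best match first
```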


Here is Aseem's picture cropped:

Here is a picture of me, cropped using range 0.24,1,0.04 (with the match at scale 0.56), from this image. TGA files are also available if you look in the directory.

Here is the sample group picture marked using range 0.24,1,0.04 (best matches at scale 0.96):

Here is a group picture marked using range 0.35,0.85,0.04 (matches in the range 0.59-0.83):

In order for this to work well, extra information was necessary. The mostly straight-on faces matched well, but the angled faces were not found. The easiest source of extra information was color, so I imposed a penalty on regions that were far from reasonable skin hues. Brighter green marks indicate better matches. Results without this additional penalty are available here.
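Roughly, the hue penalty looks like the sketch below; the target hue, tolerance, and weight shown are illustrative placeholders rather than the values actually used for the results above.

```python
import colorsys
import numpy as np

def skin_hue_penalty(rgb_region, target_hue=0.05, tolerance=0.08, weight=1.0):
    """Extra error for a candidate window whose average hue is far from a
    rough skin hue. target_hue, tolerance, and weight are placeholders."""
    r, g, b = rgb_region.reshape(-1, 3).mean(axis=0)   # average color, values in [0, 1]
    hue, _, _ = colorsys.rgb_to_hsv(r, g, b)
    # hue is circular, so take the shorter distance around the color wheel
    diff = min(abs(hue - target_hue), 1.0 - abs(hue - target_hue))
    return weight * max(0.0, diff - tolerance)         # no penalty inside the tolerance band
```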

Question 1: We're probably only interested in faces that take up at least 5% of the image width and at most 50%, so that gives a scaling range for a given image resolution. That's a large range, about 0.2 to 1, but it is about all I can guess without other prior knowledge. If we know the number of faces in the image, we can reduce the maximum face size from 50% of the image width to around 1/(number of faces). The step size should change the face size by about one pixel, so 0.04 should be reasonable. Experiments were done with these rules in mind (although run much faster with smaller ranges), and the exact ranges used are noted with each picture.
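The reasoning above amounts to something like this sketch (a guess at the bookkeeping, not the project code), where the image is scaled so a candidate face fills the 25x25 template:

```python
def scale_range(image_width, template_width=25, min_frac=0.05, max_frac=0.5,
                num_faces=None):
    """Guess min scale, max scale, and step for the face search, assuming the
    image is scaled so a face matches the 25x25 template, i.e.
    scale = template_width / face width in pixels."""
    if num_faces:
        max_frac = min(max_frac, 1.0 / num_faces)            # face count caps the face size
    min_scale = template_width / (max_frac * image_width)    # biggest face -> smallest scale
    max_scale = template_width / (min_frac * image_width)    # smallest face -> largest scale
    step = 1.0 / template_width                              # changes face size by ~one pixel (0.04)
    return min_scale, max_scale, step

# For a 500-pixel-wide image this gives roughly (0.1, 1.0, 0.04).
```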
Question 2: False positives often arose in low-texture regions, despite including distance from the average face and variance in the error metric. Matches were also often found where there was a strong gradient in the lower portion of the window, which perhaps resembled a mouth. Faces were often missed if they did not match what was in the training data. Different lighting, for instance outdoor lighting with strong shadows, makes faces look much different. Faces at different angles, not looking straight at the camera, are also not included in the training data, so they are poorly recognized. Examples of this can be seen in the group picture I included, but by including color information in the search, these faces can be found too.
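For reference, here is one plausible way to fold distance from the average face and variance into the detection error (the additive combination and weights below are placeholders, not necessarily the exact combination used here):

```python
import numpy as np

def detection_error(window, eigvecs, mean_face,
                    w_face=1.0, w_texture=1.0, eps=1e-6):
    """Error for one candidate window: distance from face space plus a term
    for distance from the average face, with low-variance (flat, textureless)
    windows penalized."""
    x = window.flatten() - mean_face
    coeffs = eigvecs @ x                    # project into face space
    recon = eigvecs.T @ coeffs              # reconstruct from the coefficients
    mse = np.mean((x - recon) ** 2)         # distance from face space
    dist_from_avg = np.mean(x ** 2)         # distance from the average face
    texture = window.var()                  # low variance = textureless region
    return mse + w_face * dist_from_avg + w_texture / (texture + eps)
```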


Extras: