Eigenfaces Artifact

Andy Hou
CSE 455 Computer Vision
Project 4


Average Face + 10 Eigenfaces



Faces Recognized vs Number of Eigenfaces Used



Common Recognition Mistakes

Face 22 was mistaken for Face 1.

Face 1 was mistaken for Face 4.

Face 20 was mistaken for Face 9.

Face 4 was mistaken for Face 13.

Face 2 was mistaken for Face 15.

Face 14 was mistaken for Face 20.

Face 23 was mistaken for Face 21.


Testing Recognition Discussion
Increasing the number of eigenfaces used increases the number of faces correctly recognized. The benefit tapers off at around 10 eigenfaces; beyond that, adding more eigenfaces has little effect on the number of faces recognized. Since more eigenfaces also mean increased computation time and storage space, it is probably best to use around 10 eigenfaces in order to get the maximum benefit at the minimum cost.
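For reference, the core eigenface computation can be sketched as PCA via SVD: subtract the average face, take the top-k right singular vectors, and keep only k of them (the tradeoff discussed above). This is an illustrative sketch with made-up data, not the project's actual implementation; the 25x25 face size comes from the discussion below.

```python
import numpy as np

def compute_eigenfaces(faces, k=10):
    """Average face and top-k eigenfaces from flattened faces (n x pixels).
    Sketch via PCA/SVD; the project's real code may differ."""
    mean_face = faces.mean(axis=0)
    centered = faces - mean_face
    # Rows of vt are principal directions (eigenfaces), sorted by variance.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return mean_face, vt[:k]

# Toy example: six random 25x25 "faces", flattened to 625-vectors.
rng = np.random.default_rng(0)
faces = rng.random((6, 25 * 25))
mean_face, eigenfaces = compute_eigenfaces(faces, k=4)
```

Because the eigenfaces come from an SVD, they form an orthonormal basis, so projecting a face onto them is just a matrix-vector product.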

The face pairs that were consistently misrecognized do look similar, especially considering that the faces were scaled down to 25x25 pixels. The pair that seems least plausible to confuse, to me, is faces 2 and 15. Even in the error cases, the correct face usually appeared high in the sorted list of results.
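The "sorted list of results" mentioned above can be sketched as ranking gallery faces by distance to the probe in eigenface coefficient space. The L2 metric here is an assumption for illustration; the project may use a different distance.

```python
import numpy as np

def rank_matches(probe, gallery, mean_face, eigenfaces):
    """Return gallery indices sorted best-match-first, by L2 distance
    between eigenface coefficients (illustrative sketch)."""
    probe_c = eigenfaces @ (probe - mean_face)          # probe coefficients
    gallery_c = (gallery - mean_face) @ eigenfaces.T    # one row per face
    dists = np.linalg.norm(gallery_c - probe_c, axis=1)
    return np.argsort(dists)

# Toy data: five fake faces; probe is a slightly noisy copy of face 2.
rng = np.random.default_rng(1)
gallery = rng.random((5, 625))
mean_face = gallery.mean(axis=0)
_, _, vt = np.linalg.svd(gallery - mean_face, full_matrices=False)
eigenfaces = vt[:3]
probe = gallery[2] + 0.01 * rng.standard_normal(625)
ranking = rank_matches(probe, gallery, mean_face, eigenfaces)
```

A misrecognition corresponds to the wrong index landing at `ranking[0]`, with the correct face still appearing near the front of the list.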


Cropping elf.tga
Original Image:


Scoremap for scale=0.50 (darker areas are better matches):


Result Image (min_scale=0.45, max_scale=0.55, step=0.01):



Cropping me.tga
Original Image:


Scoremap for scale=0.50 (darker areas are better matches):


Result Image (min_scale=0.40, max_scale=0.54, step=0.02):



Finding Faces in IMG_0031.tga
Scoremap for scale=0.50 (darker areas are better matches):


Result Image (min_scale=0.40, max_scale=0.60, step=0.02):



Finding Faces in group.tga
Scoremap for scale=1.00 (darker areas are better matches):


Result Image (min_scale=0.86, max_scale=1.06, step=0.02):



Finding Faces in family.tga
Scoremap for scale=0.80 (darker areas are better matches):


Result Image (min_scale=0.70, max_scale=0.80, step=0.02):



Finding Faces Discussion
My face-finding method uses a combination of two scores. The first score is the MSE between the projection of a face at a position and the actual image at that position; this is basically the value returned by the isFace method. Areas of the image with a low first score are more likely to contain face patterns. The second score is the MSE between the pixel colors around a position and the average skin color; areas with a low second score are more likely to be skin-colored. These two scores are normalized and then multiplied together to generate a score map for the image. Darker areas in the score map correspond to lower MSE and thus better matches.
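The normalize-and-multiply step above can be sketched as follows. The min-max normalization and the per-pixel skin MSE are assumptions for illustration; only the overall scheme (two MSE maps, normalized, multiplied) comes from the description above.

```python
import numpy as np

def normalize(m):
    """Rescale a score map to [0, 1] (assumed normalization)."""
    lo, hi = m.min(), m.max()
    return (m - lo) / (hi - lo) if hi > lo else np.zeros_like(m)

def skin_mse_map(image_rgb, avg_skin):
    """Per-pixel MSE between an H x W x 3 image and the average skin color."""
    return ((image_rgb - avg_skin) ** 2).mean(axis=-1)

def combined_scoremap(face_mse, skin_mse):
    """Normalize both MSE maps and multiply; lower (darker) = better match."""
    return normalize(face_mse) * normalize(skin_mse)

# Toy data standing in for the real maps.
rng = np.random.default_rng(2)
face_mse = rng.random((40, 60))                 # pretend isFace MSE map
img = rng.random((40, 60, 3))                   # pretend RGB image
avg_skin = np.array([0.8, 0.6, 0.5])            # made-up average skin color
score = combined_scoremap(face_mse, skin_mse_map(img, avg_skin))
```

Multiplying the normalized maps means a position must score well on both criteria to end up dark in the final map; a low face-pattern MSE alone is not enough if the colors are far from skin tones.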

I was surprised at how well face finding worked on the group image. Every one of the 28 faces was correctly identified with no false positives. The only problem is that two of the boxes are off-center on their faces. On the other hand, it did pretty badly on the family image. It only identified two faces well, and it seemed to prefer necks to faces in several instances. It also marked two patches of arm and one patch of roof tiles. I think part of the problem is that I based the average skin color on the group image. Since the group image has relatively darker skin tones than the family image, the face finder preferred areas of darker skin in the family image, i.e. the necks. Another problem is that the eigenfaces were generated from the faces of students in the class, who also appeared in the group image. Since the faces in the family image are of different people, they would have higher MSE under those eigenfaces. But then again, it didn't even find my own face in the family image, so this explanation might not be entirely correct.