Average face
Eigenfaces
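For reference, here is a rough sketch (not my actual project code) of how the average face and eigenfaces shown above can be computed: flatten the training faces into vectors, subtract the mean, and keep the top right singular vectors of the centered data matrix.

```python
import numpy as np

def compute_eigenfaces(face_images, num_eigenfaces=10):
    """Return (average_face, eigenfaces) for a list of equal-size 2-D arrays."""
    h, w = face_images[0].shape
    # Flatten each face into a row vector and stack into a data matrix.
    data = np.stack([np.asarray(img, dtype=np.float64).reshape(-1)
                     for img in face_images])
    average = data.mean(axis=0)
    centered = data - average
    # SVD of the centered data: the right singular vectors are the eigenfaces.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return average.reshape(h, w), vt[:num_eigenfaces].reshape(num_eigenfaces, h, w)
```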

The number of correct recognitions is fairly low when there are fewer than 7 eigenfaces. At around 10-11 eigenfaces, the graph plateaus, with only small variations afterwards. Naturally, the running time gets worse as we increase the number of eigenfaces, so if running time is a concern, about 10 to 15 eigenfaces will suffice to produce relatively decent results. This number will vary depending on the sample of training images, among other conditions.
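The recognition experiment behind those numbers can be sketched roughly as follows (the names and array shapes are assumptions for illustration, not my actual code): project the query and every training face onto the first k eigenfaces and pick the nearest neighbor, where k is the number of eigenfaces being varied.

```python
import numpy as np

def project(face, average, eigenfaces, k):
    """Coefficients of `face` in the space of the first k eigenfaces."""
    basis = eigenfaces.reshape(len(eigenfaces), -1)[:k]
    return basis @ (face.reshape(-1) - average.reshape(-1))

def recognize(query, training_faces, average, eigenfaces, k=10):
    """Index of the training face closest to `query` in face space."""
    q = project(query, average, eigenfaces, k)
    distances = [np.linalg.norm(q - project(t, average, eigenfaces, k))
                 for t in training_faces]
    return int(np.argmin(distances))
```

Every extra eigenface adds a dimension to each projection and distance computation, which is where the extra running time comes from.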
[Face image] was mistaken for [face image].
[Face image] was mistaken for [face image].
The first of the above mistakes is reasonable: the expressions and facial features have many similarities. The second mistake is more questionable, although both faces have similar curves (such as around the mouth) that may have contributed to the algorithm's failure. The good news is that, most of the time, the correct answer was in the top three or four choices.
Elf picture original
Cropped face 
Ok, this is spooky…
Original picture of me
Cropped
In my attempt to crop my face, I decided to take a risk. I wanted to know how the algorithm would deal with a face seen from a different angle when all the training images are frontal views. I used 0.1, 0.3, and 0.1 for my parameters min_scale, max_scale, and scale, respectively. As you can see, the cropped image is of an area on the wall. However, if you look closely at the cropped image, you will immediately see why the algorithm might have been fooled. To the lower left of the cropped image, there appear to be some darker, shadowy spots on the wall. These spots seem to form a ghostly-looking face: one can make out the rough outlines of two eyes, a nose, and part of a mouth. So perhaps, given the frontal view of the training images, the algorithm is actually working.
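The multi-scale search that takes those three parameters can be sketched like this; the scale semantics, the scoring helper, and the use of scipy for resizing are my assumptions for illustration, not the project's actual code. A window the size of the average face is scanned over the image at each scale, and the window with the lowest reconstruction error in face space wins.

```python
import numpy as np
from scipy.ndimage import zoom  # assumed available for resizing

def face_space_error(window, average, eigenfaces):
    """Reconstruction error of a window projected onto the eigenfaces."""
    basis = eigenfaces.reshape(len(eigenfaces), -1)
    centered = window.reshape(-1) - average.reshape(-1)
    coeffs = basis @ centered
    return float(np.linalg.norm(centered - basis.T @ coeffs))

def find_face(image, average, eigenfaces, min_scale=0.1, max_scale=0.3, step=0.1):
    """Return (error, scale, row, col) of the most face-like window found."""
    fh, fw = average.shape
    best = None
    for scale in np.arange(min_scale, max_scale + 1e-9, step):
        scaled = zoom(image, scale)  # resize the whole image for this scale
        for r in range(scaled.shape[0] - fh + 1):
            for c in range(scaled.shape[1] - fw + 1):
                err = face_space_error(scaled[r:r + fh, c:c + fw],
                                       average, eigenfaces)
                if best is None or err < best[0]:
                    best = (err, scale, r, c)
    return best
```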
Group Image 1 
In this group photo, my algorithm had problems finding the faces. It found Jon's face rather well but completely missed the other two faces. Because of my overlap error, I had it identify many windows just to give the algorithm some more chances, but it was fooled by the folds of the leather jacket.
Group Image 2 
Unfortunately, I wasn’t able to get rid of overlapping, but the face detection worked out fairly accurately. (Note: If you decide to test this with my code, please use n=15, not 3. Due to the overlapping, you will not get all three faces if you use too few windows.)
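The overlapping itself is usually removed with a greedy suppression pass over the candidate windows. A rough sketch of that idea (not something my code currently does), assuming each detection is an (error, row, col, size) tuple where a lower error is better:

```python
def suppress_overlaps(detections, max_overlap=0.25):
    """Greedily keep non-overlapping detections, best (lowest error) first."""
    kept = []
    for det in sorted(detections, key=lambda d: d[0]):
        _, r, c, size = det
        overlaps = False
        for _, kr, kc, ksize in kept:
            # Intersection area of the two square windows.
            dh = min(r + size, kr + ksize) - max(r, kr)
            dw = min(c + size, kc + ksize) - max(c, kc)
            inter = max(dh, 0) * max(dw, 0)
            if inter > max_overlap * min(size * size, ksize * ksize):
                overlaps = True
                break
        if not overlaps:
            kept.append(det)
    return kept
```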
Group Image 3 Original 
Group Image 3 Result 
Above is a group image from my high school years. The darkness of the picture might have affected the results. Unfortunately, the algorithm was fooled by a patch of hair and only identified one face out of the five correctly.
Parameters (min_scale, max_scale, scale) used for each image:
Picture of just me: 0.1, 0.3, 0.1
Group 1: 0.5, 0.5, 0.01
Group 2: 0.5, 0.5, 0.01
Group 3: 0.3, 0.5, 0.05
As I mentioned before, group image 2 (the test image) worked fairly well, disregarding the overlapping, and the elf image was also cropped really well. Faces in the other images were harder to detect. In some cases, it may simply be that I didn't choose the window sizes wisely. Another reason could be that I didn't account for lighter areas, something I found hard to do, since you don't want to simply discount bright regions; they may very well be a face. Features of the environment can also play a role if there are things in the image that resemble a face, as with the shape on the wall in the picture of me and the folds of the jacket in group image 1. The poor matching in group image 3 might be due to the lighting of the hair against the dark background. I also noticed that people who appear in the training images tend to have a better chance of being detected, as one would intuitively expect.
I implemented verify_face() and used the following line to test it:
main --verifyface smiling_cropped/27.tga base.user nonsmiling_cropped/27 eigenfaces.face 500
The algorithm correctly verified that [smiling face image] is the same person as [non-smiling face image], which is the result we wanted.
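For reference, the verification logic can be sketched as below, with the threshold corresponding to the 500 passed on the command line. The function and argument names are assumptions for illustration, not my actual verify_face() signature: both faces are projected into face space, and the match is accepted if the mean squared difference of their coefficients stays under the threshold.

```python
import numpy as np

def verify_face(face, user_face, average, eigenfaces, max_mse=500.0):
    """Return True if the two faces project to nearby points in face space."""
    basis = eigenfaces.reshape(len(eigenfaces), -1)
    a = basis @ (face.reshape(-1) - average.reshape(-1))
    b = basis @ (user_face.reshape(-1) - average.reshape(-1))
    mse = float(np.mean((a - b) ** 2))
    return mse < max_mse
```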