Here is the average face and the ten eigenfaces from the first experiment:
[average face] [eigen0] [eigen1] [eigen2] [eigen3] [eigen4] [eigen5] [eigen6] [eigen7] [eigen8] [eigen9]


Here are the results from the first set of experiments:
[chart: recognition accuracy vs. number of eigenvectors]

As you can see, the accuracy generally increases with the number of eigenvectors but then plateaus.  I believe this is a kind of overfitting of the training data:
past a certain point, the extra eigenvectors capture incidental details of the training faces rather than features that discriminate among people.  Probably the best
way to decide how many eigenvectors to use is to look at the actual eigenvalues and see whether there is a big drop-off somewhere; failing that, running an
experiment like this one works.
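The eigenvalue drop-off test can be sketched as follows. This is a minimal PCA-via-SVD sketch on synthetic data; the `eigenfaces` helper and the toy faces are my own illustration, not the assignment code:

```python
import numpy as np

def eigenfaces(images, k):
    """Mean face plus top-k eigenfaces from row-stacked face images.

    images: (n_samples, n_pixels) array, one flattened face per row.
    Returns (mean_face, top_k_eigenfaces, eigenvalues); the eigenvalues
    come back sorted largest-first, so a big drop-off is easy to spot.
    """
    mean = images.mean(axis=0)
    centered = images - mean
    # SVD of the centered data gives the PCA basis without building the
    # full pixel-by-pixel covariance matrix.
    _, s, vt = np.linalg.svd(centered, full_matrices=False)
    eigvals = (s ** 2) / (len(images) - 1)
    return mean, vt[:k], eigvals

# Toy data: 20 "faces" of 64 pixels whose variance lives almost entirely
# in 3 directions, so the spectrum should collapse after the 3rd value.
rng = np.random.default_rng(0)
basis = rng.normal(size=(3, 64))
faces = rng.normal(size=(20, 3)) @ basis + 0.01 * rng.normal(size=(20, 64))
mean, efaces, eigvals = eigenfaces(faces, k=10)
print(eigvals[:5] / eigvals[0])  # values past index 2 are tiny: the drop-off
```

On real faces the spectrum decays more gradually, which is exactly why an accuracy experiment like the one above is a useful fallback.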


Here are a few errors from my face recognition:
[downey-smile]   downey confused with eckart   [eckart-nonsmile]

[hoyt-smile]   hoyt confused with gauthier   [gauthier-nonsmile]

In both cases, the right face was in the top four.
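That top-four ranking comes from nearest-neighbor distance in face space. A sketch of the idea, assuming Euclidean distance on the projected coordinates; the `recognize` helper, the trivial identity basis, and the toy gallery are illustrative stand-ins, not my actual implementation:

```python
import numpy as np

def recognize(probe, gallery, mean, face_space, top_k=4):
    """Rank gallery faces by distance to the probe in face space.

    probe: flattened face to identify; gallery: (n, n_pixels) array of
    known faces; face_space: (k, n_pixels) eigenface basis.  Returns the
    indices of the top_k closest gallery faces, best match first, so a
    confusion still "counts" when the right person makes the list.
    """
    project = lambda x: (x - mean) @ face_space.T  # coords in face space
    dists = np.linalg.norm(project(gallery) - project(probe), axis=1)
    return np.argsort(dists)[:top_k]

# Toy example: 5 random "faces", a trivial identity basis, and a probe
# that is a noisy copy of gallery face 3.
rng = np.random.default_rng(1)
gallery = rng.normal(size=(5, 16))
probe = gallery[3] + 0.01 * rng.normal(size=16)
ranks = recognize(probe, gallery, gallery.mean(axis=0), np.eye(16))
print(ranks)  # index 3 comes first
```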


I found Aseem's face in this image:
[aseem]

and got this:
[aseemface]
at a scale of 0.45.

I don't have a digital picture of myself, so I found a random guy on Google Images and cropped his portrait:
[randomguy]

to get this:
[guyface]
at a scale of 0.28.

I marked the faces my detector found in the group1 image (min_scale = 0.85, max_scale = 1.1, step = 0.05):
[group1face]


and I also found a random group of girls on Google Images, who turned out to be hard to detect:
[group2face]

I used a min_scale of 0.68, max_scale of 0.90, and step of 0.02.
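The scale sweep itself is just a linear scan from min_scale to max_scale. A sketch under that assumption; the `score_at` callback is a hypothetical stand-in for the per-scale detection score, not the real scoring code:

```python
import math

def scan_scales(score_at, min_scale, max_scale, step):
    """Sweep scales from min_scale to max_scale and keep the best hit.

    score_at(scale) stands in for "resize, scan the window over the
    image, return (score, location) of the best match at that scale";
    lower score means a better match.
    """
    best_score, best_loc, best_scale = math.inf, None, None
    scale = min_scale
    while scale <= max_scale + 1e-9:  # tolerate float accumulation
        score, loc = score_at(scale)
        if score < best_score:
            best_score, best_loc, best_scale = score, loc, round(scale, 4)
        scale += step
    return best_score, best_loc, best_scale

# Hypothetical scorer that matches best near scale 0.45, mimicking the
# single-face searches above.
score, loc, scale = scan_scales(lambda s: (abs(s - 0.45), (0, 0)),
                                min_scale=0.20, max_scale=0.60, step=0.05)
print(scale)  # 0.45
```

A finer step (like the 0.02 used here) trades running time for a better chance of landing near each face's true scale.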

I couldn't get the detection any better than this.  The textured areas and one girl's necklace kept fooling it, as did the differences in the orientation of the faces,
I think.  It also might have helped if the training data had included more women.