Here is the average face and the ten eigenfaces from the first experiment:
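The eigenfaces themselves come from standard PCA on the training faces. A minimal sketch of that computation (function and variable names are my own, not from my actual code), assuming each face is a flattened grayscale row vector:

```python
import numpy as np

def eigenfaces(images, k=10):
    """Compute the average face and the top-k eigenfaces.

    images: (n, h*w) array, one flattened grayscale face per row.
    Returns (mean_face, eigs) where eigs is (k, h*w).
    """
    mean_face = images.mean(axis=0)
    centered = images - mean_face
    # SVD of the centered data; rows of vt are the principal
    # directions (the eigenfaces), sorted by decreasing variance.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return mean_face, vt[:k]

# Toy usage: 20 random 8x8 "faces".
rng = np.random.default_rng(0)
faces = rng.random((20, 64))
mean, eigs = eigenfaces(faces, k=10)
```

The rows returned by the SVD are orthonormal, which is what lets you project a new face onto the eigenfaces with a simple dot product.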
Here are the results from the first set of experiments:
As you can see, the accuracy generally increases with the number of eigenvectors
but then plateaus. I believe this is the result of overfitting the
training data:
after a certain point, the space spanned by the eigenvectors is high-dimensional
enough to capture incidental details of the training faces rather than features
that discriminate among people. Probably the best
way to decide how many eigenvectors to use is to look at the actual eigenvalues
and see if there is a big dropoff somewhere; otherwise, running an experiment
like this one is a reasonable way to choose.
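The eigenvalue-dropoff idea can be made mechanical. A sketch (the `ratio` threshold is an assumption of mine, not something I tuned): keep components up to the first eigenvalue that falls below some fraction of its predecessor.

```python
import numpy as np

def pick_k_by_dropoff(eigenvalues, ratio=0.5):
    """Pick the number of components at the first big dropoff:
    the first index where an eigenvalue falls below `ratio`
    times its predecessor. Falls back to keeping them all."""
    ev = np.asarray(eigenvalues, dtype=float)
    for i in range(1, len(ev)):
        if ev[i] < ratio * ev[i - 1]:
            return i
    return len(ev)

# Example spectrum with a sharp drop after the 4th eigenvalue.
spectrum = [9.0, 7.5, 6.0, 5.5, 1.0, 0.9, 0.8]
k = pick_k_by_dropoff(spectrum)  # -> 4
```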
Here are a few errors from my face detection:
downey confused with eckart
hoyt confused with gauthier
In both cases, the right face was in the top four.
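"Top four" here means ranking the whole gallery by distance to the probe face in eigenface coordinates. A sketch of that ranking, assuming nearest-neighbor matching on the projection coefficients (names and data below are a toy illustration, not my real results):

```python
import numpy as np

def rank_matches(probe_coords, gallery_coords, names):
    """Rank gallery faces by Euclidean distance to the probe
    in eigenface coordinate space (closest first)."""
    d = np.linalg.norm(gallery_coords - probe_coords, axis=1)
    return [names[i] for i in np.argsort(d)]

# Toy example: the correct identity is 2nd-closest,
# i.e. wrong at rank 1 but still in the top four.
gallery = np.array([[0.0, 0.0], [1.0, 1.0], [5.0, 5.0]])
names = ["eckart", "downey", "gauthier"]
probe = np.array([0.4, 0.4])
print(rank_matches(probe, gallery, names))
# -> ['eckart', 'downey', 'gauthier']
```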
I found Aseem's face in this image:
and got this:
at a scale of 0.45.
I don't have a digital picture of myself. I found a random guy on Google
Images, and cropped his portrait:
to get this:
at a scale of 0.28.
Here is the group1 image with the faces I found marked (min_scale = 0.85, max_scale = 1.1, step
= 0.05):
and I also found a random group of girls on Google Images, who turned out
to be hard to detect:
I used a min_scale of 0.68, max_scale of 0.90, and step of 0.02.
I couldn't get the recognition any better than this. The textured areas
and the one girl's necklace kept fooling it, as did the differences in orientation
of the faces,
I think. Also, the results might have been better if the training data had included more women.