Eigenfaces by Ian

Overall, I'm pretty happy with the whole system. Of course, it's pretty annoying to me that I could have done the entire thing in Matlab in about an hour (or maybe less), but instead I had to mess around with Visual Studio. Anyway, let's get right to the required stuff.


Recognition

First, I computed the average face and 10 eigenfaces from the nonsmiling photos.
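The writeup doesn't show the computation, but here is a minimal sketch of one standard way to get the average face and eigenfaces: flatten each image into a row vector, subtract the mean, and take the top principal directions via SVD. The function name and the use of NumPy are my own choices, not from the original code.

```python
import numpy as np

def compute_eigenfaces(images, num_eigenfaces=10):
    """Compute the average face and top eigenfaces from a stack of
    flattened face images (one image per row)."""
    images = np.asarray(images, dtype=np.float64)
    mean_face = images.mean(axis=0)
    centered = images - mean_face
    # SVD of the centered data: the rows of Vt are the principal
    # directions (eigenfaces), already unit-length and orthogonal.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return mean_face, vt[:num_eigenfaces]
```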

Then, I tried to recognize everyone in the smiling photos. I tried using different numbers of eigenfaces between 1 and 31. (The assignment asked for 33, but 32 eigenfaces and an average face can represent all 33 images.)
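Recognition with eigenfaces boils down to projecting each face into eigenspace and picking the nearest gallery face there. A sketch, assuming flattened image vectors and Euclidean distance in coefficient space (the distance metric isn't stated in the writeup, so that's an assumption):

```python
import numpy as np

def project(face, mean_face, eigenfaces):
    """Project a flattened face onto the eigenfaces (rows of a k-by-N
    matrix), giving a k-dimensional coordinate vector."""
    return eigenfaces @ (face - mean_face)

def recognize(query, gallery, mean_face, eigenfaces):
    """Return the index of the gallery face whose eigenspace
    coordinates are closest (Euclidean) to the query's."""
    q = project(query, mean_face, eigenfaces)
    dists = [np.linalg.norm(q - project(g, mean_face, eigenfaces))
             for g in gallery]
    return int(np.argmin(dists))
```

Varying `num_eigenfaces` in the earlier step is what produces the table below: more eigenfaces means a higher-dimensional projection and (usually) fewer confusions.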

Anyway, I don't have Microsoft Excel, and this isn't a massive amount of data that needs graphical visualization, so here is a table of the results.

Number of Eigenfaces    1   3   5   7   9  11  13  15  17  19  21  23  25  27  29  31
Correct Recognitions    3  12  18  18  20  21  19  19  20  22  23  23  23  23  23  23

In general, it seems to work better with more eigenfaces. This sort of makes sense: if we just compared the faces pixel by pixel, without using eigenfaces at all, we should get decent results. However, that isn't really practical when the number of faces goes into the thousands. Fortunately, using a small number of eigenfaces (11, say) worked almost as well as using all of them.

There could also be some sort of overfitting at play here. It's possible that using eigenfaces captures the "essence" of a face better than just comparing pixel values. Changing a face from a frown to a smile changes many of the pixels, but perhaps this change can be represented by a very small change in eigenspace. If so, using a smaller number of eigenfaces could actually work better than using all of them. That didn't happen here, but the sample size we used is way too small to be meaningful.


Here are some mistakes.

Query Face (nonsmiling)    Best Match (smiling)    Correct Face (smiling)

Perfectly reasonable, if you ask me. I can just barely do this myself for the cropped faces. Also, the difference between a nonsmiling and smiling face on a single person is often much greater than the difference between the nonsmiling faces of two different people. I'm actually amazed that eigenfaces did as well as it did. I would have guessed that recognizing the same person under different facial expressions requires extracting spatial features from the images.


Face Finding

Using 10 eigenfaces (25-by-25), I tried to locate faces in a few images.

I used a min scale of 0.45, a max scale of 0.55, and a 0.01 step for this.


I used a min scale of 0.15, a max scale of 0.25, and a 0.01 step for this.


I used a min scale of 0.95, a max scale of 1.05, and a 0.01 step for this.


I used a min scale of 0.5, a max scale of 1.0, and a 0.05 step for this.
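The search above can be sketched as a sliding window over a range of scales, scoring each window by its distance from face space (the residual left after reconstructing the patch from its eigenface coordinates). The writeup doesn't spell out the scoring or resampling details, so this is only a sketch under those assumptions; crude integer-stride subsampling stands in for a proper image resize.

```python
import numpy as np

def face_score(patch, mean_face, eigenfaces):
    """Distance of a flattened patch from face space: project onto the
    eigenfaces, reconstruct, and measure the residual norm."""
    centered = patch - mean_face
    coords = eigenfaces @ centered
    residual = centered - eigenfaces.T @ coords
    return float(np.linalg.norm(residual))

def find_face(image, mean_face, eigenfaces, size, scales):
    """Scan every window position at every scale; return the
    (scale, row, col) of the window with the lowest residual."""
    best_score, best_loc = np.inf, None
    for s in scales:
        # Integer-stride subsampling as a stand-in for resizing by s.
        step = max(1, int(round(1 / s)))
        small = image[::step, ::step]
        h, w = small.shape
        for r in range(h - size + 1):
            for c in range(w - size + 1):
                patch = small[r:r+size, c:c+size].ravel().astype(float)
                score = face_score(patch, mean_face, eigenfaces)
                if score < best_score:
                    best_score, best_loc = score, (s, r, c)
    return best_loc
```

Scoring by residual rather than by distance to a particular face is what lets this find faces that aren't in the training set at all.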


This last example (from Ocean's Eleven) shows that this technique is both racist and ageist. Seriously, though, our class isn't that diverse. The eigenface database was created using a bunch of 25-year-old white and Asian male faces. It's not terribly surprising that it fails to detect Elliott Gould and Don Cheadle. Elliott Gould's weird sunglasses don't help matters either.

Oh yeah, I also implemented the extra credit speedup. Without it, I almost certainly wouldn't have been able to create those beautiful 128-by-128 eigenfaces at the top of the page.
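The writeup doesn't say which speedup this is, but the standard one for large eigenfaces is the Turk–Pentland "snapshot" trick, so here is a sketch of it under that assumption. For a 128-by-128 image the covariance matrix is 16384-by-16384, but with M training images (M much smaller than the pixel count) you can instead diagonalize the small M-by-M Gram matrix and map its eigenvectors back to pixel space.

```python
import numpy as np

def eigenfaces_snapshot(images, num_eigenfaces):
    """Turk-Pentland 'snapshot' trick: with M images of N pixels each
    (M << N), diagonalize the small M-by-M matrix A A^T instead of the
    huge N-by-N covariance A^T A. The nonzero eigenvalues coincide,
    and left-multiplying by A^T maps the small eigenvectors back to
    pixel-space eigenfaces."""
    A = np.asarray(images, dtype=np.float64)
    mean_face = A.mean(axis=0)
    A = A - mean_face
    small = A @ A.T                          # M x M, cheap to diagonalize
    vals, vecs = np.linalg.eigh(small)       # eigenvalues in ascending order
    order = np.argsort(vals)[::-1][:num_eigenfaces]
    faces = A.T @ vecs[:, order]             # N x k, back in pixel space
    faces /= np.linalg.norm(faces, axis=0)   # renormalize each eigenface
    return mean_face, faces.T
```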