Patrick Mar’s Project 4 Write Up

 

Testing recognition with cropped class images

 

 

Average face

 

Eigenfaces

 

 

Questions

  1. Describe the trends you see in your plots.   Discuss the tradeoffs; how many eigenfaces should one use?  Is there a clear answer?  

The number of correct recognitions is fairly low when there are fewer than 7 eigenfaces.  At around 10-11 eigenfaces, the curve tends to plateau, with only small variations afterwards.  Obviously, the running time gets worse as we increase the number of eigenfaces.  Therefore, if running time is an issue, about 10 to 15 eigenfaces will suffice to produce reasonably good results.  This answer will vary depending on the sample of training images, among other conditions.
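To make the plot concrete, below is a rough Python/NumPy sketch (not my actual project code; the array names and shapes are assumptions) of the experiment behind it: build the eigenfaces from the training set, project everything into face space using the top k eigenfaces, classify each test face by its nearest training face, and sweep k.

import numpy as np

# Assumed setup: `train` and `test` are (num_images, H*W) arrays of flattened
# grayscale faces, and `train_labels` / `test_labels` give each image's identity.
def recognition_rate(train, train_labels, test, test_labels, k):
    mean = train.mean(axis=0)
    A = train - mean                              # centered training faces
    _, _, Vt = np.linalg.svd(A, full_matrices=False)
    eigenfaces = Vt[:k]                           # rows of Vt are the eigenfaces
    train_coeffs = A @ eigenfaces.T               # face-space coefficients of the training set
    test_coeffs = (test - mean) @ eigenfaces.T
    correct = 0
    for coeffs, label in zip(test_coeffs, test_labels):
        dists = np.linalg.norm(train_coeffs - coeffs, axis=1)
        if train_labels[np.argmin(dists)] == label:
            correct += 1
    return correct / len(test_labels)

# Sweeping k reproduces the accuracy-vs-number-of-eigenfaces curve:
# rates = [recognition_rate(train, train_labels, test, test_labels, k)
#          for k in range(1, 33)]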

  2. You likely saw some recognition errors in step 3; show images of a couple.  How reasonable were the mistakes?  Did the correct answer at least appear highly in the sorted results?  

Mistake 1: [image] was mistaken for [image].

Mistake 2: [image] was mistaken for [image].

The first of the above mistakes is reasonable: the expressions and facial features have many similarities.  The second mistake is more questionable, although both faces have similar curves (such as around the mouth) that may have contributed to the algorithm's failure.  The good thing is that, most of the time, the correct answer appeared in the top three or four choices.  
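Here is a small follow-up sketch (same assumed arrays as the previous sketch, not my project code) showing how the sorted results can be inspected to find where the correct answer ranks for a given query face.

import numpy as np

def match_ranking(query_coeffs, train_coeffs, train_labels):
    # Return the training labels sorted from best match (smallest distance) to worst.
    dists = np.linalg.norm(train_coeffs - query_coeffs, axis=1)
    return [train_labels[i] for i in np.argsort(dists)]

# Example: where does the true identity of test face 7 land in the ranking?
# ranking = match_ranking(test_coeffs[7], train_coeffs, train_labels)
# rank_of_truth = ranking.index(test_labels[7]) + 1   # 1 = top choice, 3 = third, etc.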

 

 

 

Cropping and finding faces

 

Elf picture original  Cropped face

 

 

Ok, this is spooky…

 

Original picture of me  Cropped  

 

In my attempt to crop my own face, I decided to take a risk.  I wanted to know how the algorithm would deal with a face seen from a different angle when all the training images are frontal views.  I used 0.1, 0.3, and 0.1 for min_scale, max_scale, and scale respectively.  As you can see, the cropped image is an area on the wall.  However, if you look closely at the cropped image, you will immediately see why the algorithm might have been fooled.  To the lower left of the cropped image, there are some darker, shadowy spots on the wall.  These spots seem to form a ghostly looking face: one can make out the rough outlines of two eyes, a nose, and part of a mouth.  So perhaps, given the frontal views in the training images, the algorithm is actually working.    
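For anyone curious how the scale parameters come into play, here is a rough Python sketch of the kind of multi-scale search used here (the 32 x 25 window size, the use of distance-from-face-space as the "faceness" score, and the helper names are assumptions for the sketch, not my actual code).

import numpy as np

FACE_H, FACE_W = 32, 25   # assumed size of the training faces

def faceness(window, mean, eigenfaces):
    # Lower is more face-like: distance between the window and its
    # reconstruction from the eigenfaces (distance from face space).
    v = window.reshape(-1) - mean
    coeffs = eigenfaces @ v
    return np.linalg.norm(v - eigenfaces.T @ coeffs)

def find_best_face(image, mean, eigenfaces, min_scale, max_scale, step):
    best = None
    for scale in np.arange(min_scale, max_scale + 1e-9, step):
        # Nearest-neighbour downscale so the sketch needs no extra libraries.
        ys = (np.arange(int(image.shape[0] * scale)) / scale).astype(int)
        xs = (np.arange(int(image.shape[1] * scale)) / scale).astype(int)
        small = image[np.ix_(ys, xs)]
        for r in range(small.shape[0] - FACE_H + 1):
            for c in range(small.shape[1] - FACE_W + 1):
                score = faceness(small[r:r + FACE_H, c:c + FACE_W], mean, eigenfaces)
                if best is None or score < best[0]:
                    # Map the window back to original-image coordinates.
                    best = (score, int(r / scale), int(c / scale), scale)
    return best   # (score, row, col, scale) of the most face-like window

With min_scale=0.1, max_scale=0.3, and step=0.1, this scans the image at 10%, 20%, and 30% of its original size, which is how a face-sized shadowy patch on the wall can win if it scores better than my actual (angled) face.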

 

 

Group Image 1

In this group photo, my algorithm had problems finding the faces.  It found Jon's face rather well but completely missed the other two faces.  Because of my overlap error, I had it report many candidate windows just to give the algorithm more chances, but it was fooled by the folds of the leather jacket.  
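As a side note, here is a rough sketch (not my actual overlap code, which is where the bug lives) of the greedy overlap rejection I was aiming for: keep the best-scoring windows and skip any candidate that overlaps an already-kept window too much.

def pick_faces(candidates, n, max_overlap=0.25):
    # candidates: list of (score, row, col, height, width); lower score = better.
    def overlap(a, b):
        _, r1, c1, h1, w1 = a
        _, r2, c2, h2, w2 = b
        inter_h = max(0, min(r1 + h1, r2 + h2) - max(r1, r2))
        inter_w = max(0, min(c1 + w1, c2 + w2) - max(c1, c2))
        return (inter_h * inter_w) / float(min(h1 * w1, h2 * w2))

    kept = []
    for cand in sorted(candidates):            # best (smallest) score first
        if all(overlap(cand, k) <= max_overlap for k in kept):
            kept.append(cand)
        if len(kept) == n:
            break
    return kept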

 

 

 

Group Image 2

Unfortunately, I wasn’t able to get rid of overlapping, but the face detection worked out fairly accurately.  (Note: If you decide to test this with my code, please use n=15, not 3.  Due to the overlapping, you will not get all three faces if you use too few windows.)

 

Group Image 3 Original

 

Group Image 3 Result

Above is a group image from my high school years.  The darkness of the picture might have affected the results.  Unfortunately, the algorithm was fooled by a patch of hair and correctly identified only one of the five faces.  

 

 

Questions

  1. What min_scale, max_scale, and scale step did you use for each image?  

Picture of just me: 0.1, 0.3, 0.1

Group 1: 0.5, 0.5, 0.01

Group 2: 0.5, 0.5, 0.01

Group 3: 0.3, 0.5, 0.05

 

  2. Did your attempt to find faces result in any false positives and/or false negatives?  Discuss each mistake, and why you think they might have occurred.

As I mentioned before, group image 2 (the test image) worked fairly well (disregarding the overlapping).  The elf image was also cropped really well.  The faces in the other images were difficult to detect, though.  In some cases, it may simply have been that I didn't choose the window sizes wisely.  Another reason could be that I didn't account for lighter areas; this is hard to handle because you can't simply discount bright regions, since they may very well contain a face.  We also see that features of the environment can play a role when there are things in the image that resemble a face.  This is true for the picture of me (the shape on the wall) as well as the folds of the jacket in group image 1.  The poor matching in group image 3 might be due to the lighting of the hair against the dark background.  Also, I noticed that people who appear in the training images tend to have a better chance of being detected, as one would intuitively expect.
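One fix I could try for the bright/uniform-area problem (just a sketch of the idea, not something I implemented) is to normalize every candidate window to zero mean and unit variance before scoring it, so that overall brightness and low-contrast patches such as walls, jacket folds, and hair contribute less to the distance-from-face-space score.  The training faces would need the same normalization.

import numpy as np

def normalized_window(window, eps=1e-6):
    # Zero-mean, unit-variance version of a candidate window.
    w = window.astype(float)
    return (w - w.mean()) / (w.std() + eps)

# score = faceness(normalized_window(window), mean, eigenfaces)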

 

Extra Credit

 

I implemented verify_face() and used the following line to test it:

 

main verifyface smiling_cropped/27.tga base.user nonsmiling_cropped/27 eigenfaces.face 500

 

The algorithm correctly verified that [image: smiling_cropped/27] is the same person as [image: nonsmiling_cropped/27], which is the result we wanted.
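For completeness, here is a sketch of the idea behind verify_face as I understand it (assumptions: the two faces are compared by the mean squared error between their eigenface coefficients, and the 500 on the command line above is that MSE threshold).

import numpy as np

def verify_face(face, stored_coeffs, mean, eigenfaces, threshold=500.0):
    # Project the (already cropped and resized) face onto the eigenfaces and
    # compare its coefficients against the stored user's coefficients.
    coeffs = eigenfaces @ (face.reshape(-1) - mean)
    mse = np.mean((coeffs - stored_coeffs) ** 2)
    return mse < threshold   # True means "same person"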