Project #3 Report

Brigette Huang

Testing recognition with cropped class images

Procedure 1.

The first procedure is to use the non-smiling student images to compute 10 eigenfaces.  Here are my non-smiling ghosts.

From left to right are the average face and EigenGhost #0 through #4; the second row shows Ghost #5 through #9. The images have been enlarged to 100 x 100 pixels.

[Images: average_faceb.tga and eigen_face_0b.tga through eigen_face_9b.tga]
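For reference, here is a minimal sketch of how the average face and the eigenfaces can be computed with PCA, assuming the non-smiling images are already loaded as flattened grayscale vectors. The function and variable names (compute_eigenfaces, faces) are my own placeholders, not the skeleton code's, and the skeleton may perform the decomposition differently.

```python
import numpy as np

def compute_eigenfaces(faces, num_eigenfaces=10):
    """Average face and top eigenfaces from a stack of flattened
    grayscale images (shape: num_faces x num_pixels)."""
    average_face = faces.mean(axis=0)
    centered = faces - average_face              # subtract the mean face

    # SVD of the centered stack: the rows of vt are the eigenvectors
    # of the pixel covariance matrix, i.e. the eigenfaces.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return average_face, vt[:num_eigenfaces]

# Stand-in data: 33 "students", 25x25 images flattened to 625 pixels.
faces = np.random.rand(33, 25 * 25)
avg, eigs = compute_eigenfaces(faces)
print(avg.shape, eigs.shape)                     # (625,) (10, 625)
```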

Procedure 3.

In this procedure, we use the user database created in Procedure 2 to test how well the program recognizes faces as the number of eigenfaces varies.   First, let us look at the result data and the graph showing the trend:

# Eigenfaces   # Recognized      # Eigenfaces   # Recognized      # Eigenfaces   # Recognized
1              5                 13             17                25             22
3              10                15             18                27             22
5              13                17             20                29             22
7              17                19             21                31             22
9              19                21             22                33             22
11             18                23             23

[Graph: graph1.jpg, students recognized vs. number of eigenfaces]

From the line graph attached above, we can see:

  1. The number of students recognized grows quickly for the first few eigenfaces and then levels off.  This was not obvious to me; I would have expected a more linear trend.  However, since the eigenfaces are least-mean-square-error solutions of an N-dimensional linear system, even a single eigenface already spans a major direction in face space.  (A sketch of the projection-and-matching step is given after this list.)
  2. The trend shows the potential for data compression. The number recognized roughly doubles between the first two data points, and later growth slows to around 10 to 20% per step. Therefore, with a very small set of eigenfaces we can already recognize most of the key features of the various faces. This is great news in terms of database overhead if we need to implement some sort of face-recognition security system.
  3. The errors in this procedure.  Certain people were never recognized correctly in the experiment; for example, Downey was recognized as Eckart, Hu was always called Gau, Ko was recognized as Zhang, Su as Lester, etc.  Part of these errors are my own fault, since I downloaded the files a long time ago and never applied the bug fixes that were announced. In addition, it is quite likely that a person showing a different expression can end up closer to someone else's face in face space.  Therefore, the errors are reasonable.
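To make point 1 concrete, here is a rough sketch of the projection-and-nearest-neighbor matching I have in mind, using the same NumPy conventions as the earlier sketch. The userbase layout (a dict of name to coefficient vector) and the function names are hypothetical; the actual skeleton stores users differently.

```python
import numpy as np

def project(face, average_face, eigenfaces, k):
    """Coefficients of a flattened face on the first k eigenfaces."""
    return eigenfaces[:k] @ (face - average_face)

def recognize(face, average_face, eigenfaces, userbase, k):
    """Return the name whose stored coefficients are closest (in MSE)
    to the query face, using only the first k eigenfaces."""
    query = project(face, average_face, eigenfaces, k)
    best_name, best_mse = None, float("inf")
    for name, coeffs in userbase.items():        # coeffs: full-length vectors
        mse = np.mean((coeffs[:k] - query) ** 2)
        if mse < best_mse:
            best_name, best_mse = name, mse
    return best_name
```

Re-running recognize for every student while increasing k gives the kind of sweep shown in the table above.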

 

Cropping and finding faces

Procedure 1. Cropping faces

As required, we need to crop Aseem's face and one of my own images. The results are both quite reasonable. Attached are the "Before" and "After" pictures...

[Cropped faces: crop_me.tga, crop_aseem.tga]

[Original images: me.tga, aseem.tga]

Name       min_scale   max_scale   step
Brigette   0.4         0.5         0.01
Aseem      0.45        0.55        0.1
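The min_scale / max_scale / step parameters above suggest a brute-force multi-scale scan. Below is a hedged sketch of that idea: resize the image at each scale, slide a face-sized window, and keep the window with the lowest reconstruction error against the eigenfaces. The resize helper and the error measure are my own stand-ins; the skeleton code may score windows differently.

```python
import numpy as np

def resize_nearest(img, scale):
    """Nearest-neighbour resize of a 2-D grayscale image (stand-in helper)."""
    rows = (np.arange(int(img.shape[0] * scale)) / scale).astype(int)
    cols = (np.arange(int(img.shape[1] * scale)) / scale).astype(int)
    return img[rows][:, cols]

def face_error(window, average_face, eigenfaces):
    """Reconstruction MSE of a window projected onto the eigenfaces;
    a low error means the window looks face-like."""
    v = window.ravel() - average_face
    recon = eigenfaces.T @ (eigenfaces @ v)
    return np.mean((v - recon) ** 2)

def find_best_face(img, average_face, eigenfaces, face_shape,
                   min_scale, max_scale, step):
    """Scan over scales and positions; return (scale, row, col) of the
    face_shape window with the lowest error."""
    fh, fw = face_shape
    best, best_err = None, float("inf")
    for scale in np.arange(min_scale, max_scale + 1e-9, step):
        scaled = resize_nearest(img, scale)
        for r in range(scaled.shape[0] - fh + 1):
            for c in range(scaled.shape[1] - fw + 1):
                err = face_error(scaled[r:r+fh, c:c+fw],
                                 average_face, eigenfaces)
                if err < best_err:
                    best, best_err = (scale, r, c), err
    return best
```

With my parameters, for example, the loop tries scales 0.40, 0.41, ..., 0.50 and picks the single best window over all of them.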

 

Procedure 2. Boxing faces

The next fun experiment is to box faces! We need to try to box up three of our classmates' faces, and even more, some of our friends' faces... this is a real challenge! Let's see the results.

[Marked results: mark_g1.tga, mark_g2.tga]

Picture    min_scale   max_scale   step
Group1     1.0         1.1         0.1
Grad 1     1.2         1.4         0.05

Notice that I knew the faces in the Group1 picture are at roughly scale one, so I let the machine take a break by searching fewer scales.  For the second group image I asked for 5 faces, and one of the boxes ended up on a pair of jeans. As mentioned in the project instructions, bright and blank areas do confuse the program a bit.
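Since the search itself is the same multi-scale scan as in the cropping sketch, the extra ingredient for boxing several faces is how the candidates are kept. Here is one way to do it that matches my mental model (a greedy pick of the lowest-error, non-overlapping boxes); this is only my own illustration, not the skeleton's actual selection rule.

```python
def pick_top_faces(candidates, num_faces, face_shape):
    """Greedily keep the num_faces lowest-error candidates whose boxes
    do not overlap a box that was already kept.

    candidates: list of (error, row, col) tuples in original-image
    coordinates, e.g. gathered during the multi-scale scan above.
    """
    fh, fw = face_shape
    chosen = []
    for err, r, c in sorted(candidates):
        overlaps = any(abs(r - cr) < fh and abs(c - cc) < fw
                       for _, cr, cc in chosen)
        if not overlaps:
            chosen.append((err, r, c))
        if len(chosen) == num_faces:
            break
    return [(r, c) for _, r, c in chosen]
```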

A few points about my experiments:

  1. Scaling does matter.  I found that the program works best in the scale range of 0.5 to 1.5. I did not have much luck with very large or very small face images.

  2. No luck on images with complex backgrounds.   I tried the program on a few outdoor images with trees, leaves, etc.  No matter how precisely I scaled the search, it still found none of the faces.

 

VerifyFaces Implementation

Implementing the verifyFaces function is not a big job; after all, it only requires about 4 lines of code. The technique is to threshold the MSE to decide whether the face belongs to the claimed user.
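As a rough illustration of those few lines, here is a minimal sketch in NumPy. I am assuming the MSE is taken between the face's eigenface coefficients and the claimed user's stored coefficients; verify_face and user_coeffs are names I made up, and the skeleton's actual signature differs.

```python
import numpy as np

def verify_face(face, average_face, eigenfaces, user_coeffs, max_mse):
    """Accept the face if its eigenface coefficients are within max_mse
    (mean squared error) of the claimed user's stored coefficients."""
    coeffs = eigenfaces @ (face - average_face)   # project into face space
    mse = np.mean((coeffs - user_coeffs) ** 2)
    return mse < max_mse
```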

I ran simulations sweeping the MSE threshold from 2000.0 to 19000.0 to determine the best value.  Attached below are the resulting data and trend graph:

MSE threshold   # Correctly verified      MSE threshold   # Correctly verified      MSE threshold   # Correctly verified
2000            0                         8000            6                         14000           10
3000            1                         9000            7                         15000           11
4000            2                         10000           9                         16000           11
5000            2                         11000           9                         17000           11
6000            5                         12000           10                        18000           11
7000            5                         13000           10                        19000           11

 

[Graph: msegraph.jpg, correctly verified faces vs. MSE threshold]

We can see that the number of correctly verified faces grows roughly linearly with the MSE threshold until it levels off. A cut-off MSE value between 9000 and 10000 appears best, as shown in the graph.  Even at that value we still get some false positive cases, for the same reason considered in the earlier recognition experiment.