CSE 455 Project 4: Photometric Stereo

Assigned: Friday, February 26
Due: Friday, March 12
This project is to be done individually.

In this project, you will implement a photometric stereo algorithm. Photometric stereo is a method for reconstructing the depth map (the surface) and albedo of an object from several photographs of the object in a fixed position but under different lighting conditions. Your job will be to write MATLAB functions that calibrate the lighting directions, find the best-fit normal and albedo at each pixel, and finally find a surface that best matches the solved normals. All of these parts were explained in class; please refer to your notes and slides for detailed explanations. Here is an example result of the algorithm:


We have provided six different datasets of diffuse (or nearly diffuse) objects for you to try out:

They can be downloaded here (in tga format) or here (in png format).

After you unzip it, you will see seven directories. They correspond to the six objects above and a chrome ball. Each directory contains 12 images of the object, each photographed under a different lighting direction. The chrome ball images should be used to compute the lighting directions used when computing normal maps for the other six objects.

Under each directory there is also a mask image. The mask indicates which pixels are on the object; you only need to estimate normals and compute height values for these pixels.

From the chrome ball images, you should estimate the lighting directions.


Code to Write

1.      (15%) Calibration: Before we can extract normals from images, we have to calibrate our capture setup. This includes determining the lighting intensity and direction, as well as the camera response function. For this project, we have already taken care of two of these tasks: First, we have linearized the camera response function, so you can treat pixel values as intensities. Second, we have balanced all the light sources to each other, so that they all appear to have the same brightness. You'll be solving for albedos relative to this brightness, which you can just assume is 1 in some arbitrary units. In other words, you don't need to think too much about the intensity of the light sources.

The one remaining calibration step we have left for you is calibrating lighting directions. One method of determining the direction of point light sources is to photograph a shiny chrome sphere in the same location as all the other objects. Since we know the shape of this object, we can determine the normal at any given point on its surface, and therefore we can also compute the reflection direction for the brightest spot on the surface.

Write a function that takes as input the file chrome.txt (see the Data section), reads the sphere images and the mask, and returns the 3x12 matrix L that specifies the lighting direction for each image (there are 12 images per object; see the Data section).

L = recoverLighting('chrome.txt');
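You will write this in MATLAB; purely as an illustration of the chrome-sphere geometry, here is a rough numpy sketch of the core computation for a single image. The helper name, array shapes, and the orthographic-camera assumption (viewer direction V = [0, 0, 1]) are ours, not part of the handout:

```python
import numpy as np

def sphere_light_direction(mask, img):
    """Estimate one lighting direction from a chrome-sphere image.

    mask, img: 2-D arrays of the same shape. Assumes an orthographic
    camera looking down the z axis, so the view vector is V = [0, 0, 1].
    """
    ys, xs = np.nonzero(mask)
    cx, cy = xs.mean(), ys.mean()            # sphere center from the mask
    r = (xs.max() - xs.min() + 1) / 2.0      # sphere radius in pixels

    # The brightest masked pixel is the specular highlight.
    hy, hx = np.unravel_index(np.argmax(img * (mask > 0)), img.shape)

    # Surface normal at the highlight (image y grows downward).
    nx = (hx - cx) / r
    ny = -(hy - cy) / r
    nz = np.sqrt(max(1.0 - nx**2 - ny**2, 0.0))
    n = np.array([nx, ny, nz])

    # Mirror reflection of the viewing ray: L = 2 (n . V) n - V.
    v = np.array([0.0, 0.0, 1.0])
    L = 2.0 * n.dot(v) * n - v
    return L / np.linalg.norm(L)
```

Repeating this for all 12 chrome images and stacking the results as columns gives the 3x12 matrix L.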

Write a function that takes as input the txt file that specifies the names of the images (see the Data section) and outputs the images and the mask:


2.      (25%) Recovering surface normals: The appearance of diffuse objects can be modeled as

    I = kd (L . n)

where I is the pixel intensity, kd is the albedo, L is the lighting direction (a unit vector), and n is the unit surface normal. (Since our images are already balanced as described above, we can assume the incoming radiance from each light is 1.) Assuming a single color channel, we can rewrite this as

    I = L . g,    where g = kd n

so the unknowns are gathered into a single vector. With three or more image samples under different lighting, we can solve for g by solving a linear least squares problem (for this you need to write the above equation in matrix form, as we did in class). The objective function is:

    Q = sum_i (I_i - L_i . g)^2

To help deal with shadows and noise in dark pixels, it's helpful to weight each term of the solution by the pixel intensity: in other words, multiply by I_i:

    Q = sum_i (I_i (L_i . g) - I_i^2)^2

The objective Q is then minimized with respect to g. Once we have the vector g = kd n, its length is kd and its normalized direction gives n.

Note that weighting each term by the image intensity reduces the influence of shadowed regions; however, it has the drawback of overweighting saturated pixels, due to specular highlights, for example.
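Written out, the weighted least-squares system for one pixel, stacked over the k images, looks like the following (this is one way to sketch the matrix form referred to above; each row is a lighting direction scaled by that image's intensity):

```latex
\underbrace{\begin{bmatrix}
I_1 L_1^{\top} \\
I_2 L_2^{\top} \\
\vdots \\
I_k L_k^{\top}
\end{bmatrix}}_{k \times 3}
g
=
\begin{bmatrix}
I_1^{2} \\
I_2^{2} \\
\vdots \\
I_k^{2}
\end{bmatrix}
```

Solving this system in the least-squares sense at each masked pixel (MATLAB's backslash operator will do this) gives g = kd n.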

Write a function that takes as input the images, the mask, and the lighting matrix from part 1, estimates the surface normals, and outputs them:

N = recoverNormals(imgs,mask,L);
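Again, the required deliverable is a MATLAB function; as a language-agnostic reference, the per-pixel weighted least-squares solve can be sketched in numpy as follows. The array shapes are our own assumption (imgs is k x H x W, L is 3 x k with one column per image):

```python
import numpy as np

def recover_normals(imgs, mask, L):
    """Per-pixel weighted least squares for g = kd * n.

    imgs: (k, H, W) image stack, mask: (H, W), L: (3, k) lighting
    directions, one column per image.
    """
    k, H, W = imgs.shape
    N = np.zeros((H, W, 3))
    for y, x in zip(*np.nonzero(mask)):
        I = imgs[:, y, x]
        A = I[:, None] * L.T          # row i is I_i * L_i^T (intensity weighting)
        b = I * I                     # right-hand side I_i^2
        g, *_ = np.linalg.lstsq(A, b, rcond=None)
        kd = np.linalg.norm(g)
        if kd > 0:
            N[y, x] = g / kd          # unit normal; kd is the albedo magnitude
    return N
```

In MATLAB the inner solve is again just the backslash operator on the weighted system.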


3.      (25%) Recovering the albedo: The previous step gives a way to get the normal and albedo for one color channel. Once we have a normal n for each pixel, we can solve for the albedos by another least squares solution, but this one ends up being a simple projection. The objective function is

    Q = sum_i (I_i - kd (L_i . n))^2

To minimize it, differentiate with respect to kd and set the result to zero:

    kd = sum_i I_i (L_i . n) / sum_i (L_i . n)^2

This can be done for each channel independently to obtain a per-channel albedo.

Write a function that takes as input the images, the mask, the recovered surface normals, and the lighting matrix, and outputs the albedos:

albedos = recoverAlbedos(imgs,mask,L,N);
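The projection is a one-liner per pixel; a numpy sketch for a single channel, with the same assumed shapes as the previous sketch:

```python
import numpy as np

def recover_albedos(imgs, mask, L, N):
    """Per-pixel albedo by projection: kd = sum_i I_i s_i / sum_i s_i^2,
    where s_i = L_i . n is the predicted shading in image i.

    imgs: (k, H, W), mask: (H, W), L: (3, k), N: (H, W, 3) unit normals.
    """
    k, H, W = imgs.shape
    albedo = np.zeros((H, W))
    for y, x in zip(*np.nonzero(mask)):
        I = imgs[:, y, x]
        s = L.T @ N[y, x]             # shading L_i . n for each image
        denom = s @ s
        if denom > 0:
            albedo[y, x] = (I @ s) / denom
    return albedo
```

For color images, run this once per channel with the same normals N.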
4.      (25%) Recovering the surface from normals: As described in class, we have two constraints for each pixel inside the object:

    -nx = nz z(x+1,y) - nz z(x,y)
    -ny = nz z(x,y+1) - nz z(x,y)

    and one constraint for pixels on the boundary:

    nx z(x,y+1) - nx z(x,y) = ny z(x+1,y) - ny z(x,y)

    All of these can be written as a matrix equation Mz = b and solved using least squares. Note that the matrix M can be very large; however, most of its entries are zero -- M is a very sparse matrix. In MATLAB you can define the matrix to be sparse.

    Write a function that takes as input the surface normals and the mask of the object, and outputs the surface depth map:
surface = normals2surface(N,mask);
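To illustrate the constraint assembly, here is a numpy sketch. For readability it builds a dense M and only emits the two interior constraints where the right/lower neighbor is also inside the mask; your real MATLAB implementation should use sparse matrices and also add the boundary constraint above. The y sign convention depends on how your normals were read, so treat it as an assumption here:

```python
import numpy as np

def normals_to_surface(N, mask):
    """Solve M z = b in the least-squares sense for the depth map.

    N: (H, W, 3) unit normals, mask: (H, W). Returns Z: (H, W).
    """
    H, W = mask.shape
    npix = int(np.count_nonzero(mask))
    idx = -np.ones((H, W), dtype=int)     # pixel -> unknown index
    idx[mask > 0] = np.arange(npix)
    M = np.zeros((2 * npix, npix))        # dense for clarity; use sparse for real images
    b = np.zeros(2 * npix)
    r = 0
    for y, x in zip(*np.nonzero(mask)):
        nx, ny, nz = N[y, x]
        if x + 1 < W and mask[y, x + 1]:  # nz z(x+1,y) - nz z(x,y) = -nx
            M[r, idx[y, x + 1]] = nz
            M[r, idx[y, x]] = -nz
            b[r] = -nx
            r += 1
        if y + 1 < H and mask[y + 1, x]:  # nz z(x,y+1) - nz z(x,y) = -ny
            M[r, idx[y + 1, x]] = nz
            M[r, idx[y, x]] = -nz
            b[r] = -ny
            r += 1
    z, *_ = np.linalg.lstsq(M[:r], b[:r], rcond=None)
    Z = np.zeros((H, W))
    Z[mask > 0] = z
    return Z
```

Note that z is only determined up to an additive constant (shifting the whole surface changes nothing), which is why the system is solved in the least-squares sense.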


Assignment Writeup (10%)

For this project we are asking you to submit a document (HTML/WORD/PDF) that shows the results and discusses the implementation and the algorithm. In particular, your document should include:

  1. For each object, your results: the recovered normals, the albedo map, and views of the reconstructed surface.
  2. Discussion of the results: what worked and what didn't? Which objects were reconstructed better or worse, and why? Discuss the issues you had with the algorithm and how you would improve it.
  3. What will happen if you run the algorithm with fewer images, or with a different choice of images? How are the reconstruction results affected?
  4. What will happen if you use all the pixels instead of only the pixels in the mask? (Discuss each part of the algorithm.)


Extra Credit

Extra credit problems do not count towards your score for the project, but are accumulated separately. One star (*) is awarded for easier extra credit problems, with more stars for more difficult problems. At the end of the course, the stars you've earned will be used to adjust your final grade upward by at most 5%.

         (*) Modify the objective function for surface reconstruction and rewrite the matrix accordingly. One idea is to include a "regularization" (smoothing) term; this can sometimes help reduce noise in your solutions. Another idea is to weight each edge constraint by some measure of confidence. For instance, you might want the constraints for dark pixels, where the normal estimate is bad, to be weighted less heavily than those for bright pixels, where the estimate is good.

         (*) Capture your own source images and reconstruct them.

         (**) Try the example-based photometric stereo method (reference 3 below), using the data on the authors' website.


To turn in your project, submit a single zip file to the dropbox at:


The zip file should be named project4.zip and contain all of your M-files and your assignment write up in HTML, WORD or PDF format. The zip file should include any images needed for your write up.


References

  1. R. J. Woodham. Photometric method for determining surface orientation from multiple images. Optical Engineering, 1980.
  2. H. Hayakawa. Photometric stereo under a light source with arbitrary motion. Journal of the Optical Society of America A, 1994.
  3. A. Hertzmann and S. M. Seitz. Example-Based Photometric Stereo: Shape Reconstruction with General, Varying BRDFs. IEEE Trans. PAMI, Vol. 27, No. 8, pp. 1254-1264, August 2005.