Photometric Stereo

Project #3, CSEP 576

Shahid Razzaq

 

   

Summary:

            Overall, I think the samples worked very well.  The conditions in which they were taken were close to ideal.  It would be interesting to see how pictures taken at home would yield a 3D object.  No doubt there are a lot of challenges there, especially:
 

  •  Linearization of the camera response function (with the camera's exposure settings set manually)

  •  Light source balancing: I don't see this being a big issue, though; if you use the same physical light source and only move it, its strength should stay balanced across the different positions.  A rough sketch of how both steps might be handled for home photos follows this list.
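           Below is a minimal sketch, not part of the project code, of how those two steps might look for home-captured photos.  Everything in it is an assumption made for illustration: the gamma value stands in for a properly calibrated response curve, and the per-light scale is estimated from a diffuse gray calibration sphere whose pixel mask is assumed to be known.

    import numpy as np

    GAMMA = 2.2  # assumed response; a real camera would need proper calibration

    def linearize(img_u8):
        # Undo an assumed gamma so pixel values are roughly proportional to radiance.
        return (img_u8.astype(np.float64) / 255.0) ** GAMMA

    def light_scales(images, sphere_mask, pct=99.0):
        # Estimate a relative strength for each light position.  On a diffuse
        # (gray) calibration sphere the brightest pixels face the light, so a
        # high percentile of the masked intensities is roughly proportional to
        # that light's strength.
        return np.array([np.percentile(img[sphere_mask], pct) for img in images])

    def balance(images, scales):
        # Divide each image by its light's relative strength.
        ref = scales.mean()
        return [img * (ref / s) for img, s in zip(images, scales)]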

Other approaches to the 3D problem:

              Instead of having multiple light sources, how about having just one light source and rotating the object about its center by a fixed number of degrees between shots?  The resulting partial 3D models would then have to be stitched together to give an overall 360-degree view of the object.  How well this works would depend a lot on the complexity of the object, but a small rotation per picture in both the left-right and up-down directions should yield a good result.  The sample code would have to do the job of stitching the 3D object; a toy sketch of that stitching step follows.
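              As a rough illustration only (entirely hypothetical, not something the skeleton provides): if each view's partial reconstruction is a point cloud and the turntable step angle and pivot are known, the partial clouds can be rotated back into a common frame before merging.  A real pipeline would still need registration (e.g. ICP) and blending of overlapping surfaces.

    import numpy as np

    def rot_y(deg):
        # Rotation about a vertical (y) axis by `deg` degrees.
        t = np.radians(deg)
        c, s = np.cos(t), np.sin(t)
        return np.array([[  c, 0.0,   s],
                         [0.0, 1.0, 0.0],
                         [ -s, 0.0,   c]])

    def stitch(partials, step_deg, pivot):
        # partials: list of (N_i, 3) point clouds, one per captured view.
        # View i saw the object rotated by i*step_deg, so undo that rotation
        # about the pivot to bring every cloud into a common frame.
        merged = []
        for i, pts in enumerate(partials):
            R = rot_y(-i * step_deg)
            merged.append((pts - pivot) @ R.T + pivot)
        return np.vstack(merged)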

What worked and what didn't:

           Having been shaken by the panorama stitching project, I gave this one its due time, and the results look good.  I don't think there is anything that isn't working: all the normals, albedos and depths match those of the sample.  I also did some extra credit work that I'll explain below.

Results:

Buddha:

       I did not encounter any issues with the Buddha.  The depths were almost the same as the sample's (within roughly +/-1 unit in z), and the normals also matched the sample.

 

Cat:

       As with the Buddha, the cat didn't pose a challenge either; its surface is simpler than the Buddha's, without as many tiny curves.  The albedos and normals are similar to the sample's.  I didn't check the depth explicitly; since it was working for the Buddha, I assumed it would also work here.  Twisting and turning the final 3D model also produced a sensible shape.

 

The Rock:

       I was wary of the rock, as I thought it would have some black spots that would be sampled out of the image given by the skeleton code, but that didn't happen.  In fact, the output came out fine.

 

Gray Sphere:

       This was a good place to check the normals, since we knew how they should come out.  They came out just right, pointing away from the center of the sphere.  I am not sure about the RGB view, though; I am not convinced that it gives a very good representation of the surface normals.  On top of that, I think we should display the x, y, z coordinates of the normal at the point the user hovers over with the mouse, perhaps in the status bar.  A small sketch of both ideas follows.
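       For reference, here is a sketch of the usual normal-to-RGB convention (an assumption on my part; the skeleton's exact mapping may differ): each component of the unit normal is mapped from [-1, 1] to [0, 255], so a normal pointing straight at the camera shows up as roughly (128, 128, 255).  The hover readout is then just a lookup at the cursor's pixel; normals_to_rgb and normal_at are hypothetical helper names.

    import numpy as np

    def normals_to_rgb(normals):
        # normals: H x W x 3 array of unit normals -> H x W x 3 uint8 image.
        return np.clip((normals + 1.0) * 0.5 * 255.0, 0, 255).astype(np.uint8)

    def normal_at(normals, x, y):
        # The hover idea: report the normal under the cursor, e.g. for a status bar.
        nx, ny, nz = normals[y, x]
        return "n = (%+.3f, %+.3f, %+.3f)" % (nx, ny, nz)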

 

 

The Owl:

       The EYE!  What's up with the eye?  I didn't look into this very much, as I was busy with the other bells and whistles.  The provided sample had the same problem, with the center of the eye bulging out (see the depth view without albedo).  It wasn't as prominent in my app, but nevertheless this is something that should be looked into.

 

 

Extra Credit:

1.  Bilinear Interpolation for Finding the Center of the Highlight:

           When finding the center of the highlight, we are originally limited to the pixel width as the grain size (the delta-x/delta-y).  I implemented a bilinear interpolation technique that inserts an adjustable number of interpolated samples between the known points in 2D; the diagram on the right shows how the interpolation was done.  A better approach would be to average the interpolated data between the horizontal edges and the vertical edges, instead of using only the vertical ones as I did, but considering the overall complexity of the image and the lack of dramatic variation in pixel values between neighbors, this implementation sufficed.  A from-scratch sketch of the full (both-direction) bilinear version follows.
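           The sketch below is not my project code, just a minimal illustration of bilinear upsampling by an adjustable factor; it blends both horizontal and vertical neighbors, i.e. the "better approach" mentioned above.

    import numpy as np

    def bilinear_upsample(img, factor):
        # Insert `factor` interpolated samples per original pixel in each direction,
        # blending the four surrounding pixels (horizontal and vertical neighbors).
        h, w = img.shape
        ys = np.linspace(0, h - 1, h * factor)
        xs = np.linspace(0, w - 1, w * factor)
        y0 = np.floor(ys).astype(int); x0 = np.floor(xs).astype(int)
        y1 = np.minimum(y0 + 1, h - 1); x1 = np.minimum(x0 + 1, w - 1)
        fy = (ys - y0)[:, None]; fx = (xs - x0)[None, :]
        top = img[np.ix_(y0, x0)] * (1 - fx) + img[np.ix_(y0, x1)] * fx
        bot = img[np.ix_(y1, x0)] * (1 - fx) + img[np.ix_(y1, x1)] * fx
        return top * (1 - fy) + bot * fy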

         One issue with this is that it dramatically slows down the processing of the image, since there is now a huge image to go through: say, with 10x10 interpolated pixels per original pixel, the image size grows 100-fold.  To handle this, I minimized the size of the image by only considering the highlighted area(s) and leaving out the rest, which is added back later when calculating the overall X, Y of the highlight.  The method used for calculating the center of the highlight was convolution with a 5x5 matrix.  A rough sketch of this crop-then-convolve idea follows.
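         The sketch below is only an illustration of that idea, under made-up parameters (the threshold, the margin, and a plain 5x5 box filter standing in for my actual 5x5 kernel); scipy's order-1 zoom plays the role of the bilinear upsampling, and the hand-rolled sketch above could be substituted for it.  highlight_center is a hypothetical helper, returning the (row, column) of the highlight center in original-image coordinates.

    import numpy as np
    from scipy.ndimage import uniform_filter, zoom

    def highlight_center(img, factor=10, thresh=0.9, margin=2):
        # 1. Bounding box of the bright region, so only a small crop gets upsampled.
        ys, xs = np.nonzero(img >= thresh * img.max())
        y0, y1 = max(ys.min() - margin, 0), min(ys.max() + margin, img.shape[0] - 1)
        x0, x1 = max(xs.min() - margin, 0), min(xs.max() + margin, img.shape[1] - 1)
        crop = img[y0:y1 + 1, x0:x1 + 1].astype(np.float64)

        # 2. Upsample only the crop (bilinear, order=1), then smooth with a 5x5 box filter.
        big = uniform_filter(zoom(crop, factor, order=1), size=5)

        # 3. Peak of the smoothed crop, converted back to original-image coordinates.
        py, px = np.unravel_index(np.argmax(big), big.shape)
        return y0 + py / factor, x0 + px / factor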

 

2.  Experiment with Using Only Horizontal (Right) Neighbors Instead of Both Right and Bottom Neighbors for the Depth Calculation:

        This doesn't give results as good as the ones obtained by using both right and bottom neighbors, but the final output is interesting.  You can see horizontal lines in the final depth image below.  These are offsets from one row to another, caused by the fact that each row has no constraint tying it to the row above or below, so each row behaves like an individual object.  A sketch of the two constraint variants follows.
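        For context, here is my own least-squares rewrite of the depth-from-normals setup this experiment changes; it is not the project skeleton, and the sign conventions are assumptions.  Each pixel normally contributes a right-neighbor constraint nz*(z[y][x+1] - z[y][x]) = -nx and a bottom-neighbor constraint nz*(z[y+1][x] - z[y][x]) = -ny.  Passing use_vertical=False keeps only the first kind, so the rows decouple and can drift apart, which is exactly the horizontal banding described above.

    import numpy as np
    from scipy.sparse import coo_matrix
    from scipy.sparse.linalg import lsqr

    def solve_depth(normals, use_vertical=True):
        h, w, _ = normals.shape
        idx = lambda y, x: y * w + x           # flatten (y, x) into a column index
        rows, cols, vals, rhs = [], [], [], []
        eq = 0
        for y in range(h):
            for x in range(w):
                nx, ny, nz = normals[y, x]
                if x + 1 < w:                  # right-neighbor constraint
                    rows += [eq, eq]; cols += [idx(y, x + 1), idx(y, x)]
                    vals += [nz, -nz]; rhs.append(-nx); eq += 1
                if use_vertical and y + 1 < h: # bottom-neighbor constraint
                    rows += [eq, eq]; cols += [idx(y + 1, x), idx(y, x)]
                    vals += [nz, -nz]; rhs.append(-ny); eq += 1
        A = coo_matrix((vals, (rows, cols)), shape=(eq, h * w))
        z = lsqr(A.tocsr(), np.array(rhs))[0]  # least-squares depth, up to a global offset
        return z.reshape(h, w)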