Calibration
Many 3D photography techniques require camera calibration. For example, Space Carving needs calibrated images as input. Some active 3D photography techniques, such as Hierarchical Stripe, also require projector calibration.
A camera and a projector can be considered inverse devices: a camera records a bundle of incoming rays as an image, while a projector takes an image as input and emits a bundle of rays. The two therefore share the same mathematical model.
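That shared model is the pinhole projection: a 3x3 intrinsic matrix composed with a rigid motion. A minimal numpy sketch (the intrinsic values below are illustrative, not calibrated):

```python
import numpy as np

def project(K, R, t, X):
    """Pinhole projection of a 3D point X into pixel coordinates.
    The same model describes a projector, run in reverse: there the
    pixel is known and the ray through it is what gets emitted."""
    x_cam = R @ X + t          # world -> camera (or projector) frame
    x_norm = x_cam / x_cam[2]  # perspective division
    u = K @ x_norm             # intrinsics: focal lengths, principal point
    return u[:2]

# Illustrative intrinsics: 800 px focal length, principal point (320, 240).
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
R, t = np.eye(3), np.zeros(3)

# A point straight ahead on the optical axis lands on the principal point.
print(project(K, R, t, np.array([0.0, 0.0, 2.0])))  # -> [320. 240.]
```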
There is a fair amount of prior work on camera calibration. Here we recommend Jean-Yves' Calibration Toolbox, which integrates corner detection, intrinsic calibration, and extrinsic calibration, models radial distortion up to 6th order, and provides other convenient functions. The toolbox has been implemented in both C/C++ and Matlab.
Some comments
Due to legal issues, Jean-Yves cannot share his projector calibration code (written at Intel) with us, much as he would like to. As a result, you may want to modify his camera calibration toolbox to handle projector calibration. Here are some things you may find useful when reading his code:
For those who are not familiar with Matlab, here is a jump start.
Corner detection is the first step of calibration. Jean-Yves uses the Harris corner detector. The original paper (Harris and Stephens, "A Combined Corner and Edge Detector", Alvey Vision Conference 1988) can be hard to locate; the feature detection section of this master's thesis gives some details about it.
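For intuition, the Harris response can be sketched in a few lines of numpy. This is a minimal illustration of the idea, not the toolbox's implementation (which also does subpixel refinement on the checkerboard corners):

```python
import numpy as np

def harris_response(img, k=0.04):
    """Harris corner response R = det(M) - k * trace(M)^2, where M is the
    gradient structure tensor smoothed over a 3x3 box window. A minimal
    sketch for intuition only."""
    gy, gx = np.gradient(img.astype(float))

    def box3(a):  # 3x3 box filter with edge padding
        p = np.pad(a, 1, mode="edge")
        h, w = a.shape
        return sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3)) / 9.0

    sxx, syy, sxy = box3(gx * gx), box3(gy * gy), box3(gx * gy)
    det = sxx * syy - sxy * sxy
    trace = sxx + syy
    return det - k * trace * trace

# A white square on black: the response peaks at its corners,
# dips negative along its edges, and is zero in flat regions.
img = np.zeros((20, 20))
img[5:15, 5:15] = 1.0
resp = harris_response(img)
```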
It is important to model the projector optics the same way the camera optics is modeled (fourth-order radial distortion with tangential distortion). The 6th-order distortion coefficients are unnecessary unless a fisheye lens is used, which is not recommended for scanning.
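The distortion is applied to normalized image coordinates (x/z, y/z) before the intrinsics. A sketch of the fourth-order radial plus tangential model, with illustrative (uncalibrated) coefficients:

```python
def distort(x, y, k1, k2, p1, p2):
    """Fourth-order radial + tangential distortion applied to normalized
    image coordinates (x/z, y/z). The same model serves for the camera
    and, with its own coefficients, for the projector."""
    r2 = x * x + y * y
    radial = 1.0 + k1 * r2 + k2 * r2 * r2
    xd = x * radial + 2.0 * p1 * x * y + p2 * (r2 + 2.0 * x * x)
    yd = y * radial + p1 * (r2 + 2.0 * y * y) + 2.0 * p2 * x * y
    return xd, yd

# Illustrative coefficients: mild radial distortion, no tangential term.
xd, yd = distort(0.5, 0.0, 0.1, 0.0, 0.0, 0.0)
print(xd, yd)  # -> 0.5125 0.0
```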
Typically, use about 30 plane positions in space to calibrate the camera and projector accurately. For each plane position, onto which a light gray checkerboard is pasted, two images are acquired simultaneously: a camera image of the printed checkerboard and an image with a checkerboard projected by the projector. These 2N images are then used to calibrate the camera, then the projector, and then both at the same time for refinement. It is important that the planes cover the whole volume where the object will sit, in as many orientations as possible.
For triangulation, taking the distortion parameters into account is not straightforward, because the triangulation equations are no longer linear. However, a short iteration can solve the non-linear equations: first assume no distortion, then refine the result by inserting the distortion terms.
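That iteration can be sketched as a fixed-point loop on normalized image coordinates (the coefficients below are illustrative, not calibrated values):

```python
def distort(x, y, k1, k2, p1, p2):
    """Forward model: fourth-order radial + tangential distortion."""
    r2 = x * x + y * y
    radial = 1.0 + k1 * r2 + k2 * r2 * r2
    xd = x * radial + 2.0 * p1 * x * y + p2 * (r2 + 2.0 * x * x)
    yd = y * radial + p1 * (r2 + 2.0 * y * y) + 2.0 * p2 * x * y
    return xd, yd

def undistort(xd, yd, k1, k2, p1, p2, iters=10):
    """Invert the distortion by fixed-point iteration: start by assuming no
    distortion (x = xd), then repeatedly re-insert the distortion terms
    evaluated at the current estimate. Once the normalized coordinates are
    undistorted, the triangulation equations are linear again."""
    x, y = xd, yd
    for _ in range(iters):
        r2 = x * x + y * y
        radial = 1.0 + k1 * r2 + k2 * r2 * r2
        dx = 2.0 * p1 * x * y + p2 * (r2 + 2.0 * x * x)
        dy = p1 * (r2 + 2.0 * y * y) + 2.0 * p2 * x * y
        x = (xd - dx) / radial
        y = (yd - dy) / radial
    return x, y

# Round trip with illustrative coefficients:
coeffs = (-0.2, 0.05, 0.001, -0.001)
xd, yd = distort(0.3, -0.2, *coeffs)
x, y = undistort(xd, yd, *coeffs)  # x, y recover (0.3, -0.2)
```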
It is really important to eliminate any weird distortion of 3D space.
Typically around 2 hours (acquisition plus computation) should be spent calibrating the scanner. This is absolutely not a waste of time: getting totally undistorted scans saves a lot of work in the subsequent stages. This process is highly recommended.
more to come...