Ray Tracer Road Map
Overview
Recall how a raytracer functions, as described in class. The raytracer iterates through every pixel in the image and, using some illumination model, determines what color intensity is assigned to that pixel. The skeleton code goes through several functions to do this, but the real interest begins when you get to the RayTracer::traceRay() function. This is also where a handy diagram can start to be of help (shown below). The diagram represents the flow of the ray tracing program; each separately colored box represents a different function that you will have to add some code to in order to finish the project.
The traceRay function takes three arguments: a ray that needs to be traced, a threshold vector, and an integer that controls the depth of recursion for the ray tracer. It returns a color (represented as a three-dimensional vector). The signature of this function should make its purpose clear: a ray is passed in, and the function determines the color that should be projected back to whatever cast the ray. To do this, the first thing traceRay needs to know is whether the ray actually hits anything worth looking at, or whether it wanders off into the lonely darkness of space. This is where the test for ray/object intersection occurs (how intersection works is described below). If no intersection occurs, black is returned (the zero vector). If an intersection does occur, some work has to be done to figure out what the color is.
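In code, that control flow looks roughly like the sketch below (the Scene pointer, intersect(), and shade() calls are assumptions based on the descriptions in this document, not a verbatim copy of the skeleton):

    // Rough sketch of traceRay's control flow; names are taken from the
    // description in this document and may differ slightly from the skeleton.
    vec3f RayTracer::traceRay( Scene *scene, const ray& r,
                               const vec3f& thresh, int depth )
    {
        isect i;
        if( !scene->intersect( r, i ) ) {
            // The ray hit nothing: return black (the zero vector).
            return vec3f( 0.0, 0.0, 0.0 );
        }
        // The ray hit something: ask the surface's material for its color.
        // Recursive reflection/refraction contributions get added to this later.
        const Material& m = i.getMaterial();
        return m.shade( scene, r, i );
    }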
How the color for a ray is determined
Before you try to code up a solution for the color model, be sure that you have a pretty good idea of how to do it on paper. Some of the vector math can start to look pretty overwhelming if you don't know what you are doing. Even when you do know what you are doing, there can be some rather tricky pitfalls. Three different things contribute to the color at the point where a ray hits an object: the direct (Phong) illumination from the lights in the scene, the contribution of the reflected ray, and the contribution of the transmitted (refracted) ray.
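Inside traceRay, those three contributions might be combined roughly as follows (a sketch only: computeReflectedRay and computeRefractedRay are hypothetical helpers you would write, and prod() is assumed to be vecmath's component-wise vector product):

    // Sketch of combining the three contributions, inside traceRay after a
    // successful intersection. The helpers named here are hypothetical.
    vec3f color = i.getMaterial().shade( scene, r, i );   // direct (Phong) illumination
    if( depth > 0 ) {
        // contribution reflected off the surface, scaled by kr
        color += prod( i.getMaterial().kr( i ),
                       traceRay( scene, computeReflectedRay( r, i ), thresh, depth - 1 ) );
        // contribution transmitted through the surface, scaled by kt
        // (total internal reflection handling omitted in this sketch)
        color += prod( i.getMaterial().kt( i ),
                       traceRay( scene, computeRefractedRay( r, i ), thresh, depth - 1 ) );
    }
    return color;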
How Intersections work
Because ray/object intersections are typically the bottleneck of ray tracing programs, the skeleton code uses a few techniques to try to speed them up. If an object's position is described by a transformation M, then the inverse of M is applied to both the object and the ray. This transforms the object back to the origin, which simplifies intersection testing.
For each object, the intersection testing occurs in the object's intersectLocal function. By the time control has been given to this function, the object and ray have already been transformed back to the origin, so you don't have to worry about doing that. You also don't have to worry about transforming the object and ray back to their original positions; this is done after the function exits. You only have to worry about the intersection of the input ray and the basic, untransformed object (i.e., untransformed spheres are centered at the origin with a radius of one). If an intersection is found, then some information about it needs to be calculated: the normal and t-value for the intersection must be properly set so that they can later be used to calculate lighting information. All of this information should be stored in the "i" argument that is passed by reference into intersectLocal.
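For example, a unit-sphere intersectLocal might look roughly like this (the method signature, the ray accessors, and RAY_EPSILON are assumptions about the skeleton; adjust to the real interface):

    // Sketch of intersecting a ray with the unit sphere (centered at origin, radius 1).
    // getPosition()/getDirection() and RAY_EPSILON are assumed; check the skeleton.
    bool Sphere::intersectLocal( const ray& r, isect& i ) const
    {
        vec3f p = r.getPosition();
        vec3f d = r.getDirection();

        // Solve |p + t*d|^2 = 1, a quadratic in t.
        double a = d * d;                   // '*' is the dot product (see vecmath/vec.h)
        double b = 2.0 * ( p * d );
        double c = p * p - 1.0;
        double disc = b * b - 4.0 * a * c;
        if( disc < 0.0 )
            return false;                   // the ray misses the sphere entirely

        double t = ( -b - sqrt( disc ) ) / ( 2.0 * a );   // nearer root first
        if( t < RAY_EPSILON ) {
            t = ( -b + sqrt( disc ) ) / ( 2.0 * a );      // we may be inside the sphere
            if( t < RAY_EPSILON )
                return false;               // both hits are behind the ray origin
        }

        // Record the hit in "i": on a unit sphere the outward normal at a
        // surface point q is just q itself.
        vec3f q = p + t * d;
        i.obj = this;
        i.t = t;
        i.N = q;
        return true;
    }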
Tour of files
To begin, there are quite a few files associated with this project. The files are grouped into a directory structure that should make it easier to navigate through the source code:
trace_code – project files, Makefile
src – top-level raytrace code. Reflection, refraction, and most other optional effects go here.
fileio – file input/output code. You shouldn’t ever need to modify this.
parser – parsing code; you will want to modify this if you change the .ray format at all.
scene – classes that represent the “scene.” Shading, lighting, and texture mapping code goes here.
SceneObjects – geometry classes. Triangle and sphere intersection code goes here.
ui – the user interface classes. Add custom UI controls here.
vecmath – vector and matrix math support code. You don’t need to modify this.
vecmath/vec.h
This is a good file to become very familiar with. It contains two important classes, vec3f and vec4f, which are 3-component and 4-component vectors, respectively. Almost all the math you do will be in terms of vectors. The interface is fairly intuitive, letting you do most algebra operations on vectors using overloaded operators: vector*othervector and vector^othervector for dot and cross products, respectively. Inline documentation for the vector operators can be found near the top of vecmath/vec.h.
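For example (the three-argument constructor and the length() call are assumptions; check vec.h for the exact interface):

    // A few illustrative vec3f operations.
    vec3f a( 1.0, 0.0, 0.0 );
    vec3f b( 0.0, 1.0, 0.0 );

    double d   = a * b;         // dot product (0.0 for these two vectors)
    vec3f  c   = a ^ b;         // cross product (0, 0, 1)
    vec3f  s   = 2.0 * a + b;   // scalar multiplication and addition
    double len = s.length();    // vector length (length() is assumed; check vec.h)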
scene/ray.h
A useful one to know as well. A ray is basically a position and a direction, someplace in 3D space. Also defined in this file is an isect, which contains information about the point where a ray intersected an object. It contains, among other things, a pointer to the object, the surface normal at the intersection, and a “t” value to use in calculating the intersection point.
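For example, given a ray r and a filled-in isect i, the 3D intersection point can be computed like this (getPosition()/getDirection() are assumed accessor names; check ray.h for the exact interface, which may also offer a convenience method):

    // Computing the 3D point where the intersection happened.
    vec3f P = r.getPosition() + i.t * r.getDirection();   // P + t*D
    vec3f N = i.N;                                         // surface normal there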
RayTracer.cpp
This is where raytracing begins. For each pixel in the image, traceRay() gets called. It gets passed a pointer to the scene geometry information, a ray, and two variables you can use to manage recursion. (Remember, adding in recursion is your job.) It calls scene->intersect() to find the first intersection of that ray with an object, and gives you an isect to work with.
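For example, the reflection part of that recursion might be sketched as follows (the ray accessors and the ray constructor are assumptions):

    // Sketch: spawning a reflected ray and recursing with one less level of depth.
    vec3f d = r.getDirection();
    vec3f n = i.N;
    vec3f hitPoint = r.getPosition() + i.t * d;

    vec3f reflDir = d - 2.0 * ( d * n ) * n;     // mirror d about the surface normal
    ray reflRay( hitPoint, reflDir );            // new ray starting at the hit point

    vec3f reflColor = traceRay( scene, reflRay, thresh, depth - 1 );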
scene/material.cpp
When an intersection happens, you need to figure out what color the surface is at that point. For that, you need a handy shading model, and someplace in the program that knows how to do it. That’s what goes in this file. This is the place where color gets calculated from material properties. Right now it only does one thing: return a diffuse color.
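A fuller shading function might eventually look something like the sketch below; the Material::shade name, the light iteration, the Light accessors, and prod() (a component-wise vector product) are all assumptions, so check the scene and light headers for the real interface:

    // Sketch of a Phong-style shading function; names used here are assumptions.
    vec3f Material::shade( Scene *scene, const ray& r, const isect& i ) const
    {
        vec3f P = r.getPosition() + i.t * r.getDirection();   // point being shaded
        vec3f N = i.N;
        vec3f V = -r.getDirection();                          // back toward the viewer

        vec3f color = ke( i ) + prod( ka( i ), scene->ambient() );   // emissive + ambient

        for( const auto& pLight : scene->getAllLights() ) {          // assumed iteration
            vec3f L = pLight->getDirection( P );                     // toward the light
            vec3f I = pLight->getColor( P );                         // light intensity

            double diff = std::max( 0.0, N * L );                    // Lambertian term
            vec3f  R    = 2.0 * ( N * L ) * N - L;                   // mirror of L about N
            double spec = pow( std::max( 0.0, R * V ), shininess( i ) );

            color += prod( I, diff * kd( i ) + spec * ks( i ) );
        }
        return color;
    }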
Another thing that you will find in the material.{cpp,h} files is texture mapping support; we have included a skeleton to make adding this easier. You need only implement the getMappedValue() function in the TextureMap class to get this going properly (along with calculating uv-coordinates at intersect time). We have provided you with a lower-level interface to the bitmap which you can use to implement this.
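As a sketch of where to start, a nearest-neighbor getMappedValue() could look like this (getPixelAt(), width, and height are hypothetical names for the lower-level bitmap interface mentioned above, and indexing the vec2f with operator[] is also an assumption):

    // Sketch of a nearest-neighbor texture lookup; bilinear interpolation
    // would blend the four surrounding texels instead.
    vec3f TextureMap::getMappedValue( const vec2f& coord ) const
    {
        // Clamp the parametric coordinates to [0, 1] just in case.
        double u = std::min( 1.0, std::max( 0.0, coord[0] ) );
        double v = std::min( 1.0, std::max( 0.0, coord[1] ) );

        int x = (int)( u * ( width  - 1 ) );
        int y = (int)( v * ( height - 1 ) );
        return getPixelAt( x, y );    // hypothetical low-level bitmap accessor
    }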
scene/light.cpp
As part of shading, you generally need to look at light sources.
This is the code that knows how to handle them.
This is a good place to deal with attenuation.
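For example, a point light's distance attenuation is often computed as min(1, 1/(a + b*d + c*d^2)), where d is the distance from the shaded point to the light. A sketch, with assumed method and member names:

    // Sketch of distance attenuation for a point light; the method name and
    // the coefficient/position members are assumptions about the skeleton.
    double PointLight::distanceAttenuation( const vec3f& P ) const
    {
        double d = ( position - P ).length();   // distance from the point to the light
        double atten = 1.0 / ( constantTerm + linearTerm * d + quadraticTerm * d * d );
        return std::min( 1.0, atten );          // never brighten the surface
    }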
parser/Parser.cpp; parser/Token.cpp
Okay, after looking this all over you decide life is too simple, and you’re ready to add extra features, like spotlights for instance. How do you work them into the scene graph? That’s where these files come in. As a .ray file is opened, it is converted into a stream of tokens (Token.cpp, Tokenizer.cpp), and then each of the tokens is processed in turn into the scene graph (Parser.cpp). Additions to the file format would start in the processScene() function in Parser.cpp.
SceneObjects/*.cpp
This is where most of the intersection code is written.
Look in here to get an idea of how intersections work and to implement triangle and sphere intersections.
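For reference, a ray/triangle test can be sketched as a plane intersection followed by an inside-the-edges check (the class name, the vertex members a/b/c, and RAY_EPSILON are assumptions):

    // Sketch of a ray/triangle intersectLocal for triangle vertices a, b, c.
    bool TrimeshFace::intersectLocal( const ray& r, isect& i ) const
    {
        vec3f p = r.getPosition();
        vec3f d = r.getDirection();

        vec3f n = ( b - a ) ^ ( c - a );         // plane normal via the cross product
        double denom = d * n;
        if( fabs( denom ) < RAY_EPSILON )
            return false;                        // ray is parallel to the triangle's plane

        double t = ( ( a - p ) * n ) / denom;    // where the ray meets the plane
        if( t < RAY_EPSILON )
            return false;

        // Same-side edge tests: the hit point must lie inside all three edges.
        vec3f q = p + t * d;
        if( ( ( ( b - a ) ^ ( q - a ) ) * n ) < 0.0 ) return false;
        if( ( ( ( c - b ) ^ ( q - b ) ) * n ) < 0.0 ) return false;
        if( ( ( ( a - c ) ^ ( q - c ) ) * n ) < 0.0 ) return false;

        i.obj = this;
        i.t = t;
        i.N = n;        // remember to normalize this in your real code
        return true;
    }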
ui/TraceUI.cpp; ui/GraphicalUI.cpp; ui/CommandLineUI.cpp
This is where the user interface code resides. Look at how the example sliders are implemented; this is where you will put any custom controls you create. You can access these controls by adding an “extern TraceUI* traceUI” to the top of each file you need UI access in (the global ‘traceUI’ object resides in ‘main.cpp’). Then, any code inside that file can make calls to “traceUI->someFunction()”.
If you want to add new functionality to the UI, you should add a class variable to the base TraceUI class and then add the FLTK control to the GraphicalUI class. CommandLineUI is for running the raytracer from the command line (often useful for testing), and you may wish to add extra command line options for your new features as well.
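For example (getDepth() is only an illustration of the pattern; use whatever accessors your TraceUI actually provides):

    // Reading a UI setting from anywhere in the raytracer.
    extern TraceUI* traceUI;      // the global UI object lives in main.cpp

    void someRaytracerFunction()
    {
        int maxDepth = traceUI->getDepth();   // e.g. the recursion-depth slider's value
        // ... use maxDepth when deciding whether to keep recursing ...
    }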
Important classes
These members are part of the isect class (see scene/ray.h):

const SceneObject *obj;
    A pointer to the object intersected, in case you need to access the object.

double t;
    The parameter value at which the ray intersected the object. In other words, if you write the ray formula as P + t*D, where P is the position of the ray and D is the direction, this is the t in that formula. You can use it to find the point in 3D space where the intersection took place.

vec3f N;
    The surface normal where the intersection happened.

const Material &getMaterial() const;
    This method gets the material properties of the surface at the intersection. It’s just a time-saver, using the intersection’s own material pointer if it’s defined, or getting the object’s material if not.

vec2f uvParameters;
    This should store the surface parametric coordinates of the intersection, which will be used in the texture mapping stage to find the location in the 2-dimensional map. They should range from 0 to 1, and really only need to be set if you wish to do texture mapping.
These functions belong to the Material class (see scene/material.h):

vec3f ke();
    Emissive property.
vec3f ka();
    Ambient property.
vec3f ks();
    Specular property.
vec3f kr();
    Reflective property.
vec3f kd();
    Diffuse property.
vec3f kt();
    Transmissive property.
double shininess();
    The shininess exponent used when calculating specular highlights.
double index();
    Index of refraction, for use in forming transmitted rays.
Note that for each of these functions you must pass in an isect& since the functions may need the isect to figure out parametric coordinates of the intersection for 2D texture mapping, space coordinates for use with solid textures, etc.
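For example, inside your shading code you might write:

    // Looking up material properties at an intersection; each getter takes the isect.
    const Material& m = i.getMaterial();
    vec3f diffuse  = m.kd( i );           // diffuse color at this intersection
    vec3f specular = m.ks( i );
    double ns      = m.shininess( i );    // specular exponent
    double eta     = m.index( i );        // index of refraction for transmitted rays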