Project 2: Trace


Assigned Wednesday, October 11
Due Thursday, October 26 by 11:59pm
Artifact Due Monday, October 30 by 11:59pm
Help Sessions
Project TA Edward Zhang
Artifact Turn-in
Artifact Winners

Overview


Description

Trace is a program that constructs recursively ray-traced images of fairly simple scenes. It is similar in functionality to the POV-Ray raytracer. You may want to browse around the POV-Ray web site for artifact and extra credit inspiration. POV-Ray is free, so if you want a taste of what a really powerful raytracer can do, go check it out!

Getting Started

Visit here for help checking out code.

Inside the skeleton code, there is a "scenes" subdirectory, which contains sample scene files (all the files with the .ray extension). These are text files that describe some geometry and the materials that should be applied to it. Refer to this page for a brief explanation. Load a scene and you'll be able to see the helpful debug view on the right. Press Render in the UI to see the rendering.

Before you begin coding, you should run the sample solution; it is linked to on the right. It has all of the requirements implemented, along with some extra features. Definitely try clicking on the rendered scene and then check out the helpful debug view!

The Trace project is a very large collection of files and object-oriented C++ code. Fortunately, you only need to work directly with a subset of it. However, you will probably want to spend a bit of time getting familiar with the layout and class hierarchy at first, so that when you code, you know what classes and methods are available for your use.

Ray tracing begins, and most of your new functionality belongs, in the RayTracer.cpp file. This is a good file to start studying to see what methods get called and what they do. In addition, the raytracer features a debugging window that allows you to see individual rays bouncing around the scene. This window provides a lot of visual feedback that can be enormously useful when debugging your application. Look at this web page for a more detailed explanation of how to use the debugging window (unfortunately, some parts of that page are still out of date).

Requirements


Implement the features described below. The skeleton code has comments marked with // REQUIREMENT denoting the locations at which you probably want to add some code.

After running the sample solution, you should build the skeleton code and see how it compares. You will probably notice that there is a significant difference in the quality of images rendered with the two versions. This suggests what parts of the raytracer have been written and what parts are left undone.

If you compare the outputs of the skeleton and solution, you will see that most of the basic geometry-handling code is done. The skeleton code is able to cast rays into an image and draw color on the screen, resulting in some flat-shaded polygonal shapes; it is doing ray-casting and nothing more. Furthermore, the triangle and sphere primitives will not appear: while all the code to cast a ray exists, not all of the object intersection code is there. You need to implement sphere and triangle intersections and expand ray-casting into ray-tracing by adding support for reflected and refracted rays. You also must implement the Blinn-Phong specular-reflection model and include support for color-filtered shadows cast through transparent objects.

Specifically, each group must implement recursive ray tracing as described in class. This entails making the extensions to the program listed below. Your ray tracer should recursively trace rays to account for these. Recursion should proceed to a maximum depth as set by the user.

You may assume that objects are not nested inside other objects. If a refracted ray enters a solid object, it will pass completely through the object and back outside before refracting into another object. Improving your refraction code to handle more general cases such as a refractive sphere contained inside another refractive sphere is an extra credit option as described below. In addition, you may assume that the camera itself is not placed inside an object. The initial rays that are sent out through the projection plane will always be moving through air.

Note: When calculating the intersection for triangles, you may have three values for the normal and the material (one for each vertex). You will have to interpolate these values in order to get the proper value at the intersection point. Don't forget to renormalize the interpolated normal! If per-vertex normals are not available, you may assume the normal is just the one used in the intersection calculation, i.e., the normal to the plane that the triangle lies in. dragon.ray, shell.ray, sierpinski.ray, house.ray, and trimesh2.ray all require material interpolation support in order to appear as they are rendered by the sample solution. If the vertices have texture uv coordinates, you will have to interpolate these as well to get the correct texture coordinate.
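
To make the interpolation concrete, here is a minimal sketch of interpolating per-vertex normals with barycentric coordinates. The Vec3 type and helper names are illustrative stand-ins, not the skeleton's actual classes:

    #include <cmath>

    // Minimal stand-in for the skeleton's vector type (illustrative only).
    struct Vec3 { double x, y, z; };

    static Vec3 scale(const Vec3& v, double s) { return {v.x * s, v.y * s, v.z * s}; }
    static Vec3 add(const Vec3& a, const Vec3& b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }

    // Interpolate per-vertex normals n0, n1, n2 with barycentric coordinates
    // (alpha + beta + gamma == 1), then renormalize the result. Materials and
    // uv coordinates are interpolated the same way, minus the normalization.
    Vec3 interpolateNormal(const Vec3& n0, const Vec3& n1, const Vec3& n2,
                           double alpha, double beta, double gamma)
    {
        Vec3 n = add(add(scale(n0, alpha), scale(n1, beta)), scale(n2, gamma));
        double len = std::sqrt(n.x * n.x + n.y * n.y + n.z * n.z);
        return scale(n, 1.0 / len);
    }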

For this project, you are not required to implement any bells or whistles. By this time, though, we hope that you are already in the habit of thinking about extra features when you start a project. Even the simple bells and whistles can make significant changes in your ray-traced scenes.

Ray Tracer Road Map


Recall how a raytracer functions, as described in class. The raytracer iterates through every pixel in the image and, using some illumination model, determines what color intensity is assigned to the pixel. The skeleton code goes through several functions to do this, but the real interest occurs when you get to the RayTracer::traceRay( ) function. This is also where the diagram below starts to be of help: it represents the flow of the ray tracing program, and each separately colored box represents a different function that you will have to add some code to in order to finish the project.

The traceRay function takes three arguments: a ray that needs to be traced, a threshold vector, and an integer that controls the depth of recursion for the ray tracer. It returns a color (represented as a three-dimensional vector). The signature of this function should make its purpose clear: a ray goes in, and the function determines the color that should be projected back to whatever cast the ray. In order to do this, the first thing traceRay wants to know is whether the ray actually hits anything worth looking at, or whether it wanders off into the lonely darkness of space. This is where a test for ray/object intersection occurs (how intersections work is described below). If no intersection occurs, then black is returned (the zero vector). If an intersection occurs, some work has to be done to figure out what the color is.
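
As a mental model, the control flow looks roughly like the sketch below. All names here are illustrative stand-ins; the skeleton's actual signatures differ (for instance, the threshold vector is omitted here for brevity):

    // Control-flow sketch of recursive ray tracing; not the skeleton's API.
    struct Vec3  { double x, y, z; };
    struct Ray   { Vec3 origin, dir; };
    struct ISect { double t; Vec3 normal; /* material info, etc. */ };

    bool sceneIntersect(const Ray& r, ISect& i);                   // assumed helper
    Vec3 shadeDirect(const Ray& r, const ISect& i);                // Blinn-Phong sum
    Vec3 reflectedColor(const Ray& r, const ISect& i, int depth);  // recurses
    Vec3 refractedColor(const Ray& r, const ISect& i, int depth);  // recurses
    Vec3 addv(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }

    Vec3 traceRay(const Ray& r, int depth)
    {
        ISect i;
        if (!sceneIntersect(r, i))
            return {0, 0, 0};              // ray hit nothing: return black

        Vec3 color = shadeDirect(r, i);    // direct component
        if (depth > 0) {                   // recurse up to the user-set depth
            color = addv(color, reflectedColor(r, i, depth - 1));
            color = addv(color, refractedColor(r, i, depth - 1));
        }
        return color;
    }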

Flow Chart

How the color is determined

Before you try to code up a solution for the color model, be sure that you have a pretty good idea of how to do it on paper. Some of the vector math can start to look pretty overwhelming if you don't know what you are doing. Even when you do know what you are doing, there can be some rather tricky pitfalls... Three different things contribute to the color of a certain object:

Direct Component

This component is calculated using the Blinn-Phong shading model. In general, you should iterate over every light in the scene and sum their individual contributions to the color intensity. There are some further complications here, because you have to deal with two different kinds of light sources when performing the computation: point lights and directional lights. A point light has only a position and radiates light outwards in all directions; a directional light has only a direction and no starting position. Point light sources are better for modeling things such as light bulbs, while directional light is better for modeling something like sunshine. Note that you need to implement both the distance attenuation and the shadow attenuation for each light in order for the Blinn-Phong shading model to work correctly.
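
For reference, here is a small self-contained sketch of one light's Blinn-Phong contribution. The vector type and the way attenuation is passed in are assumptions, not the skeleton's API:

    #include <algorithm>
    #include <cmath>

    struct V3 {
        double x, y, z;
        V3 operator+(V3 o) const { return {x + o.x, y + o.y, z + o.z}; }
        V3 operator*(double s) const { return {x * s, y * s, z * s}; }
        V3 operator*(V3 o) const { return {x * o.x, y * o.y, z * o.z}; } // per channel
    };
    double dot(V3 a, V3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
    V3 normalize(V3 v) { return v * (1.0 / std::sqrt(dot(v, v))); }

    // One light's Blinn-Phong contribution at a point with unit normal N,
    // unit direction V toward the viewer, and unit direction L toward the
    // light. 'atten' folds together the light's distance attenuation (a
    // scalar for point lights, none for directional lights) and its shadow
    // attenuation (a color, so transparent occluders can filter the light).
    V3 blinnPhong(V3 N, V3 V, V3 L, V3 lightColor, V3 atten,
                  V3 kd, V3 ks, double shininess)
    {
        V3 H = normalize(L + V);                              // half vector
        double diff = std::max(0.0, dot(N, L));
        double spec = std::pow(std::max(0.0, dot(N, H)), shininess);
        return (kd * diff + ks * spec) * lightColor * atten;
    }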

Reflective Component

In order to calculate this component, you will need to calculate the reflection vector, and then make a recursive call to the traceRay function.

Refractive Component

Like the reflective component, this also needs to make recursive calls to the traceRay function. In addition, you also need to test for total internal reflection and handle that case accordingly.
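
Both components boil down to computing a new ray direction before recursing. A direction-only sketch, assuming unit vectors and a normal that points back toward the incoming ray:

    #include <cmath>

    struct V3 {
        double x, y, z;
        V3 operator+(V3 o) const { return {x + o.x, y + o.y, z + o.z}; }
        V3 operator-(V3 o) const { return {x - o.x, y - o.y, z - o.z}; }
        V3 operator*(double s) const { return {x * s, y * s, z * s}; }
    };
    double dot(V3 a, V3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

    // Mirror reflection of the unit incoming direction d about unit normal n.
    V3 reflectDir(V3 d, V3 n) { return d - n * (2.0 * dot(d, n)); }

    // Snell's-law refraction; eta = n_incident / n_transmitted, with d and n
    // unit length and dot(d, n) < 0. Returns false on total internal
    // reflection, in which case only the reflected ray should be traced.
    bool refractDir(V3 d, V3 n, double eta, V3& t)
    {
        double cosi = -dot(d, n);
        double k = 1.0 - eta * eta * (1.0 - cosi * cosi);
        if (k < 0.0) return false;          // total internal reflection
        t = d * eta + n * (eta * cosi - std::sqrt(k));
        return true;
    }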

How intersections work

Because ray/object intersections are typically the bottleneck of ray tracing programs, the skeleton code uses a few techniques to try to speed it up. If an object's position is described by a transformation M, then M inverse is applied to both the object and the ray. This transforms the object back to the origin, which simplifies intersection testing.

For each object, the intersection testing occurs in the object's intersectLocal function. By the time control reaches this function, the object and ray have already been transformed back to the origin, so you don't have to worry about doing that; nor do you have to worry about transforming them back to their original position, since that is done after the function exits. You only have to worry about the intersection of the input ray and the basic, untransformed object (e.g., untransformed spheres are centered at the origin with a radius of one). We also provide the constant RAY_EPSILON for your use. If an intersection is found, some information about it needs to be calculated: the normal and t-value for the intersection must be properly set so that they can later be used to calculate lighting information. All of this information should be stored in the ISect "i" argument that is passed by reference into intersectLocal. ISect is described in scene/ray.h.
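
As an illustration of the pattern, here is a self-contained sketch of a unit-sphere intersectLocal. The types and the RAY_EPSILON value are stand-ins for the skeleton's own:

    #include <cmath>

    struct V3 {
        double x, y, z;
        V3 operator+(V3 o) const { return {x + o.x, y + o.y, z + o.z}; }
        V3 operator*(double s) const { return {x * s, y * s, z * s}; }
    };
    double dot(V3 a, V3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

    struct Ray   { V3 p, d; };              // local-space origin and direction
    struct ISect { double t; V3 n; };       // hit distance and surface normal

    const double RAY_EPSILON = 1e-5;        // illustrative value

    // Solve |p + t d|^2 = 1 for the smallest t > RAY_EPSILON.
    bool intersectLocalSphere(const Ray& r, ISect& i)
    {
        double a = dot(r.d, r.d);
        double b = 2.0 * dot(r.p, r.d);
        double c = dot(r.p, r.p) - 1.0;
        double disc = b * b - 4.0 * a * c;
        if (disc < 0.0) return false;       // ray misses the sphere

        double s = std::sqrt(disc);
        double t = (-b - s) / (2.0 * a);    // try the nearer root first
        if (t <= RAY_EPSILON) t = (-b + s) / (2.0 * a);
        if (t <= RAY_EPSILON) return false;

        i.t = t;
        i.n = r.p + r.d * t;                // unit sphere: normal == hit point
        return true;
    }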

Creating Your Own Scene


As you get into the project, you'll probably want to use some scenes of your own invention. There is a help page available about the file format. This file also describes the specifications of all the primitives you are required to implement. To create realistic refractive objects, you'll need their Indices of Refraction.

Exporting Models from Modeler

Modeler/Animator is a 3D modelling program that you will be implementing parts of in the next assignment. You can use the reference solution for Animator as a tool to help you set up scenes for this project.

The Modeler UI can export scenes to .ray files (File > Export Rayfile [As]). You can open Trace directly from Modeler through the File menu (either put the Modeler binary in the same folder as Trace, or Modeler will prompt you to find the Trace binary). Trace will automatically update if it detects that the .ray file has changed.

In Modeler, you'll want to set up not only your scene geometry, but also some lights and at least one camera so that Trace knows what viewpoint to render.

For convenience, there is a button in the toolbar for the scene view that creates a Camera object based on the current view. In the same toolbar, the drop-down menu lets you swap between the standard orthographic projections, the free-viewpoint perspective projection, and any of your scene cameras. The rayfile exporter will take the first camera it finds, but to avoid ambiguity, it's best to only have one camera in scenes you want to export.

For soft shadow effects in Trace, you'll also want to specify some area lights. Trace currently supports rectangular area emitters. While Modeler doesn't support area lights directly, it can still export them: create a geometry node of type "Plane", and assign it the built-in "Emissive" material type. The object will automatically be converted to an area light when exporting to .ray.

For tips on how to use Modeler, see Animator Program Usage.

Importing Third Party 3D Models into Your Tracer

The ply2ray tool converts models in the standard .ply format into .ray files. It assumes triangle meshes and does not work for meshes with quadrilaterals, but it should work in most cases. If ply2ray doesn't work or the models you have are not in .ply format, you can try either of the following two workflows.

Using MeshLab
Using Blender

Here are step-by-step instructions for importing models using Blender, a heavier-weight tool for 3D modeling. You can do a lot more with it, so feel free to explore its potential.

Websites with 3D Models

Accelerating Ray-surface Intersection


The goal of this portion of the assignment is to speed up the ray-surface intersection module in your ray tracer. In particular, we want you to improve the running time of the program when ray tracing complex scenes containing large numbers of objects (usually triangles). There are two basic approaches to doing this:

  1. Specialize and optimize the ray-object intersection test to run as fast as possible.
  2. Add data structures that speed the intersection query when there are many objects.

In this project, you will be spending most of your effort on approach 2, i.e. reducing the number of ray-object intersection tests. You should implement a Bounding Volume Hierarchy. Other acceleration structures can be implemented instead for extra credit; please let the TA know ahead of time if you are planning on doing so.

The sample scenes include several simple scenes and three complex test scenes: trimesh1, trimesh2, and trimesh3. You will notice that trimesh1 has per-vertex normals and materials, and trimesh2 has per-vertex materials but not normals. Per-vertex normals and materials imply interpolation of these quantities at the current ray-triangle intersection point (using barycentric coordinates).

Bounding Volume Hierarchies

A Bounding Volume Hierarchy is a simple but effective acceleration data structure. The simplest version to implement uses axis-aligned bounding boxes (AABBs). Building your BVH data structure will involve three different parts:
  1. For each primitive type, you will need to compute a world-space bounding box.
  2. You will then need to build the BVH tree. This can be done top-down or bottom-up. Top-down hierarchy construction is simpler, but may be less effective; at each node, decide what regions of the scene (and how many) to split off based on some heuristic (e.g. maximal reduction in total volume). Better tree structures can be obtained by bottom-up construction: after computing all the bounding boxes in a scene, iteratively merge AABBs using some heuristic (e.g. minimum surface area, minimal volume, tightest fit). Experiment with heuristics, branching factors, and tree balance!
  3. Finally, you will need to traverse your data structure during ray tracing, pruning away branches that cannot result in an intersection (see the sketch after this list).
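
Here is a rough, self-contained sketch of the traversal side (the slab test plus recursive pruning). The node layout and the intersectPrim hook are illustrative choices, not requirements:

    #include <algorithm>
    #include <vector>

    struct AABB { double lo[3], hi[3]; };   // axis-aligned bounding box

    // Slab test: does the ray p + t d cross the box for some t in (tmin, tmax)?
    bool hitAABB(const AABB& b, const double p[3], const double d[3],
                 double tmin, double tmax)
    {
        for (int a = 0; a < 3; ++a) {
            double inv = 1.0 / d[a];
            double t0 = (b.lo[a] - p[a]) * inv;
            double t1 = (b.hi[a] - p[a]) * inv;
            if (inv < 0.0) std::swap(t0, t1);
            tmin = std::max(tmin, t0);
            tmax = std::min(tmax, t1);
            if (tmax < tmin) return false;  // slab intervals don't overlap
        }
        return true;
    }

    struct BVHNode {
        AABB box;
        BVHNode* left = nullptr;            // children (interior nodes)
        BVHNode* right = nullptr;
        std::vector<int> prims;             // primitive indices (leaves)
    };

    // Assumed hook into your tracer; updates tBest on a closer hit.
    bool intersectPrim(int prim, const double p[3], const double d[3],
                       double& tBest);

    // Prune any subtree whose box the ray misses; test primitives at leaves.
    bool traverse(const BVHNode* node, const double p[3], const double d[3],
                  double& tBest)
    {
        if (!node || !hitAABB(node->box, p, d, 1e-5, tBest)) return false;
        if (!node->left && !node->right) {  // leaf node
            bool hit = false;
            for (int prim : node->prims)
                hit |= intersectPrim(prim, p, d, tBest);
            return hit;
        }
        bool h1 = traverse(node->left, p, d, tBest);
        bool h2 = traverse(node->right, p, d, tBest);
        return h1 || h2;
    }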

Acceleration Grading Criteria

The test scenes each contain up to thousands of triangles. A portion of your grade for this assignment will be based on the speed of your ray tracer running on these scenes. The faster you can render a picture, the higher your grade.

For grading on rendering speed, the scenes will be traced at a specified size with one ray traced per pixel, and the rays should be traced with 5 levels of recursion, i.e., each ray should bounce 5 times. If during these bounces a ray strikes a surface with zero specular reflectance and zero refraction, stop there. At each bounce, rays should be traced to all light sources, including shadow testing. Execution will be forced to single threading. The command for testing rendering speed looks like:

                    ray -b -w 400 -r 5 -t 1 in.ray out.bmp
                    
[For fairness, don't include other stop criteria (except for the one mentioned above) for the -b option.]

At the grading session, you will run your code as well as the reference ray tracer on your machine. The score will be based on the ratio of these two runtimes. Be sure to have the reference solution downloaded and working prior to the grading session.

You are welcome to precompute scene-specific (but not viewpoint-specific) acceleration data structures and make other time-memory tradeoffs, but your precomputation time and memory use should be reasonable. Don't try to customize your ray tracer for the test scenes; we will also use other scenes during grading. If you have any questions about what constitutes a fair acceleration technique, ask us. Coding your inner loops in machine language is unfair. Using multiple processors is unfair. Compiling with optimization enabled is fair. In general, don't go overboard tuning aspects of your system that aren't related to tracing rays.

Testing Your Project


Once you implement the requirements, you'll be able to test your implementation with the Compare tool we provide with the sample solution. This tool compares your program's output against the sample solution's. A README is included to help you use the tool.

It is in your best interest to test your ray tracer against this comparison tool. This is what the TAs will be using to grade the correctness and functionality of your ray tracer.

Don't worry if your solution doesn't give exactly the same output (rounding errors, among other things, are a fact of life). However, if there is a noticeable pattern in the errors, then that definitely means something is wrong! This tool is only to get an idea of where to look for problems. Note that your bells and whistles must be disabled for the tool to accurately check against the sample.

Memory Leaks: It is strongly advised that your program not have any memory leaks! For more information, check out this summary. To check whether ray.exe has a leak, render dragon.ray at full depth, with antialiasing, at maximum size, etc.; on Windows, press Ctrl+Shift+Esc and watch ray.exe's memory consumption. If it increases without end, you probably have a leak, and the program will likely crash at some point. Why do we care? It is extremely frustrating for your render to stop at 91% complete on the night of the artifact turn-in!

Turn-in Information


Please follow the general instructions here. More details below:

Artifact Submission

Each team is required to submit one artifact per person. Name the file [your-cse-netid].jpg or [your-cse-netid].png. The scene you trace cannot simply be one of the provided .ray files; it must at least be modified in some way, or be a completely new scene. With each artifact, you may also submit some brief comments -- this can be as simple as two sentences describing the placement of the objects and lights to get the desired effect, or a detailed description of the bells and whistles used to create the scene. The comments will be posted with your artifact on the webpage for voting.

Important: There is much room to create something really cool for your artifact, and we encourage you to spend a little time on it (at least more than 10 minutes!). In particular, your Trace artifact should not be too similar to one of the sample scenes. If your rendering is simply a tweaked version of a sample scene (e.g., you just rotated the camera X degrees in two axes, or changed the color of an object), then you will lose some (easy) points at turn-in.

Bells and Whistles


If you implement any bells or whistles you need to provide examples of these features in effect. You should present your extra credit features at grading time either by rendering scenes that demonstrate the features during the grading session or by showing images you rendered in advance. You might need to pre-render images if they take a while to compute (longer than 30 seconds). These pre-rendered examples, if needed, must be included in your turnin directory on the project due date. The scenes you use for demonstrating features can be different from what you end up submitting as an artifact.

Important: You need to establish to our satisfaction that you've implemented the extension. Create test cases that clearly demonstrate the effect of the code you've added to the ray tracer. Sometimes different extensions can interact, making it hard to tell how each contributed to the final image, so it's also helpful to add controls to selectively enable and disable your extensions. In fact, we require that all extensions be disabled by default, with controls to turn them on one by one.

Both Marschner and Shirley's book and Foley, et al., are reasonable resources for implementing bells and whistles. In addition, Glassner's book on ray tracing is a very comprehensive exposition of a whole bunch of ways ray tracing can be expanded or optimized (and it's really well written). If you're planning on implementing any of these bells and whistles, you are encouraged to read the relevant sections in these books as well.

Here are some examples of effects you can get with ray tracing. None of these were created from past students' ray tracers.

Implement an adaptive termination criterion for tracing rays, based on ray contribution. Control the adaptation threshold with a slider or spinbox.
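
One way to read "ray contribution": carry the accumulated attenuation down the recursion and stop once no channel can visibly change the pixel. A minimal sketch, with the threshold assumed to come from your slider or spinbox:

    #include <algorithm>

    struct V3 { double x, y, z; };

    // True once the ray's accumulated contribution is too dim to matter.
    bool belowThreshold(const V3& contribution, double threshold)
    {
        return std::max({contribution.x, contribution.y, contribution.z})
               < threshold;
    }
    // In traceRay: if (depth <= 0 || belowThreshold(atten, threshold))
    //     stop recursing.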

Modify your Monte Carlo antialiasing to jitter the sub-pixel samples, i.e., randomly sample positions within each sub-pixel region of a pixel (after breaking the pixel into a set of small squares). See Marschner Shirley 13.4.1. This is a form of stratified sampling to reduce noise in Monte Carlo path tracing. The noise introduced by jittering should be evident when casting 1 ray per pixel.
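
A sketch of the jittered sample positions for one pixel, assuming an n x n sub-pixel grid and a uniform rand01() helper:

    #include <cstdlib>

    // Stand-in RNG; any uniform [0,1) generator will do.
    double rand01() { return std::rand() / (RAND_MAX + 1.0); }

    // Fill samples[0..n*n-1] with one jittered offset in [0,1)^2 per cell,
    // instead of the cell centers a regular supersampler would use.
    void jitteredSamples(int n, double (*samples)[2])
    {
        for (int j = 0; j < n; ++j)
            for (int k = 0; k < n; ++k) {
                samples[j * n + k][0] = (k + rand01()) / n;
                samples[j * n + k][1] = (j + rand01()) / n;
            }
    }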

Modify shadow attenuation to use Beer's Law, so that thicker objects cast darker shadows than thinner ones with the same transparency constant. (See Marschner Shirley p. 325.)
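
One common reading of Beer's Law for this whistle: treat the transparency constant as the fraction of light surviving one unit of travel, so the surviving fraction over a distance d is kt^d (equivalently, exponential falloff). A sketch, per color channel:

    #include <cmath>

    struct V3 { double x, y, z; };

    // Attenuation after traveling 'dist' through a material whose
    // per-channel transparency over unit distance is kt: atten = kt^dist,
    // so a thick slab darkens the shadow more than a thin one.
    V3 beerAttenuation(V3 kt, double dist)
    {
        return { std::pow(kt.x, dist), std::pow(kt.y, dist),
                 std::pow(kt.z, dist) };
    }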

Include a Fresnel term so that the amount of light reflected and refracted at a transparent surface depends on the angle of incidence and the index of refraction. (See Marschner Shirley p. 325.)
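
Schlick's approximation is a popular cheap stand-in for the full Fresnel equations; a sketch (for rays leaving a denser medium, remember the total-internal-reflection check from the refraction section):

    #include <cmath>

    // Schlick's approximation: R(theta) = R0 + (1 - R0) * (1 - cos(theta))^5,
    // where n1/n2 are the indices of refraction on the incident/transmitted
    // sides and cosTheta is the cosine of the angle of incidence.
    double schlickFresnel(double n1, double n2, double cosTheta)
    {
        double r0 = (n1 - n2) / (n1 + n2);
        r0 *= r0;
        double x = 1.0 - cosTheta;
        return r0 + (1.0 - r0) * x * x * x * x * x;
    }
    // Weight the reflected contribution by R and the refracted one by 1 - R.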

Implement spotlights. You'll have to extend the parser to handle spot lights, but don't worry: this is low-hanging fruit.

Improve your refraction code to allow rays to refract correctly through objects that are contained inside other objects. You must put together a .ray file to demonstrate this effect.

Find a good way to accelerate shadow attenuation. Do you need to check against every object when casting the shadow ray? This one is hard to demonstrate directly, so be prepared to explain in detail how you pulled it off.

Deal with overlapping objects intelligently. While the skeleton code handles materials with arbitrary indices of refraction, it assumes that objects don't intersect one another. It breaks down when objects intersect or are wholly contained inside other objects. Add support to the refraction code for detecting this and handling it in a more realistic fashion. Note, however, that in the real world, objects can't coexist in the same place at the same time. You will have to make assumptions as to how to choose the index of refraction in the overlapping space. Make those assumptions clear when demonstrating the results.

x1

Add texture mapping support to the remaining built-in primitives (sphere, box, cylinder, cone). The square object already has coordinate mapping implemented for your reference. The most basic kind of texture mapping is to apply the map to the diffuse color of a surface. But many other parameters can be mapped. Reflected color can be mapped to create the sense of a surrounding environment. Transparency can be mapped to create holes in objects. Additional (variable) extra credit will be given for such additional mappings. The basis for this bell is built into the skeleton, and the parser already handles the types of mapping mentioned above. Additional credit will be awarded for quality implementation of texture mapping on general trimeshes.

Implement antialiasing by adaptive supersampling. For full credit, you must show some sort of visualization of the sampling pattern that results. For example, you could create another image where each pixel is given an intensity proportional to the number of rays used to calculate the color of the corresponding pixel in the ray traced image. Implementing this bell/whistle is a big win -- nice antialiasing at low cost.

x2

Add a menu option that lets you specify a background image cube to replace the environment's ambient color during rendering. That is, any ray that goes off into infinity behind the scene should return a color from the loaded image on the appropriate face of the cube, instead of just black. The background should appear as the backplane of the rendered image, with suitable reflections and refractions of it. This is also called environment mapping. Click here for some examples and implementation details, and here and here for some free cube maps (also called skyboxes).

x2

Implement support for multithreaded anti-aliasing. This means splitting up the anti-aliasing work among multiple threads. You'll definitely need to make use of the ThreadPool.cpp class. This is probably not too difficult if you have taken Operating Systems; if not, check out some info on Qthreads and threading here. We need to see some code and/or a convincing performance demonstration for credit.

x2

Implement bump mapping. Check this out!

x2

Implement solid textures or some other form of procedural texture mapping. Solid textures are a way to easily generate a semi-random texture like wood grain or marble. Click here for a brief look at making realistic looking marble using Ken Perlin's noise function.

x2

Add some new types of geometry to the ray tracer. Consider implementing tori or general quadrics. Many other objects are possible here.

x2

Extend the ray-tracer to create Single Image Random Dot Stereograms (SIRDS). Click here to read a paper on how to make them. Or, create 3D images like this one, for viewing with red-blue glasses.

x2*

Implement Monte Carlo path tracing to produce one or more of the following effects: depth of field, soft motion blur, or glossy reflection. For additional credit, you could implement stratified sampling (part of "distribution ray tracing") to reduce noise in the renderings. (See lecture slides, Marschner Shirley 13.4.)

*You will earn 2 bells for the first effect, and 2 whistles for each additional effect.

x2

Implement a more realistic shading model. Credit will vary depending on the sophistication of the model. A simple model factors in the Fresnel term to compute the amount of light reflected and transmitted at a perfect dielectric (e.g., glass). A more complex model incorporates the notion of a microfacet distribution to broaden the specular highlight. Accounting for the color dependence in the Fresnel term permits a more metallic appearance. Even better, include anisotropic reflections for a plane with parallel grains or a sphere with grains that follow the lines of latitude or longitude. Sources: Shirley, Chapter 24, Watt, Chapter 7, Foley et al, Section 16.7; Glassner, Chapter 4, Section 4; Ward's SIGGRAPH '92 paper; Schlick's Eurographics Rendering Workshop '93 paper.

This all sounds kind of complex, and the physics behind it is. But the coding doesn't have to be. It can be worthwhile to look up one of these alternate models, since they do a much better job at surface shading. Be sure to demo the results in a way that makes the value added clear.

Theoretically, you could also invent new shading models. For instance, you could implement a less realistic model! Could you implement a shading model that produces something that looks like cel animation? Variable extra credit will be given for these "alternate" shading models. Note that you must still implement the Blinn-Phong model.

x3

Add some higher-level geometry to the ray tracer, such as extrusions, metaballs, swept surfaces, or blend surfaces. You may have implemented one or more of these as a polygonal object in the modeler project. For the Raytracer, be sure you are actually raytracing the surface as a mathematical construct, not just creating a polygonal representation of the object and tracing that. Yes, this requires lots of complicated math, but the final results are definitely worth it (see Transparent Metaballs). Here is a really good tutorial on raytracing metaballs. For an additional bell, add texture mapping to your higher-level geometry. The texture mapping must look good in order to get credit for it!

x3

Implement 3D fractals and extend the .ray file format to provide support for these objects. Note that you are not allowed to "fake" this by just drawing a plain old 2D fractal image, such as the usual Mandelbrot Set. Similarly, you are not allowed to cheat by making a .ray file that arranges objects in a fractal pattern, like the sier.ray test file. You must raytrace an actual 3D fractal, and your extension to the .ray file format must allow you to control the resulting object in some interesting way, such as choosing different fractal algorithms or modifying the base pattern used to produce the fractal.

Here are two really good examples of raytraced fractals that were produced by students during a previous quarter: Example 1, Example 2. And here are a couple more interesting fractal objects: Example 3, Example 4.

x4

Implement 4D quaternion fractals and extend the .ray file format to provide support for these objects. These types of fractals are generated by using a generalization of complex numbers called quaternions. What makes the fractal really interesting is that it is actually a 4D object. This is a problem because we can only perceive three spatial dimensions, not four. In order to render a 3D image on the computer screen, one must "slice" the 4D object with a three dimensional hyperplane. Then the points plotted on the screen are all the points that are in the intersection of the hyperplane and the fractal. Your extension to the .ray file format must allow you to control the resulting object in some interesting way, such as choosing different generating equations, changing the slicing plane, or modifying the surface attributes of the fractal.

Here are a few examples, which were created using the POV-Ray raytracer (yes, POV-Ray has quaternion fractals built in!): Example 1, Example 2, Example 3, Example 4. And, this is an excellent example from a previous quarter.

To get started, visit this web page to brush up on your quaternion math. Then go to this site to learn about the theory behind these fractals. Then, you can take a look at this page for a discussion of how a raytracer can perform intersection calculations.

x4

Implement CSG, constructive solid geometry. This extension allows you to create very interesting models. See page 108 of Glassner for some implementation suggestions. An excellent example of CSG was built by a grad student here in the grad graphics course.

x4

Implement caustics by tracing rays from the light source and depositing energy in texture maps (a.k.a., illumination maps, in this case). Caustics are variations in light intensity caused by refractive focusing--everything from simple magnifying-glass points to the shifting patterns on the bottom of a swimming pool. Here is a paper discussing some methods. 2 bells each for refractive and reflective caustics. (Note: caustics can be modeled without illumination maps by doing "photon mapping", a monster bell described below.)

Here is a really good example of caustics that were produced by two students during a previous quarter: Example

There are innumerable ways to extend a ray tracer. Think about all the visual phenomena in the real world. The look and shape of cloth. The texture of hair. The look of frost on a window. Dappled sunlight seen through the leaves of a tree. Fire. Rain. The look of things underwater. Prisms. Do you have an idea of how to simulate this phenomenon? Better yet, how can you fake it but get something that looks just as good? You are encouraged to dream up other features you'd like to add to the base ray tracer. Obviously, any such extensions will receive variable extra credit depending on merit (that is, coolness!). Feel free to discuss ideas with the course staff before (and while) proceeding!

Monster Bells


Disclaimer: please consult the course staff before spending any serious time on these. These are all quite difficult (I would say monstrous) and may qualify as impossible to finish in the given time. But they're cool.

Sub-Surface Scattering

The trace program assigns colors to pixels by simulating a ray of light that travels, hits a surface, and then leaves the surface at the same position. This is good when it comes to modeling a material that is metallic or mirror-like, but fails for translucent materials, or materials where light is scattered beneath the surface (such as skin, milk, plants... ). Check this paper out to learn more.

Metropolis Light Transport

Not all rays are created equal. Some light rays contribute more to the image than others, depending on what they reflect off of or pass through on the route to the eye. Ideally, we'd like to trace the rays that have the largest effect on the image, and ignore the others. The problem is: how do you know which rays contribute most? Metropolis light transport solves this problem by randomly searching for "good" rays. Once those rays are found, they are mutated to produce others that are similar in the hope that they will also be good. The approach uses statistical sampling techniques to make this work. Here's some information on it, and a neat picture.

Photon Mapping

Photon mapping is a powerful variation of ray tracing that adds speed, accuracy, and versatility. It's a two-pass method: in the first pass, photon maps are created by emitting packets of energy (photons) from the light sources and storing these as they hit surfaces within the scene. The scene is then rendered using a distribution ray tracing algorithm optimized by the information in the photon maps. It produces some amazing pictures. Here's some information on it.

Also, if you want to implement photon mapping, we suggest you look at the SIGGRAPH 2004 course 20 notes (accessible from any UW machine or off-campus through the UW library proxy server).