Bells and Whistles
You are encouraged to use the books in the lab and on reserve; the Foley, et al. book,
your Alan Watt text, and Glassner's book are all useful. For credit, you must be able to
demonstrate the functionality of any extras you've implemented. Because it can be
difficult to determine the specific effect of a particular added feature, you will need
to add controls to selectively enable and disable your extensions. It may also be necessary
to write new image files which will demonstrate your extension.
Implement antialiasing (Foley, et al. 15.10.4, Watt chapter 14, and Glassner 4.3 - 4.7).
Depending on the complexity of the method you implement, this could be worth varying amounts of credit.
Implement an adaptive termination criterion for tracing rays, based on ray contribution.
Control the adaptation threshold with a slider.
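A minimal sketch of the idea, with hypothetical names that are not from the skeleton: each recursive call carries the product of the reflectance/transmittance factors accumulated along the ray path, and recursion stops once that product drops below the slider-controlled threshold.

```cpp
#include <cassert>

// Hypothetical sketch: stop recursing when the ray's accumulated attenuation
// can no longer change the pixel noticeably.  'contribution' is the product
// of reflection/transmission coefficients along the path; 'threshold' would
// come from the UI slider.
bool shouldTerminate(double contribution, double threshold) {
    return contribution < threshold;
}

// How the factor would be propagated into a recursive call: the child ray's
// contribution is the parent's contribution scaled by the surface coefficient.
double childContribution(double parentContribution, double reflectance) {
    return parentContribution * reflectance;
}
```

A fixed recursion depth still acts as a backstop; the contribution test just cuts off deep bounces that would be invisible anyway.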
Modify your antialiasing to implement the first stage of distribution ray tracing
by jittering the sub-pixel samples. The noise introduced by jittering should be
evident when casting 1 ray per pixel.
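A minimal sketch of jittered (stratified) sub-pixel sampling, with hypothetical names: the pixel is divided into an n x n grid and each sample is displaced randomly within its own cell, so samples stay well-distributed but lose the regular-grid alignment that causes aliasing.

```cpp
#include <cassert>

// Hypothetical sketch of a jittered sub-pixel sample position.
struct Sample { double x, y; };

// (i, j) index the sub-pixel cell in an n x n grid; (rx, ry) are uniform
// random numbers in [0, 1) supplied by the caller.
Sample jitteredSample(int i, int j, int n, double rx, double ry) {
    Sample s;
    s.x = (i + rx) / n;   // stays inside [i/n, (i+1)/n)
    s.y = (j + ry) / n;   // stays inside [j/n, (j+1)/n)
    return s;
}
```

With rx = ry = 0.5 this degenerates to the regular grid; real jittering draws fresh random offsets per sample, which is exactly what makes the 1-ray-per-pixel noise visible.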
Implement spot lights.
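One common spotlight model, sketched with hypothetical names: light falls only within a cone around the spotlight axis, with an exponent controlling how sharply intensity falls off toward the edge.

```cpp
#include <cassert>
#include <cmath>

// Hypothetical spotlight falloff sketch.  'cosAngle' is the dot product of
// the normalized spotlight axis and the normalized vector from the light to
// the surface point; 'cosCutoff' = cos(cone half-angle); 'exponent' sharpens
// the edge of the cone.
double spotAttenuation(double cosAngle, double cosCutoff, double exponent) {
    if (cosAngle < cosCutoff) return 0.0;   // outside the cone: no light
    return std::pow(cosAngle, exponent);    // Phong-style angular falloff
}
```

This factor multiplies the usual distance attenuation; distance falloff and angular falloff are independent.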
Improve your refraction code to allow rays to refract correctly through objects
that are contained inside other objects. You must put together a .ray file to
demonstrate this effect.
Add code for triangle intersections. Some of this is already done for you in
trimesh.cpp, and doing this can add a lot of functionality to your program. You need to
implement this in order to view any of the scenes in the scenes\polymesh directory.
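A standard ray/triangle algorithm is Moller-Trumbore; the skeleton's trimesh.cpp may structure things differently, so treat this as an illustrative sketch rather than the required interface.

```cpp
#include <cassert>
#include <cmath>

// Minimal Moller-Trumbore ray/triangle test.  Returns true and the ray
// parameter t when orig + t*dir hits triangle (a, b, c).
struct Vec3 {
    double x, y, z;
    Vec3 operator-(const Vec3& o) const { return {x - o.x, y - o.y, z - o.z}; }
};
static Vec3 cross(const Vec3& u, const Vec3& v) {
    return {u.y*v.z - u.z*v.y, u.z*v.x - u.x*v.z, u.x*v.y - u.y*v.x};
}
static double dot(const Vec3& u, const Vec3& v) {
    return u.x*v.x + u.y*v.y + u.z*v.z;
}

bool intersectTriangle(const Vec3& orig, const Vec3& dir,
                       const Vec3& a, const Vec3& b, const Vec3& c,
                       double& t) {
    const double EPS = 1e-9;
    Vec3 e1 = b - a, e2 = c - a;
    Vec3 p = cross(dir, e2);
    double det = dot(e1, p);
    if (std::fabs(det) < EPS) return false;   // ray parallel to triangle
    double inv = 1.0 / det;
    Vec3 s = orig - a;
    double u = dot(s, p) * inv;               // first barycentric coordinate
    if (u < 0.0 || u > 1.0) return false;
    Vec3 q = cross(s, e1);
    double v = dot(dir, q) * inv;             // second barycentric coordinate
    if (v < 0.0 || u + v > 1.0) return false;
    t = dot(e2, q) * inv;
    return t > EPS;                           // hit must be in front of origin
}
```

The barycentric coordinates (u, v) computed along the way are exactly what you need later for interpolating per-vertex normals and texture coordinates on trimeshes.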
Add a menu option that lets you specify a background image to replace the
environment's ambient color during the rendering. That is, any ray that goes off
into infinity behind the scene should return a color from the loaded image,
instead of just black. The background should appear as the backplane of the
rendered image, with suitable reflections and refractions of it. This is also
called environment mapping.
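A minimal sketch of the lookup step, assuming a latitude-longitude background image (the function name and parameterization are illustrative): a ray direction that misses all geometry is converted to (u, v) coordinates, which then index the loaded image.

```cpp
#include <cassert>
#include <cmath>

// Hypothetical spherical environment-map lookup: map a normalized ray
// direction onto (u, v) in [0,1]^2 using longitude (atan2) and latitude
// (asin).  The caller would then sample the background image at (u, v).
struct UV { double u, v; };

UV directionToUV(double dx, double dy, double dz) {
    const double PI = 3.14159265358979323846;
    UV uv;
    uv.u = 0.5 + std::atan2(dz, dx) / (2.0 * PI);  // longitude around y axis
    uv.v = 0.5 + std::asin(dy) / PI;               // latitude from equator
    return uv;
}
```

Because reflected and refracted rays go through the same miss path, the environment automatically shows up in mirrors and glass once this replaces the black background.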
Find a good way to accelerate shadow attenuation. Do you need to check against
every object when casting the shadow ray? This one is hard to demonstrate
directly, so be prepared to explain in detail how you pulled it off.
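One cheap observation, sketched here with hypothetical types: a shadow ray only needs a boolean answer ("is anything between this point and the light?"), so unlike a primary ray it can stop at the first intersection instead of searching for the closest one.

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Hypothetical early-exit shadow query over a list of spheres.
struct Sphere { double cx, cy, cz, r; };

// Standard ray-sphere test; true if the hit parameter lies in (0, maxT).
// The direction (dx, dy, dz) is assumed normalized.
static bool hitsBefore(const Sphere& s, double ox, double oy, double oz,
                       double dx, double dy, double dz, double maxT) {
    double lx = s.cx - ox, ly = s.cy - oy, lz = s.cz - oz;
    double b = lx*dx + ly*dy + lz*dz;
    double c = lx*lx + ly*ly + lz*lz - s.r*s.r;
    double disc = b*b - c;
    if (disc < 0) return false;
    double t = b - std::sqrt(disc);
    if (t <= 0) t = b + std::sqrt(disc);
    return t > 0 && t < maxT;
}

bool inShadow(const std::vector<Sphere>& objects,
              double ox, double oy, double oz,
              double dx, double dy, double dz, double distToLight) {
    for (const Sphere& s : objects)
        if (hitsBefore(s, ox, oy, oz, dx, dy, dz, distToLight))
            return true;   // early exit: any single occluder suffices
    return false;
}
```

Combining this early-out with a spatial structure (so most objects are never tested at all) is where the real speedup comes from.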
Deal with overlapping
objects intelligently. While the skeleton code handles materials
with arbitrary indices of refraction, it assumes that objects don't intersect
one another. It breaks down when objects intersect or are wholly contained
inside other objects. Add support to the refraction code for detecting this
and handling it in a more realistic fashion. Note, however, that in the
real world, objects can't coexist in the same place at the same time. You will
have to make assumptions as to how to choose the index of refraction in the
overlapping space. Make those assumptions clear when demonstrating the
results.
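One common bookkeeping scheme for nested media, sketched with hypothetical names: keep a stack of refractive indices, pushing on entry and popping on exit. As the text says, how to pick an index where two objects genuinely overlap is a modeling assumption; here the most recently entered object simply wins.

```cpp
#include <cassert>
#include <vector>

// Hypothetical medium stack for nested refraction.  The "ambient" index
// (air, 1.0) sits at the bottom and is never popped.
struct MediumStack {
    std::vector<double> stack{1.0};                 // start in air
    double current() const { return stack.back(); }
    void enter(double index) { stack.push_back(index); }
    void exit() { if (stack.size() > 1) stack.pop_back(); }
};
```

At each refraction event, Snell's law uses current() as the incident index and the entered (or revealed) index as the transmitted one.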
Implement antialiasing by adaptive supersampling, as described in
Foley, et al., 15.10.4. For full credit, you must show some sort of
visualization of the sampling pattern that results. For example, you
could create another image where each pixel is given an intensity proportional
to the number of rays used to calculate the color of the corresponding pixel
in the ray traced image. Implementing this bell/whistle is a big win --
nice antialiasing at low cost.
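The recursive structure can be sketched on a scalar image function (a stand-in for tracing a ray and taking its luminance); names and the corner-comparison criterion are illustrative, not Foley's exact formulation. The ray counter is precisely what the required sampling-pattern visualization would display.

```cpp
#include <cassert>
#include <cmath>
#include <algorithm>
#include <functional>

// Hypothetical adaptive supersampling sketch: sample the four corners of a
// region; if they disagree by more than 'tol', split into four sub-regions
// and recurse, otherwise average.  'count' tracks how many rays were cast.
double adaptiveSample(const std::function<double(double, double)>& f,
                      double x0, double y0, double x1, double y1,
                      double tol, int depth, int& count) {
    double c00 = f(x0, y0), c10 = f(x1, y0), c01 = f(x0, y1), c11 = f(x1, y1);
    count += 4;
    double lo = std::min(std::min(c00, c10), std::min(c01, c11));
    double hi = std::max(std::max(c00, c10), std::max(c01, c11));
    if (depth == 0 || hi - lo <= tol)
        return 0.25 * (c00 + c10 + c01 + c11);      // corners agree: done
    double mx = 0.5 * (x0 + x1), my = 0.5 * (y0 + y1);
    return 0.25 * (adaptiveSample(f, x0, y0, mx, my, tol, depth - 1, count)
                 + adaptiveSample(f, mx, y0, x1, my, tol, depth - 1, count)
                 + adaptiveSample(f, x0, my, mx, y1, tol, depth - 1, count)
                 + adaptiveSample(f, mx, my, x1, y1, tol, depth - 1, count));
}
```

A production version would cache corner samples shared between sibling regions instead of re-tracing them, which this sketch omits for clarity.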
Implement more versatile lighting controls, such as the Warn model
described in Foley 16.1.5. This allows you to do things like control the shape
of the projected light.
Add texture mapping support to the program. To get full credit for this, you must add texture mapping support
for all the built-in primitives (sphere, box, cylinder, cone) except trimeshes. The square object is already done
for you.
The most basic kind of
texture mapping is to apply the map to the diffuse color of a surface. But
many other parameters can be mapped. Reflected color can be mapped to create
the sense of a surrounding environment. Transparency can be mapped to create
holes in objects.
Additional (variable) extra credit will be given for such
additional mappings. The basis for this bell is built into the skeleton,
and the parser already handles the types of mapping mentioned above.
Additional credit will be awarded for quality implementation of texture
mapping on general trimeshes.
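Whatever parameter you map, the core operation is the same: convert a surface hit to (u, v) and sample the image there. A minimal sketch of the sampling half, with hypothetical names and a grayscale image for simplicity:

```cpp
#include <cassert>
#include <vector>
#include <algorithm>

// Hypothetical bilinear texture lookup: given (u, v) in [0,1]^2, sample a
// w x h grayscale image (row-major, values in [0,1]) with bilinear filtering
// so the texture doesn't look blocky up close.
double sampleBilinear(const std::vector<double>& img, int w, int h,
                      double u, double v) {
    double x = u * (w - 1), y = v * (h - 1);
    int x0 = (int)x, y0 = (int)y;
    int x1 = std::min(x0 + 1, w - 1), y1 = std::min(y0 + 1, h - 1);
    double fx = x - x0, fy = y - y0;        // fractional position in the cell
    double a = img[y0*w + x0] * (1 - fx) + img[y0*w + x1] * fx;
    double b = img[y1*w + x0] * (1 - fx) + img[y1*w + x1] * fx;
    return a * (1 - fy) + b * fy;
}
```

The per-primitive work is then just deriving (u, v): longitude/latitude for the sphere, unrolling for the cylinder and cone, face-by-face for the box.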
Implement bump mapping (Watt 8.4; Foley, et al. 16.3.3).
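The essential trick, sketched with hypothetical names: tilt the shading normal by the height map's partial derivatives along the surface tangent directions, without moving any geometry.

```cpp
#include <cassert>
#include <cmath>

// Hypothetical bump-mapping sketch.  n is the geometric normal, t and b are
// the surface tangent and bitangent, and (du, dv) are the height map's
// partial derivatives at the hit point.  Only the shading normal changes.
struct V3 { double x, y, z; };

V3 bumpNormal(const V3& n, const V3& t, const V3& b, double du, double dv) {
    V3 p = { n.x - du*t.x - dv*b.x,
             n.y - du*t.y - dv*b.y,
             n.z - du*t.z - dv*b.z };
    double len = std::sqrt(p.x*p.x + p.y*p.y + p.z*p.z);
    return { p.x / len, p.y / len, p.z / len };   // renormalize
}
```

Because only shading is affected, silhouettes stay smooth; that is the visible difference between bump mapping and true displacement.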
Implement solid textures or some other form of procedural texture
mapping, as described in Foley, et al., 20.1.2 and 20.8.3. Solid textures
are a way to easily generate a semi-random texture like wood grain or marble;
Ken Perlin's noise function is the classic starting point for realistic-looking marble.
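The classic marble recipe is color = sin(x + A * turbulence(p)), where turbulence sums octaves of Perlin noise. A sketch of the structure follows; note the noise function here is a cheap deterministic placeholder, NOT real Perlin noise, which you would substitute in.

```cpp
#include <cassert>
#include <cmath>

// Placeholder smooth pseudo-random signal in [0,1].  NOT Perlin noise --
// swap in a real gradient-noise implementation for actual marble.
double noise(double x, double y, double z) {
    return 0.5 * (std::sin(12.9898*x + 78.233*y + 37.719*z) + 1.0);
}

// Turbulence: sum octaves of noise at doubling frequency, halving amplitude.
double turbulence(double x, double y, double z, int octaves) {
    double sum = 0.0, freq = 1.0, amp = 1.0;
    for (int i = 0; i < octaves; ++i) {
        sum += amp * noise(x*freq, y*freq, z*freq);
        freq *= 2.0;
        amp *= 0.5;
    }
    return sum;
}

// Marble intensity in [0,1]: regular bands along x, perturbed by turbulence.
double marble(double x, double y, double z) {
    return 0.5 * (std::sin(x + 4.0 * turbulence(x, y, z, 4)) + 1.0);
}
```

Because the texture is a function of the 3D hit point, it carves consistently through the object with no seams and no (u, v) parameterization needed, which is the whole appeal of solid textures.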
Add some new types of geometry to the ray tracer. Consider implementing
tori or general quadrics. Many other objects are possible here.
Add support for height-fields.
Implement distribution ray tracing to produce one or more of the following
effects: depth of field, soft shadows, motion blur, or glossy reflection
(See Watt 10.6, Glassner, chapter 5, or Foley, et al., 16.12.4).
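Taking depth of field as the example effect, a minimal sketch (names and camera setup are hypothetical): jitter the ray origin over a lens disk, but aim every jittered ray at the same point on the focal plane. Objects on the focal plane stay sharp; everything else blurs in proportion to the aperture.

```cpp
#include <cassert>
#include <cmath>

// Hypothetical depth-of-field ray: camera at the origin looking along the
// normalized pixel direction (dirX, dirY, dirZ); (lensU, lensV) is a random
// point on the unit disk supplied by the caller.
struct Ray { double ox, oy, oz, dx, dy, dz; };

Ray dofRay(double focalDist, double aperture, double lensU, double lensV,
           double dirX, double dirY, double dirZ) {
    double fx = dirX * focalDist;   // point this pixel focuses on
    double fy = dirY * focalDist;
    double fz = dirZ * focalDist;
    Ray r;
    r.ox = lensU * aperture;        // jittered origin on the lens
    r.oy = lensV * aperture;
    r.oz = 0.0;
    double vx = fx - r.ox, vy = fy - r.oy, vz = fz - r.oz;
    double len = std::sqrt(vx*vx + vy*vy + vz*vz);
    r.dx = vx / len; r.dy = vy / len; r.dz = vz / len;
    return r;
}
```

Soft shadows, motion blur, and glossy reflection all follow the same pattern: distribute a quantity (light-source position, time, reflection direction) over many rays per pixel and average.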
Add some higher-level
geometry to the ray tracer, such as surfaces of revolution, extrusions,
metaballs, swept surfaces, or blend surfaces. You may have implemented one or more of
these as a polygonal object in the modeler project. For the Raytracer,
be sure you are actually raytracing the surface as a mathematical construct,
not just creating a polygonal representation of the object and tracing that.
Yes, this requires lots of complicated math, but the final results are
definitely worth it. For an additional bell, add texture mapping to your higher-level
geometry. The texture mapping must look good in order to get credit for it!
Implement ray-intersection optimization by either significantly
extending the BSP Tree implemented in the skeleton or by implementing a
different optimization method, such as hierarchical bounding volumes (See
Glassner 6.4 and 6.5, Foley, et al., 15.10.2).
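The workhorse of hierarchical bounding volumes is the ray/axis-aligned-box "slab" test: if a ray misses a node's box, every object inside that node is skipped. A minimal sketch (the caller precomputes the reciprocal direction once per ray):

```cpp
#include <cassert>
#include <algorithm>

// Slab test against an axis-aligned box [bmin, bmax].  'invDir' holds the
// componentwise reciprocals of the ray direction, precomputed per ray.
bool hitAABB(const double bmin[3], const double bmax[3],
             const double orig[3], const double invDir[3]) {
    double tmin = -1e30, tmax = 1e30;
    for (int a = 0; a < 3; ++a) {
        double t0 = (bmin[a] - orig[a]) * invDir[a];
        double t1 = (bmax[a] - orig[a]) * invDir[a];
        if (t0 > t1) std::swap(t0, t1);
        tmin = std::max(tmin, t0);      // latest entry across all slabs
        tmax = std::min(tmax, t1);      // earliest exit across all slabs
        if (tmax < tmin) return false;  // slab intervals don't overlap: miss
    }
    return tmax > 0;                    // box not entirely behind the ray
}
```

Building the tree is the other half of the work: group nearby objects, compute each group's box, and recurse, so intersection cost drops from linear in the number of objects to roughly logarithmic.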
Implement 3D fractals and extend the .ray file format to provide support for these objects. Note that
you are not allowed to "fake" this by just drawing a plain old 2D fractal image, such as the usual Mandelbrot Set.
Similarly, you are not allowed to cheat by making a .ray file that arranges objects in a fractal pattern, like the sier.ray test
file. You must raytrace an actual 3D fractal, and your extension to the .ray file format must allow you to control the resulting
object in some interesting way, such as choosing different fractal algorithms or modifying the base pattern used to produce
the fractal.
Students in previous quarters have produced some really good examples of raytraced 3D fractals.
Implement 4D quaternion fractals and extend the .ray file format to provide support for these objects.
These types of fractals are generated by using a generalization of complex numbers called quaternions. What makes the fractal really interesting is that it is actually a 4D object. This is a problem because we can only perceive three spatial dimensions, not four. In order to render a 3D image on the computer screen, one must "slice" the 4D object with a three dimensional hyperplane. Then the points plotted on the screen are all the points that are in the intersection of the hyperplane and the fractal. Your extension to the .ray file format must allow you to control the resulting object in some interesting way, such as choosing
different generating equations, changing the slicing plane, or modifying the surface attributes of the fractal.
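The core membership test, sketched with hypothetical names and an illustrative escape radius: iterate q <- q^2 + c in quaternion arithmetic and call the point inside if the iteration stays bounded. The renderer then intersects that 4D set with the chosen 3D slice.

```cpp
#include <cassert>

// Quaternion q = w + xi + yj + zk.
struct Quat { double w, x, y, z; };

// q * q, using (w + v)^2 = w^2 - |v|^2 + 2wv for quaternion w + vector part v.
static Quat qsquare(const Quat& q) {
    return { q.w*q.w - q.x*q.x - q.y*q.y - q.z*q.z,
             2*q.w*q.x, 2*q.w*q.y, 2*q.w*q.z };
}

// True if the orbit of q under q <- q^2 + c stays within the escape radius
// for maxIter steps (an approximation of set membership).
bool inJuliaSet(Quat q, const Quat& c, int maxIter, double escape2) {
    for (int i = 0; i < maxIter; ++i) {
        Quat s = qsquare(q);
        q = { s.w + c.w, s.x + c.x, s.y + c.y, s.z + c.z };
        double n2 = q.w*q.w + q.x*q.x + q.y*q.y + q.z*q.z;
        if (n2 > escape2) return false;   // escaped: outside the set
    }
    return true;                          // bounded: inside (approximately)
}
```

Rendering typically ray-marches toward the set using a distance estimator rather than calling this test densely, but the iteration above is the heart of either approach, and the constant c is a natural thing for your .ray extension to expose.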
For reference, the POV-Ray raytracer has quaternion fractals built in.
To get started, brush up on your quaternion math, then study the theory behind
these fractals. Once you have a high-level overview of what's going on, you are ready to
take the plunge and study how quaternion fractals are implemented in practice;
POV-Ray's implementation is a useful reference.
Implement a more realistic shading model. Credit will vary depending on
the sophistication of the model. A simple model factors in the Fresnel term to
compute the amount of light reflected and transmitted at a perfect dielectric
(e.g., glass). A more complex model incorporates the notion of a microfacet
distribution to broaden the specular highlight. Accounting for the color
dependence in the Fresnel term permits a more metallic appearance. Even
better, include anisotropic reflections for a plane with parallel grains or a
sphere with grains that follow the lines of latitude or longitude. Sources:
Watt, Chapter 7, Foley et al, Section 16.7; Glassner, Chapter 4, Section 4;
Ward's SIGGRAPH '92 paper; Schlick's Eurographics Rendering Workshop '93
paper.
This all sounds kind of complex, and the physics behind it is. But the
coding doesn't have to be. It can be worthwhile to look up one of these
alternate models, since they do a much better job at surface shading. Be
sure to demo the results in a way that makes the value added clear.
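The "simple model" above is small in code. Schlick's approximation (from his '93 workshop paper) to the Fresnel reflectance is R(theta) = R0 + (1 - R0)(1 - cos theta)^5, where R0 is the reflectance at normal incidence:

```cpp
#include <cassert>
#include <cmath>

// Schlick's approximation to the Fresnel reflectance at a dielectric
// boundary.  n1 and n2 are the indices of refraction on either side;
// cosTheta is the cosine of the angle between the normal and the ray.
double schlickFresnel(double cosTheta, double n1, double n2) {
    double r0 = (n1 - n2) / (n1 + n2);
    r0 *= r0;                              // reflectance at normal incidence
    double m = 1.0 - cosTheta;
    return r0 + (1.0 - r0) * m * m * m * m * m;
}
```

The returned fraction weights the reflected contribution, and (1 - R) weights the transmitted one; for air-to-glass this gives the familiar 4% reflectance head-on rising to 100% at grazing angles.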
Theoretically, you could also invent new shading models. For
instance, you could implement a less realistic model! Could you
implement a shading model that produces something that looks like cel
animation? Variable extra credit will be given for these "alternate"
shading models. Comic-book rendering is one idea.
Note that you must still implement the Phong model.
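One way to get a cel-animation look, sketched with hypothetical names: quantize the diffuse term N.L into a few flat bands instead of letting it vary smoothly.

```cpp
#include <cassert>
#include <cmath>

// Hypothetical cel-shading sketch: clamp N.L to [0,1], floor it into
// 'bands' discrete levels, and map the level back to an intensity in [0,1].
double celShade(double nDotL, int bands) {
    if (nDotL < 0.0) nDotL = 0.0;
    int level = (int)(nDotL * bands);
    if (level >= bands) level = bands - 1;
    return (double)level / (bands - 1);
}
```

Pairing the banded shading with dark silhouette outlines (e.g., darkening pixels where the normal is nearly perpendicular to the view direction) completes the comic-book effect.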
Implement CSG, constructive solid geometry. This extension allows you
to create very interesting models. See page 108 of Glassner for some
implementation suggestions. An excellent
example of CSG was built by a grad student here in the grad graphics
course.
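The usual CSG strategy is to intersect the ray with each child solid, obtaining [enter, exit] intervals along the ray, and then combine the intervals with set operations. A sketch of the single-interval cases (names hypothetical; a full implementation handles lists of intervals per solid):

```cpp
#include <cassert>
#include <algorithm>

// An [enter, exit] span of ray parameters where the ray is inside a solid.
struct Span {
    double t0, t1;
    bool empty() const { return t1 <= t0; }
};

// CSG intersection of two solids along one ray: overlap of their spans.
Span intersectSpan(const Span& a, const Span& b) {
    return { std::max(a.t0, b.t0), std::min(a.t1, b.t1) };
}

// CSG difference a - b along one ray: the part of 'a' in front of 'b'.
// (The true difference can produce up to two pieces; this keeps the front one.)
Span differenceFront(const Span& a, const Span& b) {
    return { a.t0, std::min(a.t1, std::max(a.t0, b.t0)) };
}
```

The visible surface is the start of the first non-empty combined span, and union follows the same pattern with the interval bounds merged instead of clipped.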
Add a particle systems simulation and renderer (Foley 20.5, Watt 17.7,
or see instructor for more pointers).
Implement caustics. Caustics are variations in light intensity caused by
refractive focusing--everything from simple magnifying-glass points to the
shifting patterns on the bottom of a swimming pool.
There are innumerable ways to extend a ray tracer. Think about all the
visual phenomena in the real world. The look and shape of cloth. The texture
of hair. The look of frost on a window. Dappled sunlight seen through the
leaves of a tree. Fire. Rain. The look of things underwater. Prisms. Do you
have an idea of how to simulate one of these phenomena? Better yet, how can you fake
it but get something that looks just as good? You are encouraged to dream up
other features you'd like to add to the base ray tracer. Obviously, any such
extensions will receive variable extra credit depending on merit (that is,
coolness!). Feel free to discuss ideas with the course staff before (and
while) proceeding!
Disclaimer: please consult the course staff before spending any serious
time on these. These are all quite difficult (I would say monstrous)
and may qualify as impossible to finish in the given time. But they're cool.

Sub-Surface Scattering
The trace program assigns colors to pixels by simulating a ray of light
that travels, hits a surface, and then leaves the surface at the same
position. This is good when it comes to modeling a material that is
metallic or mirror-like, but fails for translucent materials, or materials
where light is scattered beneath the surface (such as skin, milk, plants... ).

Metropolis Light Transport
Not all rays are created equal. Some light rays contribute more to
the image than others, depending on what they reflect off of or pass through
on the route to the eye. Ideally, we'd like to trace the rays that have
the largest effect on the image, and ignore the others. The problem
is: how do you know which rays contribute most? Metropolis light
transport solves this problem by randomly searching for "good"
rays. Once those rays are found, they are mutated to produce others that
are similar in the hope that they will also be good. The approach uses
statistical sampling techniques to make this work.

Photon Mapping
Photon mapping is a powerful variation of ray tracing that adds speed,
accuracy and versatility. It's a two-pass method: in the first
pass photon maps are created by emitting packets of energy (photons) from the
light sources and storing these as they hit surfaces within the scene.
The scene is then rendered using a distribution ray tracing algorithm
optimized by using the information in the photon maps. It produces some
amazing pictures.
Also, if you want to implement photon mapping, we suggest you look at the
SIGGRAPH 2001 course 38 notes. The TAs can point you to a copy, if you
are interested.