CSE 557: Computer Graphics

Spring Quarter 1999



Project 4: Final Project

Date Assigned: 26 May 1999
Email Due: 28 May 1999
Date Due: 7 June 1999

Project Description

Free at last! You get to define your own final project. It should be ambitious enough to demonstrate a significant result, but not so hard that you aren't confident about shipping a reasonably complete product. The project ideas below provide a good target.

The final projects can be done in teams, but your contributions must be distinct - you must choose separate projects (e.g., from the list below). You can certainly start from the same codebase, and you can collaborate on an artifact. If, for example, you both extend the same renderer, you can hopefully combine your extensions to produce a single rendering. Or one team member might extend his or her subdivision modeler to edit new types of surfaces while the other extends the ray tracer to support the same type of primitive. If you would like to work in a team on a larger project or an extension of one of the open-ended projects described below, please pitch your idea in your project email.

A brief (but detailed) project proposal is due on the 28th of May, emailed to your TA by TGIF (4:30).

Project Ideas

(Note that some of the pictures don't correspond to the algorithm suggested, though they all depict the idea described.)

Geometric modeling

Direct manipulation of subdivision surfaces. Let the user grab any point on the surface and move it to where they want it. What should the rest of the surface do? Since the idea of direct manipulation is to simulate direct physical interaction, the surface should probably move as though tugged. Any selected point on the surface can be expressed as some affine combination of control points. Given these weights you can use a heuristic to move the control points so that the surface point moves to match the user's control.
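
One simple heuristic, sketched below, is the least-norm update: if the grabbed point is p = w1*c1 + ... + wn*cn, move each control point in proportion to its weight so the surface point lands exactly where the user dragged it. This is an illustrative sketch, not a prescribed solution; the weights themselves are assumed to come from elsewhere (see the next paragraph).

    // Sketch of a least-norm heuristic for direct manipulation.
    // Assumes you already have weights w[i] such that the grabbed
    // surface point is p = sum_i w[i] * c[i] (e.g., via Stam's
    // evaluation, described below). All names are illustrative.
    #include <cstddef>
    #include <vector>

    struct Vec3 { double x, y, z; };
    Vec3 operator+(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
    Vec3 operator*(double s, Vec3 v) { return {s * v.x, s * v.y, s * v.z}; }

    // Move the control points so the surface point moves by exactly dp,
    // while minimizing the total squared control-point motion.
    void dragSurfacePoint(std::vector<Vec3>& controls,
                          const std::vector<double>& w, Vec3 dp) {
        double wsq = 0.0;
        for (double wi : w) wsq += wi * wi;
        if (wsq == 0.0) return;
        // dc_i = (w_i / sum_j w_j^2) * dp, so sum_i w_i * dc_i = dp.
        for (std::size_t i = 0; i < controls.size(); ++i)
            controls[i] = controls[i] + (w[i] / wsq) * dp;
    }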

Jos Stam's SIGGRAPH 98 paper describes how to find control point weights that generate any point on a Catmull-Clark surface. In a paper included in the CD-ROM Proceedings, he reproduces this result for Loop surfaces. Alternatively, you could avoid implementing Stam's algorithms by storing, with each point on the surface, a set of weights over its neighbors in the mesh one level up.

Direct manipulation doesn't make too much sense in "edit lo res" mode because the goal is really to be able to edit the surface without thinking about the control mesh. Therefore it would be nice to resubdivide only the portion of the subdivision surface changed by an edit, so that you can achieve interactive speeds with a decent level of subdivision.

View-dependent adaptive level of detail for subdivision surfaces. Automatically adjust the amount of subdivision for different areas of the model using some view-dependent measure of error, and a tolerance (or target mesh size). For your error measure you could fit a plane to a vertex neighborhood and compute the distance from the vertex to that plane. Then project this vector to the screen to find the "screen space" error. Areas on the silhouette will be more highly subdivided. If any vertex exceeds the tolerance, subdivide up to the user-specified limit. More credit is available if you take apparent errors in shading into account (see, e.g., Hoppe's SIGGRAPH 97 paper on view-dependent progressive meshes).
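
As a rough illustration of this error measure (the plane fit and the perspective scaling below are assumptions about one reasonable implementation, not a prescribed design):

    // Sketch: distance from a vertex to a plane fitted through its
    // one-ring neighbors, scaled by perspective into approximate pixels.
    #include <cmath>

    struct Vec3 { double x, y, z; };
    static double dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

    // 'normal' is the unit normal of the fitted plane (e.g., the
    // neighbors' average normal); 'centroid' is the neighbors' mean;
    // 'distToEye' is the vertex depth; 'focalPixels' converts world
    // size at unit depth into pixels.
    double screenSpaceError(Vec3 v, Vec3 centroid, Vec3 normal,
                            double distToEye, double focalPixels) {
        Vec3 offset = {v.x - centroid.x, v.y - centroid.y, v.z - centroid.z};
        double geomError = std::fabs(dot(normal, offset));
        // A world-space offset e at depth d spans about focal * e / d pixels.
        return focalPixels * geomError / distToEye;
    }
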
Fair surfaces. Extend your subdivision editor to create fair surfaces that interpolate the points in the base mesh. You may have noticed that the Butterfly interpolating scheme is a little squirrely. For all except the simplest input meshes it shows ringing and bumpiness. A different approach to designing surfaces is to minimize some functional (e.g. the area integral of the variation in curvature) which penalizes excessive bumpiness. In his SIGGRAPH 95 paper Taubin describes a technique that combines subdivision with a smoothing filter to create fair surfaces interpolating selected points on the base mesh.
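
The core of Taubin's filter is easy to state: alternate a shrinking Laplacian smoothing step (factor lambda > 0) with an inflating step (factor mu < -lambda). Here is a minimal sketch, assuming the mesh is stored as vertex positions plus one-ring neighbor lists; the constants are typical choices, not prescribed values.

    #include <cstddef>
    #include <vector>

    struct Vec3 { double x = 0, y = 0, z = 0; };

    // One Laplacian smoothing pass: move each vertex a fraction
    // 'factor' of the way toward the average of its neighbors.
    void laplacianStep(std::vector<Vec3>& p,
                       const std::vector<std::vector<int>>& ring,
                       double factor) {
        std::vector<Vec3> q = p;  // read old positions, write new ones
        for (std::size_t i = 0; i < p.size(); ++i) {
            if (ring[i].empty()) continue;
            Vec3 avg;
            for (int j : ring[i]) { avg.x += q[j].x; avg.y += q[j].y; avg.z += q[j].z; }
            double n = double(ring[i].size());
            p[i].x += factor * (avg.x / n - q[i].x);
            p[i].y += factor * (avg.y / n - q[i].y);
            p[i].z += factor * (avg.z / n - q[i].z);
        }
    }

    // Alternate shrink (lambda) and inflate (mu) so the surface
    // smooths out without shrinking away.
    void taubinSmooth(std::vector<Vec3>& p,
                      const std::vector<std::vector<int>>& ring,
                      int iterations, double lambda = 0.33, double mu = -0.34) {
        for (int k = 0; k < iterations; ++k) {
            laplacianStep(p, ring, lambda);
            laplacianStep(p, ring, mu);
        }
    }
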
Editing other types of surfaces. Your subdivision modeler would be a reasonable starting point for an editor for other types of surfaces, like the tensor product B-splines we've discussed in class or general swept surfaces (where a closed curve--the profile--is swept along another curve--the path--to generate a surface). Your editor should allow the user to create a surface of reasonable complexity and modify it using appropriate controls.
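
For the swept-surface option, evaluating a point on the surface amounts to placing the 2D profile in a coordinate frame that travels along the path. A sketch under simplifying assumptions (the frame is built from a fixed up vector, which degenerates where the path tangent is parallel to it; a real editor would want a sturdier frame):

    #include <cmath>
    #include <functional>

    struct Vec3 { double x, y, z; };
    struct Point2 { double x, y; };
    static Vec3 add(Vec3 a, Vec3 b) { return {a.x+b.x, a.y+b.y, a.z+b.z}; }
    static Vec3 sub(Vec3 a, Vec3 b) { return {a.x-b.x, a.y-b.y, a.z-b.z}; }
    static Vec3 mul(double s, Vec3 v) { return {s*v.x, s*v.y, s*v.z}; }
    static Vec3 cross(Vec3 a, Vec3 b) {
        return {a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x};
    }
    static Vec3 norm(Vec3 v) {
        double l = std::sqrt(v.x*v.x + v.y*v.y + v.z*v.z);
        return mul(1.0 / l, v);
    }

    // Point on the swept surface: the profile point profile(v), placed
    // in the frame riding along path(u).
    Vec3 sweptSurface(const std::function<Vec3(double)>& path,
                      const std::function<Point2(double)>& profile,
                      double u, double v) {
        const double h = 1e-4;                         // finite difference
        Vec3 t = norm(sub(path(u + h), path(u - h)));  // path tangent
        Vec3 b = norm(cross(t, Vec3{0, 1, 0}));        // frame x axis
        Vec3 n = cross(b, t);                          // frame y axis
        Point2 p = profile(v);
        return add(path(u), add(mul(p.x, b), mul(p.y, n)));
    }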

Rendering

Distribution ray tracing. Implement semi-diffuse reflections and refractions by distributing the secondary rays emanating from each surface according to a bidirectional reflectance distribution function (BRDF) of your own choosing. Allow slider control over one or more parameters of the BRDF. Stop ray recursion if the weight for a ray's color drops below a slider-selectable threshold. For extra fun (and credit), implement distribution ray tracing of an area light source to simulate penumbrae, or of a finite aperture to simulate depth of field, or of a finite shutter interval to simulate motion blur.
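
For concreteness, here is one way to distribute a secondary ray over a Phong-style glossy lobe centered on the mirror direction. The exponent n would be your slider parameter; the names are illustrative, and this is one BRDF choice among many.

    #include <algorithm>
    #include <cmath>

    struct Vec3 { double x, y, z; };
    static Vec3 add(Vec3 a, Vec3 b) { return {a.x+b.x, a.y+b.y, a.z+b.z}; }
    static Vec3 mul(double s, Vec3 v) { return {s*v.x, s*v.y, s*v.z}; }
    static Vec3 cross(Vec3 a, Vec3 b) {
        return {a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x};
    }
    static Vec3 norm(Vec3 v) {
        double l = std::sqrt(v.x*v.x + v.y*v.y + v.z*v.z);
        return mul(1.0 / l, v);
    }

    // Sample a direction around the mirror direction r with density
    // proportional to cos(theta)^n; u1, u2 are uniform in [0,1).
    Vec3 samplePhongLobe(Vec3 r, double n, double u1, double u2) {
        const double PI = 3.14159265358979323846;
        double cosT = std::pow(u1, 1.0 / (n + 1.0));
        double sinT = std::sqrt(std::max(0.0, 1.0 - cosT * cosT));
        double phi = 2.0 * PI * u2;
        // Orthonormal basis (a, b, r) around the lobe axis.
        Vec3 any = std::fabs(r.x) < 0.5 ? Vec3{1, 0, 0} : Vec3{0, 1, 0};
        Vec3 a = norm(cross(any, r));
        Vec3 b = cross(r, a);
        return add(mul(sinT * std::cos(phi), a),
               add(mul(sinT * std::sin(phi), b), mul(cosT, r)));
    }
    // In the recursion: if a ray's accumulated color weight drops below
    // the slider threshold, return immediately instead of spawning more.
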
Texture mapping. Add texture mapping to your ray tracer's spheres, triangle meshes, and planar quadrilaterals (which you can represent as two triangles). Textures may be 2D or 3D, and may be procedurally generated, optically scanned, or borrowed from a friend.

To map a texture onto a surface, you must compute texture indices at each ray-surface intersection. Methods for spheres, quadrilaterals, and triangles are described by Eric Haines in sections 2.5 and 3.3 of chapter 2 in Glassner. For a triangle mesh, the methods listed above can be combined with a global mapping scheme based on vertex coordinates relative to a corner of the mesh or on angles relative to a projection point not located on the mesh.
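
For example, the standard latitude/longitude indices at a ray-sphere intersection look roughly like this (a sketch; pole and seam handling is left out):

    #include <cmath>

    struct Vec3 { double x, y, z; };

    // p: hit point; c: sphere center; returns u, v in [0,1].
    void sphereUV(Vec3 p, Vec3 c, double radius, double& u, double& v) {
        const double PI = 3.14159265358979323846;
        double nx = (p.x - c.x) / radius;
        double ny = (p.y - c.y) / radius;
        double nz = (p.z - c.z) / radius;
        u = 0.5 + std::atan2(nz, nx) / (2.0 * PI);  // longitude
        v = 0.5 - std::asin(ny) / PI;               // latitude
    }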

Use your texture to modulate something besides just reflectance (i.e., color). Experiment with transparency, specularity, or orientation (i.e., bump mapping). Alternatively, try modulating an interpolation between two other textures or between two entirely different shading models. See Cook's SIGGRAPH '84 paper on shade trees and Perlin's SIGGRAPH '85 paper on texture synthesis (FvDFH, p. 1047) for ideas.
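
The shade-tree idea can be as simple as letting a scalar texture lookup interpolate between two fully evaluated shading results; a tiny sketch (names illustrative):

    struct Color { double r, g, b; };

    // t is the texture value in [0,1]; 'matte' and 'shiny' are the two
    // shading models' results at the same intersection point.
    Color blendShade(double t, Color matte, Color shiny) {
        return { (1 - t) * matte.r + t * shiny.r,
                 (1 - t) * matte.g + t * shiny.g,
                 (1 - t) * matte.b + t * shiny.b };
    }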

Ray-tracing geometric primitives. Implement a class of more complicated primitives (e.g., from Hanrahan's chapter in Glassner's book). Choose wisely. Quadrics are too easy; deformed surfaces are too hard. Recommended are general swept surfaces, bicubic patches, CSG models, or fractals. Fractals are relatively easy to implement and fun to use. For extra fun, map textures onto your patches or fractals. Other primitives that can be used for terrific visual effect are particle systems (e.g., for fire and water) and L-systems (great for modeling plants).
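
CSG in a ray tracer, for instance, reduces to boolean operations on the intervals where the ray is inside each child. A sketch of the intersection case, assuming each primitive can report its sorted inside intervals:

    #include <algorithm>
    #include <cstddef>
    #include <vector>

    struct Span { double tin, tout; };  // one "inside" interval along the ray

    // The ray is inside A*B exactly where it is inside both children.
    // Union and difference follow the same sweep with different tests.
    std::vector<Span> csgIntersect(const std::vector<Span>& a,
                                   const std::vector<Span>& b) {
        std::vector<Span> out;
        std::size_t i = 0, j = 0;
        while (i < a.size() && j < b.size()) {
            double lo = std::max(a[i].tin, b[j].tin);
            double hi = std::min(a[i].tout, b[j].tout);
            if (lo < hi) out.push_back({lo, hi});
            (a[i].tout < b[j].tout) ? ++i : ++j;
        }
        return out;
    }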

Modeling reflectance. Implement a more realistic shading model. Credit will vary depending on the sophistication of the model. A simple model factors in the Fresnel term to compute the amount of light reflected and transmitted at a perfect dielectric (e.g., glass). A more complex model incorporates the notion of a microfacet distribution to broaden the specular highlight. Accounting for the color dependence in the Fresnel term permits a more metallic appearance. See FvDFH, Section 16.7, and Glassner, Chapter 4, for a discussion of these shading models.
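
Schlick's approximation is a common, cheap stand-in for the full Fresnel equations and is enough for the simple model; a sketch:

    #include <cmath>

    // n1, n2: indices of refraction on each side of the surface;
    // cosTheta: cosine of the angle between the incident direction
    // and the surface normal.
    double fresnelSchlick(double n1, double n2, double cosTheta) {
        double r0 = (n1 - n2) / (n1 + n2);
        r0 *= r0;  // reflectance at normal incidence
        return r0 + (1.0 - r0) * std::pow(1.0 - cosTheta, 5.0);
    }
    // The transmitted fraction is 1 minus this; evaluating it per color
    // channel with channel-dependent r0 gives a more metallic look.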

Even better, include anisotropic reflections for a plane with parallel grains or a sphere with grains that follow the lines of latitude or longitude. The grains might also follow lines of constant (u,v) texture coordinates, or may be indicated by a texture map. Two papers that describe possible models for anisotropic reflection are Ward's SIGGRAPH '92 paper and Schlick's Eurographics Rendering Workshop '93 paper.
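
For reference, the specular term of Ward's model, written in the surface's local frame. This transcription is our sketch of the published formula; check it against the paper before relying on it.

    #include <cmath>

    struct Vec3 { double x, y, z; };
    static Vec3 add(Vec3 a, Vec3 b) { return {a.x+b.x, a.y+b.y, a.z+b.z}; }
    static Vec3 mul(double s, Vec3 v) { return {s*v.x, s*v.y, s*v.z}; }
    static double dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
    static Vec3 norm(Vec3 v) { return mul(1.0 / std::sqrt(dot(v, v)), v); }

    // wi, wo: unit directions to the light and the eye; n: normal;
    // xg: unit grain direction in the tangent plane; yg = n x xg;
    // ax, ay: roughness along and across the grain; rhoS: specular albedo.
    double wardSpecular(Vec3 wi, Vec3 wo, Vec3 n, Vec3 xg, Vec3 yg,
                        double rhoS, double ax, double ay) {
        const double PI = 3.14159265358979323846;
        double ci = dot(wi, n), co = dot(wo, n);
        if (ci <= 0.0 || co <= 0.0) return 0.0;
        Vec3 h = norm(add(wi, wo));  // half vector
        double hx = dot(h, xg) / ax, hy = dot(h, yg) / ay, hn = dot(h, n);
        double e = std::exp(-(hx * hx + hy * hy) / (hn * hn));
        return rhoS * e / (4.0 * PI * ax * ay * std::sqrt(ci * co));
    }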

Other, more sophisticated shading models can incorporate the notion of subsurface scattering, such as light reflecting from human skin (Hanrahan and Krueger's SIGGRAPH '93 paper), and thin-film interference, which creates the rainbow patterns on CDs and some sunglasses (Gondek, Meyer, and Newman, SIGGRAPH '94).

Light ray tracing. Model caustics by tracing rays "backwards" from light sources and accumulating incident illumination as a texture on each surface. This one is harder, but we'll reward you suitably. Look at Heckbert's paper on bidirectional ray tracing (SIGGRAPH '90). As an alternative to storing illumination on surfaces, implement Veach's hybrid ray tracer, then try his variance reduction schemes (SIGGRAPH '95).
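
The accumulation step can be as simple as splatting each light ray's carried power into an illumination texture at the hit point's (u,v). A much-simplified sketch of that one step (Heckbert's adaptive radiosity textures are considerably more refined; all names here are illustrative):

    #include <algorithm>
    #include <cstddef>
    #include <vector>

    struct Color { double r, g, b; };

    struct IllumMap {
        int w, h;
        std::vector<Color> texel;  // accumulated incident light
        // u, v assumed in [0,1); a real version would filter the splat.
        void deposit(double u, double v, Color power) {
            int x = std::min(w - 1, int(u * w));
            int y = std::min(h - 1, int(v * h));
            Color& c = texel[std::size_t(y) * w + x];
            c.r += power.r; c.g += power.g; c.b += power.b;
        }
    };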

Volume rendering. Start by implementing spatially inhomogeneous atmospheric attenuation. Divide each ray into intervals. For each interval, interpolate between the ray's color and some constant fog color based on a procedurally computed opacity for that location in space. Experiment with opacity functions. Once you get this working, try defining a solid texture (probably procedurally) that gives color and opacity for each interval. See Perlin and Hoffert's and Lewis's SIGGRAPH '89 papers on solid texture synthesis and Kajiya and Kay's teddy bear paper (also SIGGRAPH '89) for ideas. For smoke and clouds, you should also cast a ray to the light source to capture (one bounce of) the scattering. If you want to make your volume renderer fast, use hierarchical spatial subdivision (e.g., an octree).
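
The basic march might look like the sketch below: composite the intervals back to front, blending toward the fog color by each interval's opacity ('opacityAt' stands in for your procedural opacity function):

    #include <functional>

    struct Color { double r, g, b; };
    struct Vec3 { double x, y, z; };
    static Vec3 at(Vec3 o, Vec3 d, double t) {
        return {o.x + t*d.x, o.y + t*d.y, o.z + t*d.z};
    }

    // rayColor: what the ray returned from beyond tMax; dt: step size.
    Color marchFog(Color rayColor, Color fogColor,
                   Vec3 origin, Vec3 dir, double tMax, double dt,
                   const std::function<double(Vec3)>& opacityAt) {
        Color c = rayColor;
        for (double t = tMax; t > 0.0; t -= dt) {
            double a = opacityAt(at(origin, dir, t));  // opacity in [0,1]
            c.r = (1 - a) * c.r + a * fogColor.r;
            c.g = (1 - a) * c.g + a * fogColor.g;
            c.b = (1 - a) * c.b + a * fogColor.b;
        }
        return c;
    }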

Hierarchical radiosity. This is actually not as hard as it sounds. To implement the Hanrahan hierarchical solver, the main ingredients are a simple polygonal scene, a routine to break a triangle or rectangle into a few smaller pieces, a patch-to-patch visibility solver (your ray tracer already does this), and a simple mechanism for traversing a hierarchy of polygons. The simplest way to visualize your results would be to write out a file in the obj format accepted by your subdivision editor, extended with an RGB triple for each point in addition to the XYZ values, and then extend your subdivision modeler to read and render the vertex colors. Alternatively, you can do a "final gather" ray-tracing pass to achieve smooth shading. For best results, combine the radiosity solution with a ray-tracing pass that incorporates specular highlights and textures.
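
The traversal mechanism is essentially one recursive routine. A sketch in the spirit of Hanrahan's refinement loop, with placeholder stand-ins ('estimateFF', 'subdivide') for your own form-factor estimate and splitting routine:

    #include <vector>

    struct Patch {
        double area = 0;
        std::vector<Patch*> kids;   // filled in by subdivide()
        std::vector<Patch*> links;  // patches that shine on this one
    };

    // Placeholders so the sketch compiles; substitute your own.
    double estimateFF(Patch*, Patch* q) { return 0.01 * q->area; }
    void subdivide(Patch*) { /* split into a few smaller pieces */ }

    // Link p and q at this level if their interaction is weak enough;
    // otherwise split the larger patch and recurse on its children.
    void refine(Patch* p, Patch* q, double eps, double minArea) {
        bool tiny = p->area < minArea && q->area < minArea;
        if (estimateFF(p, q) < eps || tiny) { p->links.push_back(q); return; }
        Patch* big = (p->area > q->area) ? p : q;
        if (big->kids.empty()) subdivide(big);
        if (big->kids.empty()) { p->links.push_back(q); return; }
        for (Patch* k : big->kids)
            (big == p) ? refine(k, q, eps, minArea) : refine(p, k, eps, minArea);
    }
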
Panorama ray tracer and viewer. Your ray tracer samples only a subset of the sphere of directions around the camera. Instead, you could send rays in all directions. Several representations exist for this sort of image, including spherical maps and cylindrical maps (if you don't need to look straight up). Perhaps the best is to simply use the six faces of a cube. After rendering such a panorama, you can reproject it for a particular look direction and focal length to create a normal image. Reprojection can be performed quickly enough to let the user interactively adjust the camera's orientation and focal length.
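
Reprojection through a cube panorama needs only a face lookup per view ray; a sketch (the face orientation conventions here are an arbitrary choice):

    #include <cmath>

    struct Vec3 { double x, y, z; };

    // Given a ray direction d, return the face index 0..5 for
    // (+x, -x, +y, -y, +z, -z) and face coordinates s, t in [-1,1].
    int cubeFace(Vec3 d, double& s, double& t) {
        double ax = std::fabs(d.x), ay = std::fabs(d.y), az = std::fabs(d.z);
        if (ax >= ay && ax >= az) { s = d.z / ax; t = d.y / ax; return d.x > 0 ? 0 : 1; }
        if (ay >= az)             { s = d.x / ay; t = d.z / ay; return d.y > 0 ? 2 : 3; }
        s = d.x / az; t = d.y / az; return d.z > 0 ? 4 : 5;
    }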

Other

Painting on surfaces. Create a paint program to paint on 3D surfaces a la Hanrahan and Haeberli (SIGGRAPH 90). You can fairly easily partition a subdivision surface into a set of regions with a map from each region to the unit square. (Map each base mesh triangle to a triangle in the unit square. Then simply split the texture coordinates to get coordinates for new vertices. You don't need to perform averaging on the texture coordinates.) Then the surface can be covered with a set of texture maps. Let the user adjust their view of the surface and then give them some normal 2D paint tool(s). Incorporate any edits they make into the surface's texture maps to give the effect of painting on the surface.
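
The coordinate bookkeeping during subdivision is the easy part, as the parenthetical above suggests; e.g., a new edge vertex just takes the midpoint of its endpoints' texture coordinates:

    struct UV { double u, v; };

    // Texture coordinates for the new vertex on edge (a, b): a plain
    // midpoint, with no smoothing pass applied in texture space.
    UV edgeVertexUV(UV a, UV b) {
        return { 0.5 * (a.u + b.u), 0.5 * (a.v + b.v) };
    }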

Demos

You should prepare in advance for the demo, since you'll be the expert, not us. If you choose a rendering project, you should have precomputed some images. If your project is interactive, you may not need precomputed images, but you'll want to have inputs ready that effectively demonstrate your work. In any case, please generate an artifact to show off your project. It can be a rendering or a model or a visualization of your results. Remember that two people who worked on different projects can team up to create an artifact.

Grading will be based not so much on the amount of work you have done, but on cleverness, creativity, and correctness.