Project 3: Animator


Assigned Friday, October 27
Due Thursday, November 9 by 11:59pm
Artifact Due Thursday, November 16 by 8:00am
Help Sessions
Dropbox
Project TA Edward Zhang
Artifact Turn-in
Artifact Winners

Overview


Description

Animator is a 3D modelling program that lets you design and edit 3D scenes, similar to tools such as Maya, 3DS Max, and Blender. In Animator, you can set up and arrange 3D elements in a scene, as well as determine their appearance based on the scene's lighting conditions. In addition, you will be able to animate your scene, that is, specify how the scene's contents move and change over time.

You'll be implementing many of the core elements of this program, spanning the areas of geometric modelling (specifying the basic 3D shapes of objects in the scene), geometry processing (editing or refining the 3D shapes of objects to achieve certain goals), hierarchical scene modelling (specifying the relative arrangement of objects in the scene), shading (specifying the appearance of the objects), keyframe animation (smoothly interpolating between scene configurations at different points in time), and physical simulation (animating scene elements based on physics instead of manually). There are seven requirements in total, described in the sections below.

This project/application might also be referred to in the text below as Modeler. In the undergraduate offerings of this class, this assignment is actually split into two projects, but for 557, Modeler and Animator are interchangeable terms for the same application.

Getting Started

Visit here for help checking out code.

Download and get familiar with the Sample Solution for a good idea of how everything works. See Program Usage for details.

If you'd like an overview of what shaders are, visit here.

Requirements


Implement the features described below. The skeleton code has comments marked with // REQUIREMENT denoting the locations at which you probably want to add some code.

Program Usage


The Hierarchy in the left pane represents all the objects in the current Scene. A child object inherits the transformations applied to the parent object, like in a scene graph. Parent-child relationships can be changed by simply dragging the objects around in the pane. You may also find creating Empty objects as parents to be useful in building your model. Double-click an object to change its name.

The Assets tab in the left pane represents things like Textures, Shaders, Materials, and Meshes used by the Scene. Together, the Assets and Scene Graph form a "Scene", and can be saved out or loaded from disk.

Selecting an asset or an object will display its properties on the right side in the Inspector. Here you can change its properties, assign different textures, materials, etc. For scene objects, you can also hide properties that should not be changed.

Modeler uses a component-based system for scene objects. Every object has a "Transform" component that represents its Translation, Rotation, and Scale in 3D space. Most rendered objects will have a "Geometry" that defines the shape of the object, and a "Mesh Renderer" that uses a "Material" asset to define how to render that shape. Lights will have an additional "Light" component that defines the light properties.

The Console at the bottom is mostly for debugging purposes; at any time in your code, you can call Debug::Log.WriteLine to print to this console. If you hide the Inspector or any of the other panels in the program, right-click on the tool-bar to show them again.

The Scene view in the middle is a rendering of your Scene Graph. You can change how it's rendered: as points, wireframe, or fully shaded. If you are having trouble with the orientation of the Perspective view, try switching to an Orthographic view.

Camera Controls:

Scene Controls:

Creating a new shader:

Skeleton Program


The Modeler codebase is quite substantial (not so much a skeleton this time around). It's a good idea to get an understanding of what's going on.

Modeler Arch

Modeler has two major components: the Engine and the UI. For the requirements, you will most likely only be concerned with the Engine unless you attempt a bell or whistle that goes above and beyond what is currently supported. Modeler loads one Scene at a time. Each Scene has an Asset Manager that handles loading all the Assets belonging to the Scene. It also owns all the Scene Objects in the scene, which are stored in a map using unique identifiers. A Scene Object contains a mixture of Components that define some behaviour. For instance, a Transform Component (which defines the Scene Object's transformations) plus a Point Light Component (which defines light properties) makes a Point Light. Components are built from Properties that are able to communicate responsively with the UI, and can be serialized into the file format. A Renderer takes a Scene and does a depth-first traversal of the Scene Objects that comprise the Scene Graph and renders each component that is renderable. It has its own Resource Manager that handles caching GPU versions of assets.

For a more in-depth explanation on the codebase, please visit this document.

Surface of Revolution


In OpenGL, all scenes are made of primitives like points, lines, and triangles. Most 3D objects are built from a mesh of triangles. In this project, you will implement a surface of revolution by creating a triangle mesh. For each mesh, you define a list of vertices (the points that make up the primitive) with normals and texture coordinates, and an array of indices specifying triangles. This is then later used by the OpenGL Renderer through the method glDrawElements. See opengl/glmesh.cpp.
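As a rough sketch of how such a mesh might be generated (the types and names here are illustrative only, not the skeleton's own mesh interface), revolving a 2D profile curve around the y axis could look like this:

#include <algorithm>
#include <cmath>
#include <cstdint>
#include <utility>
#include <vector>

// Illustrative sketch only: the skeleton's mesh interface and types will differ.
// Revolve a 2D profile curve (points in the x-y plane, x >= 0) around the y axis,
// producing parallel position/normal/UV arrays plus a triangle index list.
struct RevolvedMesh {
    std::vector<float>    positions;  // x, y, z per vertex
    std::vector<float>    normals;    // x, y, z per vertex, same order as positions
    std::vector<float>    uvs;        // u, v per vertex
    std::vector<uint32_t> indices;    // three entries per triangle
};

RevolvedMesh Revolve(const std::vector<std::pair<float, float>>& profile, int divisions) {
    RevolvedMesh mesh;
    const float kTwoPi = 6.28318530718f;
    const int rows = static_cast<int>(profile.size());

    for (int j = 0; j <= divisions; ++j) {  // duplicate the seam so UVs wrap cleanly
        float theta = kTwoPi * j / divisions;
        float c = std::cos(theta), s = std::sin(theta);
        for (int i = 0; i < rows; ++i) {
            float x = profile[i].first, y = profile[i].second;
            mesh.positions.insert(mesh.positions.end(), {x * c, y, -x * s});

            // Per-vertex normal: 2D normal of the profile tangent, rotated with the vertex.
            int i0 = (i > 0) ? i - 1 : i, i1 = (i < rows - 1) ? i + 1 : i;
            float tx = profile[i1].first - profile[i0].first;
            float ty = profile[i1].second - profile[i0].second;
            float len = std::max(std::sqrt(tx * tx + ty * ty), 1e-6f);
            float nx = ty / len, ny = -tx / len;
            mesh.normals.insert(mesh.normals.end(), {nx * c, ny, -nx * s});

            mesh.uvs.insert(mesh.uvs.end(), {static_cast<float>(j) / divisions,
                                             static_cast<float>(i) / (rows - 1)});
        }
    }

    for (int j = 0; j < divisions; ++j) {  // two triangles per quad of the point grid
        for (int i = 0; i < rows - 1; ++i) {
            uint32_t a = j * rows + i, b = a + 1, c2 = a + rows, d = c2 + 1;
            mesh.indices.insert(mesh.indices.end(), {a, c2, b, b, c2, d});
        }
    }
    return mesh;
}

The resulting arrays map directly onto the vertex, normal, UV, and index buffers that glDrawElements consumes.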

Surface Normals

Surface normals are perpendicular to the plane that's tangent to a surface at a given vertex. Surface normals are used for lighting calculations, because they help determine how light reflects off of a surface.

In OpenGL, we often want to approximate smooth shapes like spheres and cylinders using only triangles. One way to make the lighting look smooth is to use the normals from the shape we're trying to approximate, rather than just making them perpendicular to the polygons we draw. This means we calculate the normals for each vertex (per-vertex normals), rather than for each face (per-face normals). Normals are supplied to OpenGL in a giant array in the same order the vertex positions array is built. Shaders allow us to get even smoother lighting by calculating the normals at each pixel. You can compare these methods below:

Comparison images: per-face, per-vertex, and per-pixel shading.

Texture Mapping

Texture mapping allows you to "wrap" images around your model by mapping points on an image (called a texture) to vertices on your model. For each vertex, you indicate the texture coordinate it maps to as a 2D pair (U, V) or (S, T), where U or S is the X-coordinate and V or T is the Y-coordinate of the point on the texture that should line up with the vertex. UVs are passed as one large array, in the same order as the normal and vertex position arrays.
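For example, a minimal illustration with a single textured quad (flat arrays whose layout is illustrative only, not the skeleton's own mesh interface):

#include <vector>

// One (u, v) pair per vertex, stored in the same order as the positions, so that
// vertex i uses uvs[2 * i] and uvs[2 * i + 1].
std::vector<float>    positions = { 0, 0, 0,   1, 0, 0,   1, 1, 0,   0, 1, 0 };
std::vector<float>    uvs       = { 0, 0,      1, 0,      1, 1,      0, 1    };
std::vector<unsigned> indices   = { 0, 1, 2,   0, 2, 3 };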

Using Textures In Your Model

When you want to use a texture, you'll need to do the following:

  1. Import a texture into your scene

  2. Create a Shader Program that utilizes shaders that sample from textures

  3. Create a Material that uses that Shader Program, and set the textures

Hierarchical Modelling


Model Requirements

For the artifact, you will create a Hierarchical Model. Create a hierarchy of nodes, using a combination of Empty nodes and Shape nodes. In Animator, you will end up animating your Model (or a new one you create) by manipulating the set of transforms and other properties on all these nodes over time. Hide any properties you do not want exposed with the Inspector's "Edit Properties" on each node.

While it does not have to be a masterpiece, we do require that it has at least two levels of branching. What this means is that if you have a torso, and attach two arms to it, this is one level of branching. The torso splits into each arm. If each arm then has three fingers, that is two levels of branching, since the arm splits into fingers. Note that if you only have one finger, then that does not add additional branching!

Tree Diagram

Your tree diagram should illustrate the hierarchy of nodes that make up your model, and describe the transforms that each node has. The transforms should clearly indicate which values are variable (i.e. can be animated) and which are constant. These diagrams can be hand-drawn or created with a program (LaTeX, PowerPoint, or similar). With your initial diagram, also include some visual to help the TAs know what you're trying to model. This can be a rough sketch, a diagram, a real-world image with some lines and annotations on top, or if you're ahead, a screenshot of a rendering.

Here is a very barebones example of the tree diagram requirement. Consider the simplest arm model, consisting of two long boxes with a single joint in the center. Note that this example does not have the required amount of branching.

Simple Hierarchy
An A+ diagram for this model would look like this:
Simple Hierarchy Tree Diagram

You would get full correctness credit for having the same text and hierarchy structure as shown. The color-coding and layout, as well as the A1 and A2 metanodes, make this diagram easier to interpret, and thus push it above and beyond.

Model UI

When demoing your model, or when keyframing it for your animation, you don't want to be clicking down into different hierarchy nodes all the time in order to reach the right properties. You could instead have a master set of sliders that you can easily manipulate.

For example, in the simple hierarchy above, clicking down into the "A2 Container" node each time you want to change the joint angle is somewhat tedious (and will only get harder with more complex hierarchies). For this requirement, you will add some UI elements to control the variables in your hierarchical model transformations.

The easiest way to do this is to create an empty node at the top level of your hierarchy, which will serve as a container for your Hierarchy UI. Then, we can implement a "HierarchyUI" Component (see Skeleton), and add this component to the empty container. Here is a step-by-step overview of how to do this (pay special attention to steps 3 and 4):

  1. Create a new class in Engine/src/scene/components that inherits from Component.
  2. Add the new ComponentType to the enum in enum.h, the string mappings in component.cpp, and the includes in components.h.
  3. Add appropriate properties to the Component for each variable in your hierarchy. Refer to other components (e.g. Engine/src/scene/components/camera.h) for examples. The Editor will automatically create a UI element that controls each property, e.g. a slider for RangeProperty, text box for DoubleProperty, file selector for FileProperty, etc. Look at the property types in Engine/src/properties for the different types.
  4. Add a handler for the ValueChanged signal for each of your properties. This handler should update your hierarchy's transformations since it will be called every time the user changes the value of the property. You can use the SceneObject::FindDescendant function to help. This will probably require you to have passed in the scene root somewhere, e.g. in the constructor.
  5. Implement the rest of the Component interface by implementing the Component::GetType, Component::GetBaseType, and Component::DuplicateComponent functions. These should all be trivial.
    • You may also want to override the Component::Serialize function if you would like to be able to save the property values in your component.
    • If you wish to do this, you will also need to add your component type into the SceneObject::AddComponent(ComponentType) function.
  6. Find a way to actually add this component to a node in your scene. One way to do this is to have the user manually add the HierarchyUI to a node. To do this, create a QAction in MainWindow::CreateSceneObjectsActions that adds a component to the selected node (see the example for the QAction add_envmap_action), and then add the QAction to the appropriate menu in MainWindow::CreateMenus.

Using this HierarchyUI, you can also have more complex dependencies between node transformations. For example, if you want to use the same variable to control multiple joints symmetrically, you can do that in the OnValueChanged function.
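For concreteness, here is a minimal, hypothetical sketch of such a component using the symmetric-joints idea above. The helper calls (AddProperty, Connect, GetTransform, SetRotationZ) and all signatures are illustrative assumptions rather than the skeleton's real API; only the overall structure follows the steps above.

// Hypothetical sketch only: property, signal, and SceneObject calls are assumed, not
// copied from the skeleton. Check Engine/src/scene/components and Engine/src/properties
// for the real interfaces before writing yours.
class HierarchyUI : public Component {
public:
    explicit HierarchyUI(SceneObject* root) : root_(root) {
        // One slider controls both elbows symmetrically (step 3).
        elbow_angle_ = AddProperty<RangeProperty>("Elbow Angle", 0.0, -90.0, 90.0);
        // Handler runs every time the user changes the value (step 4).
        elbow_angle_->ValueChanged.Connect(this, &HierarchyUI::OnElbowChanged);
    }

    void OnElbowChanged(double angle) {
        // FindDescendant reaches into the predefined scene structure below root_.
        if (SceneObject* left = root_->FindDescendant("Left Forearm"))
            left->GetTransform()->SetRotationZ(angle);
        if (SceneObject* right = root_->FindDescendant("Right Forearm"))
            right->GetTransform()->SetRotationZ(-angle);  // mirrored rotation
    }

    // Step 5: the rest of the Component interface.
    ComponentType GetType() const override     { return ComponentType::HierarchyUI; }
    ComponentType GetBaseType() const override { return ComponentType::HierarchyUI; }
    Component* DuplicateComponent() override   { return new HierarchyUI(root_); }

private:
    SceneObject*   root_;
    RangeProperty* elbow_angle_;
};

The structural points to copy are that every user-facing variable is a Property on this one component, and that each ValueChanged handler pushes its value down into the hierarchy.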

If you want to implement some UI elements that don't classify as one of the existing properties (e.g. a button to execute an action), then you might have to implement a new Property type. Here are some places to look:

This HierarchyUI component is a very specialized component, since it is dependent on a predefined scene structure. Usually, components are self contained and function specifically on the node itself.

Blinn-Phong Shader


A shader is a program that controls the behavior of a piece of the graphics pipeline on your graphics card.

Shaders determine how the scene lighting affects the coloring of 3D surfaces. There are two basic kinds of lights:

  • Point lights, which emit light in all directions from a single position in the scene.
  • Directional lights, which emit light in a single direction, as if from a source infinitely far away.

A shading model determines how the final color of a surface is calculated from a scene's light sources and the object's material. See lecture notes for details on the Blinn-Phong shading model.
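For reference, one common formulation of the model (your lecture notes define the exact attenuation term and any additional factors) is

I = k_e + k_a·I_a + Σ_j A_j·I_j·[ k_d·max(N·L_j, 0) + k_s·max(N·H_j, 0)^n_s ],   with H_j = normalize(L_j + V),

where k_e, k_a, k_d, and k_s are the emissive, ambient, diffuse, and specular material coefficients, I_a and I_j are the ambient and per-light intensities, A_j is the attenuation for light j, N is the surface normal, L_j is the direction to light j, V is the direction to the viewer, and n_s is the shininess exponent.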

Shadow Mapping

Shadow mapping involves two steps: Constructing the shadow maps, and computing shadowing terms.

Our framework includes a prerender pass each frame, where we compute one shadow map for each point light in the scene. Specifically, the prerender pass traverses the hierarchy looking for nodes with EnvironmentMap components. An EnvironmentMap component indicates that we want to store some sort of cubemap around the node center. The prerender pass will then render the scene (minus the node itself and its children) 6 times, one for each face of the cube, so that every outgoing direction from the node center is covered. The faces of this cubemap are oriented according to the world coordinate system.

This EnvironmentMap component can be used for environment mapping as well as simple reflection and refraction effects. You can set the resolution of each face of the environment map, as well as the near plane and far plane that it renders from. Most importantly for shadow mapping, you can also set the Render Material for the cubemap. If the Render Material is set, then every object in the scene will be rendered to the cubemap using that set of shaders instead of their own default shader. For shadowmaps, you want to store not the actual lit appearance of the scene in each direction, but the distance of the nearest object from the cubemap center in each direction. You will need to write this depth shader.

The program assumes that any EnvironmentMaps for point light nodes are shadowmaps, and will automatically pass them into your Blinn-Phong shader under uniform samplerCube point_light_shadowmaps[4]. In your lighting code, you will then have to compute the distance to the point light and then compare it to the distance stored in the shadowmap; if the point you're lighting is farther away from the light than the value stored in the shadowmap, then your point is in shadow. Note that texture() lookups from cubemaps are simply done using a direction (in the cubemap's coordinate system, which we have enforced to match the world coordinate system). See this page for more details (ignore the discussion about gsamplerCubeShadow for this project).

To prevent a surface from shadowing itself due to precision errors, you should create a user-controllable shadow bias. The surface should be unshadowed if the distance stored in the shadowmap is nearly equal to (i.e. within this bias amount from) the distance from the shaded point to the light.
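The core test, written here as plain C++ for clarity (in practice this comparison lives in your fragment shader, and the shadowmap value is fetched with the world-space direction from the light to the shaded point), is just:

// Logic sketch only -- the real version runs per fragment in your shader.
bool InShadow(float dist_to_light, float stored_depth, float shadow_bias) {
    return dist_to_light > stored_depth + shadow_bias;
}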

To actually see some shadows, you will then have to perform the following steps:

  1. Create your depth-mapping fragment and vertex shaders
  2. Create a new shader (Assets > Create > Shader Program) and point it to your fragment and vertex shaders
  3. Create a new material (Assets > Create > Material) and point it to the new Shader Program
  4. Add a point light to the scene
  5. Select the point light's node, then click the SceneObject menu and select "Add EnvironmentMap to selected"
  6. With the point light selected, look at the Inspector, under which you should see the EnvironmentMap properties. Set these as appropriate, making sure to point the RenderMaterial to your newly created material.
  7. Ensure that your shadow receivers are using a material that has your shadow-mapping code in it (just using your blinn-phong material should work).

You can automate most of this (as is done in the reference solution), such that point lights are always created (Editor/src/mainwindow.cpp) with an EnvironmentMap component pointing to an automatically loaded (Engine/src/assets/assetmanager.h) DepthMap material.

Additional Shaders


These are additional shader ideas that you can create. You are required to create one or more additional shaders worth at least 3 whistles (or 1.5 bells) in total. Additional bells or whistles beyond that are extra credit.

You can use the sample solution Modeler to develop some of these shaders, but others require texture maps to be provided. We have provided shader_textured.frag and shader_textured.vert as reference for you on how to include texture data into your image.

See below for instructions on how to use these in your model.

Alpha Test Shader

Some geometry has complex silhouettes but flat, planar surfaces. Rather than creating numerous triangles, we can use an alpha threshold or color key to discard certain pixels on a texture. This technique is especially useful for foliage. For an extra whistle, make it cast alpha-tested shadows too.

Reflection Shader

Create a shader that simulates a reflection effect by determining a reflected ray direction and looking up that ray in an environment map. To implement this, note that any geometry node with an EnvironmentMap component will provide that cubemap to its shaders via uniform samplerCube environment_map. See the shadow mapping section above for more details on obtaining the environment map. Regarding physical accuracy, the same caveat about the distant scene assumption for refraction (see below) applies here.
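For reference, for an incident direction I pointing toward the surface with unit normal N, the reflected direction is R = I − 2(N·I)N; GLSL's built-in reflect(I, N) computes exactly this. Since the cubemap is enforced to match the world coordinate system, the lookup direction should be expressed in world space.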

Refraction Shader

Create a shader that simulates a refraction effect by determining a refracted ray direction and looking up that ray in an environment map. See the shadow mapping section above for more details on obtaining the environment map.

Note that there are two reasons why this isn't an accurate refraction shader. First, a true refracted ray should get refracted once upon entering the object and once upon leaving it, while here we can only refract the ray once. Second, the environment map is only an approximation (a so-called "distant scene approximation") of the incident light on the object. For example, a ray pointing in direction v at the top of the object will look up the same environment map value as a ray pointing in direction v at the bottom of the object, but the true incident light along those rays will only be equal if the surface in direction v is infinitely far away from the object center.

Spot Light Shader

Create a shader that supports a spot light source, and add a third light source to your Modeler. We should be able to adjust the spot light parameters via the UI.

Cartoon Shader

Create a shader that produces a cartoon effect by drawing a limited set of colors and a simple darkening of silhouettes for curved objects, based on the normal and viewing direction at a pixel. This per-pixel silhouette-darkening approach will work well in some cases around curved surfaces, but not all. Additional credit will be given based on how well the silhouettes are done, and how well the cartoon effect looks.

Schlick Shader

Create a shader, and sliders to control it, that uses the Schlick approximation to approximate the contribution of the Fresnel factor to the specular reflection of light.
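For reference, the Schlick approximation is F ≈ F_0 + (1 − F_0)(1 − cos θ)^5, where F_0 is the reflectance at normal incidence and θ is the angle between the relevant direction (view or light) and the normal or half vector, depending on the formulation you use; the sliders would typically control F_0 or the index of refraction it is derived from.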

Vertex Shader

Vertex shaders, instead of transformation matrices, can be used to morph and deform geometry in complex ways, and it can be really efficient since the calculations are run on the GPU. See here, here, and here for examples of interesting geometry deformations done with vertex shaders. And see here for an even more impressive example: the swimming animation is done entirely by manipulating vertex positions in the vertex shader. Add at least one slider that deforms geometry in a useful way by changing vertex positions (and normals, if relevant) within a vertex shader.

Tessellated Procedural Shader

Make a shader that produces an interesting, repeating pattern, such as a brick pattern, without using a texture.

Normal Mapping Shader

This shader uses a texture to perturb the surface normals of a surface to create the illusion of tiny bumps, without introducing additional geometry. Along with the normals, you'll also want your vertex shader to take in tangent and binormal vectors.

Diffraction Shader

Create a shader that produces a diffraction effect when you move around the object.

x2 Anisotropic Shader

Create a shader that produces anisotropic specular highlighting, creating a shiny metal appearance. Additionally, add sliders to control the magnitude in 2 perpendicular directions.

x3 Cloud / Noise Shader

Create a shader that uses noise functions (like Perlin noise) to generate clouds. You may not use textures for this shader. Credit depends on the realism of the clouds.

Using Shaders In Your Model

Shader files are loaded, compiled, and linked by ShaderProgram objects. If you want to add a shader:

  1. Go to Assets->Create->Shader Program to create a Shader Program.

  2. Find the new shader in the Assets pane and set the Vertex/Fragment shaders to point your shader files.

  3. Similarly create a new material and set it to use the Shader Program you created.

Tip: If you have an error in your shader code, you do not have to restart Modeler. Instead, fix your shader, then just set the Shader Program to point to the same shader file again.

Animator Widget Interface


After selecting a model parameter in the tree on the left, the parameter's corresponding animation curve is displayed in the graph. The skeleton evaluates each spline as a simple piecewise linear curve that linearly interpolates between control points. Note that when you select a curve, it scales to the max and min value you have set as control points to "fit" into the graph window. You can manipulate the curve as follows:

Command Action
LEFT MOUSE Selects and moves control points by clicking and dragging on them
SHIFT + LEFT MOUSE Selects multiple control points
DRAG LEFT MOUSE Rectangle-selects multiple points
DRAG RIGHT MOUSE Pans the graph
SCROLL Zooms the graph in X and Y dimensions
SHIFT + SCROLL Zooms only the X axis (time in frames)
SHIFT + SCROLL Zooms only the Y axis (parameter value)
F Fits the graph view to the current control points
K Creates a keyframe (control point) at the current time position
DELETE / FN + BACKSPACE Removes the selected control point

 

At the bottom of the window is a simple set of VCR-style controls and a time slider that lets you play, pause, and seek in your animation. Along the menu bar of the Animation window, you can change a few settings:

  • Curve Type: Chooses how to interpolate between your control points, or keyframes. The skeleton currently uses linear interpolation for each, which you will have to fix (see below: Skeleton Code for Curves).
  • Wrap Curve: Interpolates between the last and first control points in your graph so that your motion can loop smoothly. Linear curve wrapping is implemented for you; implementing curve-wrapping for the other curve types is a whistle of extra credit each.
  • Loop: Automatically loops playback of your animation when it reaches the end.
  • Keyframe: Creates a keyframe (control point) at the current time position.
  • Set Length: Sets the length in seconds. NOTE: If you decrease the length of your animation, any keyframes that were set past that time length are deleted.
  • Set FPS: Sets the frames per second for playback of your animation.

Skeleton Code for Curves


The AnimationWidget object owns a bunch of Curve objects. The Curve class is used to represent the time-varying splines associated with your object parameters.  You don't need to worry about most of the existing code, which is used to handle the user interface.  However, it is important that you understand the curve evaluation model. Each curve is represented by a vector of evaluated points, calculated from a vector of control points.

std::vector<ControlPoint*> control_points_;

The user of your program can manipulate the positions of the control points using the Animation Widget interface. Your code will compute the value of the curve at intervals in time, determining the shape of the curve. Given a set of control points, the system figures out what the evaluated points are based on the curve type.

This conversion process is handled by the CurveEvaluator member variable of each curve. Classes that inherit from CurveEvaluator contain an EvaluateCurve function; this is what you must implement for the required curve evaluators: Bezier, Catmull-Rom, and B-Spline.  C2-Interpolating curves can be added for extra credit. 

In the skeleton, only the LinearCurveEvaluator has been implemented, and each other evaluator currently acts as a linear curve. Consequently, the curve drawn is composed of line segments directly connecting each control point.  Use LinearCurveEvaluator as a model to implement the other required curve evaluators. 

CurveEvaluator::EvaluateCurve

This function returns a vector of the evaluated points and takes the following parameters:

ctrl_pts--a collection of control points that you specify in the curve editor
animation_length--the largest time, in seconds, for which a curve may be defined (i.e., the current "movie length")
wrap--a flag indicating whether or not the curve should be wrapped (wrapping can be implemented for extra credit)

For Bezier curves (and the splines based on them), it is sufficient to sample the curve at fixed intervals of time. The adaptive de Casteljau subdivision algorithm presented in class may be implemented for an extra bell.
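As a sketch of what fixed-interval sampling might look like (the skeleton's ControlPoint and evaluated-point types and the exact EvaluateCurve signature differ, Point2D is a stand-in for a (time, value) pair, the segment-grouping convention should follow your spec, and wrapping and clamping to the animation length are omitted):

#include <vector>

struct Point2D { float x, y; };  // hypothetical stand-in for a control / evaluated point

// Evaluate piecewise cubic Bezier segments by sampling each group of four control
// points at fixed parameter intervals (Bernstein form). Segments share endpoints here;
// leftover points that do not fill a full segment are connected linearly.
std::vector<Point2D> EvaluateBezier(const std::vector<Point2D>& ctrl, int samples_per_segment) {
    std::vector<Point2D> out;
    size_t i = 0;
    for (; i + 3 < ctrl.size(); i += 3) {
        for (int s = 0; s <= samples_per_segment; ++s) {
            float t = static_cast<float>(s) / samples_per_segment;
            float b0 = (1 - t) * (1 - t) * (1 - t);
            float b1 = 3 * t * (1 - t) * (1 - t);
            float b2 = 3 * t * t * (1 - t);
            float b3 = t * t * t;
            out.push_back({b0 * ctrl[i].x + b1 * ctrl[i + 1].x + b2 * ctrl[i + 2].x + b3 * ctrl[i + 3].x,
                           b0 * ctrl[i].y + b1 * ctrl[i + 1].y + b2 * ctrl[i + 2].y + b3 * ctrl[i + 3].y});
        }
    }
    // Trailing control points (fewer than a full segment) fall back to linear interpolation.
    for (size_t k = (out.empty() ? i : i + 1); k < ctrl.size(); ++k) out.push_back(ctrl[k]);
    return out;
}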

Catmull-Rom and B-spline curves should be endpoint interpolating. This can be done by doubling the endpoints for Catmull-Rom and tripling them for B-spline curves.

You do not have to sort the control points or the evaluated curve points. This has been done for you. Note, however, that for an interpolating curve (Catmull-Rom), the fact that the control points are given to you sorted by x does not ensure that the curve itself will also monotonically increase in x. You should recognize and handle this case appropriately.  One solution is to return only the evaluated points that are increasing monotonically in x.

Also, be aware that the evaluation function will linearly interpolate between the evaluated points to ensure a continuous curve on the screen.  This is why you don't have to generate infinitely many evaluated points.

Particle System Simulation


UI

The particle system as a whole, sphere colliders, and plane colliders have been added to the Animator UI as SceneObjects. You may add them to your scene the same way you add 3D Objects, and their properties will appear in the right-hand Inspector window. For particle systems, sliders are included for Period, controlling how often particles are emitted, and Restitution, the constant you should use in calculating collision force attenuation.

You'll also notice Sphere properties in the Inspector. The skeleton currently uses sphere primitives as the Geometry component for particles. If you'd like to change that, modify what component is added inside the Scene::CreateParticleSystem method.

Skeleton Code

The skeleton code has a very high-level framework in place for running particle simulations that is based on Witkin's Particle System Dynamics.  In this model, there are three major components:

  1. Particle objects (which have physical properties such as mass, position and velocity)
  2. Forces
  3. An engine for simulating the effect of the forces acting on the particles that solves for the position and velocity of each particle at every time step

You are responsible for coming up with a representation for particles and forces. You will be computing the forces acting on each particle and updating their positions and velocities based on these forces using Euler's method. Make sure you thus model particles and forces in some way that allows you to perform this update step at each frame.
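One minimal way to structure this (names and types are illustrative, not from the skeleton) is sketched below; collision response and any extra forces would be folded into the same loop before the position/velocity update.

#include <vector>

// Illustrative sketch of a particle representation and a forward Euler step.
struct Vec3 { float x = 0, y = 0, z = 0; };
Vec3 operator+(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
Vec3 operator*(Vec3 a, float s) { return {a.x * s, a.y * s, a.z * s}; }

struct Particle {
    float mass = 1.0f;
    Vec3 position, velocity, force;  // force is re-accumulated every step
};

void EulerStep(std::vector<Particle>& particles, float dt, Vec3 gravity) {
    for (Particle& p : particles) {
        p.force = gravity * p.mass;                 // accumulate forces (gravity, drag, springs, ...)
        Vec3 acceleration = p.force * (1.0f / p.mass);
        p.position = p.position + p.velocity * dt;  // forward Euler: advance state by dt
        p.velocity = p.velocity + acceleration * dt;
    }
}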

The skeleton provides a very basic outline of a simulation engine, encapsulated by the ParticleSystem class within scene/components.  The model_matrix_ field is provided for you to use in converting local coordinates to world space. The skeleton also already includes PlaneCollider and SphereCollider objects, but you will need to implement the collision detection code within the ParticleSystem class.

Note that the ParticleSystem declaration is by no means complete.  As mentioned above, you will have to figure out how you want to store and organize particles and forces, and as a result, you will need to add member variables and functions. If you want to provide further UI controls or more flexibility in doing some very ambitious particle systems, you can also search for how the interface is used and re-organize the code. Bells and whistles will be awarded for super cool particle systems, proportional to the effort expended.

Resources


Turn-in Information


Please follow the general instructions here. More details below:

Artifact Submission

You will eventually use your program to produce an animated artifact for this project (after the project due date – see the top of the page for artifact due date). Each group should turn in one artifact that they created together. We may give extra credit to those that are exceptionally clever or aesthetically pleasing. Try to use the ideas discussed in the John Lasseter article. These include anticipation, follow-through, squash and stretch, and secondary motion. 

Camera Set-up: To create your animation, you'll need to go to SceneObject->Create Camera to add a render camera to your scene. The view from this camera is what will be rendered out when you save movie frames. Set properties like the Resolution via the right-hand Inspector window, and look through the camera by changing the view from Perspective to your Scene Camera via the scene's menu bar. You can manipulate the position and angle of the camera like any object (and you can also key it in the Animation window!), but not by controlling the view while you are looking through the camera. You may instead find it useful to open a New View and then have both view windows open, one in Perspective and one in your Scene Camera.

Timing: Finally, plan for your animation to be 30 seconds long (60 seconds is the absolute maximum).  We recommend dividing this time into separate shots, and saving a .yaml file for each. The .yaml file contains the spline curves you use for each model property, so you'll want to keep an original scene file to build off of when creating your shots. Plan out your animation - you will find that 30 seconds is a very small amount of time.  We reserve the right to penalize artifacts that go over the time limit and/or clip the video for the purposes of voting.

Exporting: When you're ready, under the File menu of the program, there is a Save Movie Frames option, which will let you specify a base filename for a set of movie frames.  Each frame is saved as a .png, with your base filename plus some digits that indicate the frame number. Use a program like Adobe Premiere or Blender (installed in the labs) to compress the frames into a video file. Refer to this Blender guide for creating the final submission (an H.264 MP4 file). You can use different programs or play with the settings to produce different encodings for your own purposes, but the final submission must be an H.264 MP4.

  • Absolute time limit: 60 seconds...shorter is better!
  • Animation will count toward final grade on animator project.
  • The course staff will grade based on technical and artistic merit.

You must turn in a representative image (snapshot) of your model/scene and your completed artifact (as a video) using the artifact turn-in interface. See due dates/times at the top of this page.  Do not be late!

Important: Compile your executable in Release Mode! There is a noticeable increase in performance when compiling and creating your artifact in release mode rather than debug mode. Also, Save As often, and create multiple shots! There is no undo button.

Bells and Whistles


Bells and whistles are extra extensions that are not required, and will be worth extra credit. You are also encouraged to come up with your own extensions for the project. Run your ideas by the TAs or Instructor, and we'll let you know if you'll be awarded extra credit for them. If you do decide to do something out of the ordinary (that is not listed here), be sure to mention it in a readme.txt when you submit the project.

Shadowmapped shadows are often aliased. To make shadow edges smoother, interpolate shadow values between shadow map texels.

One way to reduce the "shadow acne" artifact is to render the backfaces of objects into the shadow map rather than the front faces. This will prevent front faces from self-shadowing (except in very thin objects). Implement this for a whistle.

Come up with another whistle and implement it. A whistle is something that extends the use of one of the things you are already doing. It is part of the basic model construction, but extended or cloned and modified in an interesting way. Ask your TAs to make sure this whistle is valid.

Extend your Loop Subdivision scheme to detect and handle (with appropriate weights) boundary edges on non-watertight meshes.

Add a UI to label a vertex as extraordinary or an edge as a crease edge, to be preserved while subdividing a mesh. Extend your subdivision scheme to handle these cases appropriately. Handling both edges and vertices will be worth one whistle plus one bell.

Render a flat mirror in your scene. As you may already know, OpenGL has no built-in reflection capabilities. To simulate a mirror, you'll want to reflect the world about the mirror's plane and then draw the reflected world, before doing the regular scene drawing pass. Use the stencil buffer to make sure that the reflected geometry is clipped inside the boundaries of the mirror. The stencil buffer is similar to a Z buffer and is used to restrict drawing to certain portions of the screen.  See Scott Schaefer's site for more information. In addition, the NeHe game development site has a detailed tutorial.

Build a complex shape as a set of polygonal faces, using triangles (either the provided primitive or straight OpenGL triangles) to render it. Examples of things that don't count as complex: a pentagon, a square, a circle. Examples of what does count: dodecahedron, 2D function plot (z = sin(x² + y)), etc. Note that using the dodecahedron primitive (or other primitives apart from triangles) does not meet this requirement.

Implement a smooth curve functionality. Examples of smooth curves are here. These curves are a great way to lead into swept surfaces (see below). Functional curves will need to be demonstrated in some way. One great example would be to draw some polynomial across a curve that you define. Students who implement swept surfaces will not be given a bell for smooth curves. That bell will be included in the swept surfaces bell. Smooth curves will be an important part of the animator project, so this will give you a leg up on that.

Implement one or more non-linear transformations applied to a triangle mesh. This entails creating at least one function that is applied across a mesh with specified parameters. For example, you could generate a triangulated sphere and apply a function to a sphere at a specified point that modifies the mesh based on the distance of each point from a given axis or origin. Credit varies depending on the complexity of the transformation(s) and/or whether you provide user controls (e.g., sliders) to modify parameters.

Heightfields are great ways to build complicated looking maps and terrains pretty easily. Implement a heightfield to generate terrain in an interesting way. You might try generating fractals, or loading a heightfield from an image (i.e., allowing the user to design the height of the terrain by painting the image in an image editor and importing it).

Add a lens flare. This effect has components in both screen space and world space. For full credit, your lens flare should have at least 5 flare "drops", and the transparency of the drops should change depending on how far the light source is from the center of the screen. You do not have to handle the case where the light source is occluded by other geometry (but this is worth an extra whistle).

x2 Implement shadow-mapping for directional lights. There are many steps to this, but you can refer to the point light shadowmapping code for help:
  1. Look for directional lights in the prerender pass (Scene::RenderPrepass()).
  2. Associate a GLRenderableTexture with each of your directional lights; the equivalent uses of GLRenderableCubemap for point light shadowmaps are in GLRenderer::RenderEnvMaps() and GLRenderer::SetUniforms()
  3. Use an orthographic projection of the scene along the direction of the light. This will involve deciding on an orientation of your orthographic projection, and determining the bounds of the scene so that the scene fits into your texture.
  4. Add the directional light shadowmap, as well as any variables defining the orientation, as built-in uniforms, and pass them into your shaders as appropriate (bottom of GLRenderer::SetUniforms).
  5. In your Blinn-Phong shader, look up the appropriate ray in the shadowmap to determine shadowing.
x2

Add a function in your model file for drawing a new type of primitive. The following examples will definitely garner two bells; if you come up with your own primitive, you will be awarded one or two bells based on its coolness. Here are three examples:

  • Swept surfaces (this is worth 3 bells) -- given two curves, sweep one profile curve along the path defined by the other. These are also known as "generalized cylinders" when the profile curve is closed. This isn't quite as simple as it may first sound, as it requires the profile curve to change its orientation as it sweeps over the path curve. See this page for some uses of generalized cylinders. This document may be helpful as well, or see the parametric surfaces lecture from a previous offering of this class. You would most likely want to use the same type of curve files as the surface of revolution does. An example would be sweeping a circle along a 2d curve to generate a paper clip.

x2

(Variable) Use some sort of procedural modeling (such as an L-system) to generate all or part of your character. Have parameters of the procedural modeler controllable by the user via control widgets. In a previous quarter, one group generated these awesome results.

x3

Implement projected textures.  Projected textures are used to simulate things like a slide projector, spotlight illumination, or casting shadows onto arbitrary geometry.  Check out this demo and read details of the effect at glBase, and SGI.

x3

Another way to implement real-time shadows is by creating extra geometry in the scene to represent the shadows, based on the silhouettes of objects with respect to light sources. This is called shadow volumes. Shadow volumes can be more accurate than shadow maps, though they can be more resource-intensive, as well. Implement shadow volumes for the objects in your scene. For an extra bell, make it so that shadows work correctly even when your camera is located within a shadow volume.

x3

One difficulty with hierarchical modeling using primitives is the difficulty of building "organic" shapes. It's difficult, for instance, to make a convincing looking human arm because you can't really show the bending of the skin and bulging of the muscle using cylinders and spheres. There has, however, been success in building organic shapes using metaballs. Implement your hierarchical model and "skin" it with metaballs. Hint: look up "marching cubes" and "marching tetrahedra" -- these are two commonly used algorithms for volume rendering. For an additional bell, the placement of the metaballs should depend on some sort of interactively controllable hierarchy. Try out a demo application.

Metaball Demos: These demos show the use of metaballs within the modeler framework. The first demo allows you to play around with three metaballs just to see how they interact with one another. The second demo shows an application of metaballs to create a twisting snake-like tube. Both these demos were created using the metaball implementation from a past CSE 457 student's project.

Demo 1: Basic Texture Mapped Metaballs
Demo 2: Cool Metaball Snake

Enhance the required spline options. Some of these will require alterations to the user interface, which involves learning Qt and the UI framework.  If you want to access mouse events in the graph window, look at the CurvesPlot function in the GraphWidget class.  Also, look at the Curve class to see what control point manipulation functions are already provided.  These could be helpful, and will likely give you a better understanding of how to modify or extend your program's behavior.  A maximum of 3 whistles will be given out in this category.

Let the user control the tension of the Catmull-Rom spline.

Implement one of the standard subdivision curves (e.g., Lane-Riesenfeld or Dyn-Levin-Gregory).

Add options to the user interface to enforce C1 or C2 continuity between adjacent Bezier curve segments automatically. (It should also be possible to override this feature in cases where you don't want this type of continuity.)

Add the ability to add a new control point to any curve type without changing the curve at all.

The linear curve code provided in the skeleton can be "wrapped," which means that the curve has C0 continuity between the end of the animation and the beginning. As a result, looping the animation does not result in abrupt jumps. You will be given a whistle for each (nonlinear) curve that you wrap.

Modify your particle system so that the particles' velocities get initialized with the velocity of the hierarchy component from which they are emitted. The particles may still have their own inherent initial velocity. For example, if your model is a helicopter with a cannon launching packages out of it, each package's velocity will need to be initialized to the sum of the helicopter's velocity and the velocity imparted by the cannon.

Particles rendered as points or spheres may not look that realistic.  You can achieve more spectacular effects with a simple technique called billboarding.  A billboarded quad (aka "sprite") is a textured square that always faces the camera.  See the sprites demo.  For full credit, you should load a texture with transparency (sample textures), and turn on alpha blending (see this tutorial for more information).  Hint:  When rotating your particles to face the camera, it's helpful to know the camera's up and right vectors in world-coordinates.

Use the billboarded quads you implemented above to render the following effects.  Each of these effects is worth one whistle provided you have put in a whistle worth of effort making the effect look good.

Fire (example) (You'll probably want to use additive blending for your particles -glBlendFunc(GL_SRC_ALPHA,GL_ONE);)

Snow (example)

Water fountain (example)

Fireworks (example)

Add baking to your particle system.  For simulations that are expensive to process, some systems allow you to cache the results of a simulation.  This is called "baking."  After simulating once, the cached simulation can then be played back without having to recompute the particle properties at each time step.  See this page for more information on how to implement particle baking.    

Implement a motion blur effect (example).  The easy way to implement motion blur is using an accumulation buffer - however, consumer grade graphics cards do not implement an accumulation buffer.  You'll need to simulate an accumulation buffer by rendering individual frames to a texture, then combining those textures.  See this tutorial for an example of rendering to a texture.

Euler's method is a very simple technique for solving the system of differential equations that defines particle motion.  However, more powerful methods can be used to get better, more accurate results.  Implement your simulation engine using a higher-order method such as the Runge-Kutta technique.  ( Numerical Recipes, Sections 16.0, 16.1) has a description of Runge-Kutta and pseudo-code.
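For instance, the midpoint method (a second-order Runge-Kutta scheme) evaluates the derivatives twice per step. A one-dimensional sketch, with illustrative names (extend componentwise to 3D and to your own force accumulation):

// Midpoint method for one particle in 1D; Force() stands in for your force computation.
struct State1D { float x, v; };

State1D MidpointStep(State1D s, float mass, float dt, float (*Force)(State1D)) {
    // Half step using the current derivatives...
    State1D mid { s.x + s.v * (dt / 2), s.v + Force(s) / mass * (dt / 2) };
    // ...then take the full step using derivatives evaluated at the midpoint.
    return { s.x + mid.v * dt, s.v + Force(mid) / mass * dt };
}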

Implement adaptive Bezier curve generation: Use a recursive, divide-and-conquer, de Casteljau algorithm to produce Bézier curves, rather than just sampling them at some arbitrary interval. You are required to provide some way to change the flatness parameter and maximum recursion depth, with a keystroke or mouse click.  In addition, you should have some way of showing (a debug print statement is fine) the number of points generated for a curve to demonstrate your adaptive algorithm at work. 

To get an extra whistle, provide visual controls in the UI (i.e. sliders) to modify the flatness parameter and maximum recursion depth, and also display the number of points generated for each curve in the UI.

Extend the particle system to handle springs. For example, a pony tail can be simulated with a simple spring system where one spring endpoint is attached to the character's head, while the others are floating in space.  In the case of springs, the force acting on the particle is calculated at every step, and it depends on the distance between the two endpoints.  For one more bell, implement spring-based cloth.  For 2 more bells, implement spring-based fur.  The fur must respond to collisions with other geometry and interact with at least two forces like wind and gravity.

Allow for particles to bounce off each other by detecting collisions when updating their positions and velocities.  Although it is difficult to make this very robust, your system should behave reasonably.

Implement the "Hitchcock Effect" described in class, where the camera zooms in on an object, whilst at the same time pulling away from it (the effect can also be reversed--zoom out and pull in). The transformation should fix one plane in the scene--show this plane. Make sure that the effect is dramatic--adding an interesting background will help, otherwise it can be really difficult to tell if it's being done correctly.

Implement a "general" subdivision curve, so the user can specify an arbitrary averaging mask  You will receive still more credit if you can generate, display, and apply the evaluation masks as well.  There's a site at Caltech with a few interesting applets that may be useful.

Perform collision detection with more complicated shapes. For complex scenes, you can even use the accelerated ray tracer and ray casting to determine if a collision is going to occur. Credit will vary with the complexity of the shapes and the sophistication of the scheme used for collision detection.

If you find something you don't like about the interface, or something you think you could do better, change it!  Any really good changes will be incorporated into the next Animator.  Credit varies with the quality of the improvement.

x2

If you implemented billboarded quads with transparent textures, you may notice issues with the alpha blending. OpenGL renders objects out in a certain order and for proper alpha blending, you must render out all opaque objects first before you draw any objects that have transparency, which should then be drawn from farthest to closest to the camera. Do this with your billboards for the additional two bells.

x2

Currently, the code makes a draw call for each particle in the system, which is really expensive. A more performant way of doing this is "instanced rendering": rendering a single mesh many times with a single draw call. Draw particle systems using instanced rendering in glrenderer.cpp to see much better performance.

x2

Add flocking behaviors to your particles to simulate creatures moving in flocks, herds, or schools.  A convincing way of doing this is called "boids"  (see here for a short flocking guide made by 457 staff, and here for a demo and for more information).  For full credit, use a model for your creatures that makes it easy to see their direction and orientation (as a minimal example, you could show this with colored pyramids, oriented towards the direction in which the creatures are pointing).  For up to one more bell, make a realistic creature model and have it move realistically according to its motion path.  For example, a bird model would flap its wings to gain speed and rise in the air, and hold its wings outstretched when turning or gliding.

x2

Implement a C2-Interpolating curve.  You'll need to add it to the drop-down selection. See this handout.

x2

Add the ability to edit Catmull-Rom curves using the two "inner" Bezier control points as "handles" on the interpolated "outer" Catmull-Rom control points. After the user tugs on handles, the curve may no longer be Catmull-Rom.  In other words, the user is really drawing a C1 continuous curve that starts off with the Catmull-Rom choice for the inner Bezier points, but can then be edited by selecting and editing the handles.  The user should be allowed to drag the interpolated point in a manner that causes the inner Bezier points to be dragged along.  See PowerPoint and Illustrator pencil-drawn curves for an example.

x3

An alternative way to do animations is to transform an already existing animation by way of motion warping (animations). Extend the animator to support this type of motion editing.

x4

Incorporate rigid-body simulations into your program, so that you can correctly simulate collisions and response between rigid objects in your scene.  You should be able to specify a set of objects in your model to be included in the simulation, and the user should have the ability to enable and disable the simulation either using the existing "Simulate" button, or with a new button.   

Monster Bells


Disclaimer: please consult the course staff before spending any serious time on these. They are quite difficult, and credit can vary depending on the quality of your method and implementation.

Inverse kinematics

The hierarchical model that you created is controlled by forward kinematics; that is, the positions of the parts vary as a function of joint angles. More mathematically stated, the positions of the joints are computed as a function of the degrees of freedom (these DOFs are most often rotations). The problem of inverse kinematics is to determine the DOFs of a model to satisfy a set of positional constraints, subject to the DOF constraints of the model (a knee on a human model, for instance, should not bend backwards).

This is a significantly harder problem than forward kinematics. Aside from the complicated math involved, many inverse kinematics problems do not have unique solutions. Imagine a human model, with the feet constrained to the ground. Now we wish to place the hand, say, about five feet off the ground. We need to figure out the value of every joint angle in the body to achieve the desired pose. Clearly, there are an infinite number of solutions. Which one is "best"?

Now imagine that we wish to place the hand 15 feet off the ground. It's fairly unlikely that a realistic human model can do this with its feet still planted on the ground. But inverse kinematics must provide a good solution anyway. How is a good solution defined?

Your solver should be fully general and not rely on your specific model (although you can assume that the degrees of freedom are all rotational). Additionally, you should modify your user interface to allow interactive control of your model through the inverse kinematics solver. The solver should run quickly enough to respond to mouse movement.

If you're interested in implementing this, you will probably want to consult the CSE558 lecture notes.

View-dependent adaptive polygon meshes

The primitives that you are using in your model are all built from simple two dimensional polygons. That's how most everything is handled in the OpenGL graphics world. Everything ends up getting reduced to triangles.

Building a highly detailed polygonal model often requires millions of triangles. This can be a huge burden on the graphics hardware. One approach to alleviating this problem is to draw the model using varying levels of detail. In the modeler application, this can be done by specifying the quality (poor, low, medium, high). This unfortunately is a fairly hacky solution to a more general problem.

First, implement a method for controlling the level of detail of an arbitrary polygonal model. You will probably want to devise some way of representing the model in a file. Ideally, you should not need to load the entire file into memory if you're drawing a low-detail representation.

Now the question arises: how much detail do we need to make a visually nice image? This depends on a lot of factors. Farther objects can be drawn with fewer polygons, since they're smaller on screen. See Hugues Hoppe's work on View-dependent refinement of progressive meshes for some cool demos of this. Implement this or a similar method, making sure that your user interface supplies enough information to demonstrate the benefits of using your method. There are many other criteria to consider that you may want to use, such as lighting and shading (dark objects require less detail than light ones; objects with matte finishes require less detail than shiny objects).

Hierarchical models from polygon meshes

Many 3D models come in the form of static polygon meshes. That is, all the geometry is there, but there is no inherent hierarchy. These models may come from various sources, for instance 3D scans. Implement a system to easily give the model some sort of hierarchical structure. This may be through the user interface, or perhaps by fitting a model with a known hierarchical structure to the polygon mesh (see this for one way you might do this). If you choose to have a manual user interface, it should be very intuitive.

Through your implementation, you should be able to specify how the deformations at the joints should be done. On a model of a human, for instance, a bending elbow should result in the appropriate deformation of the mesh around the elbow (and, if you're really ambitious, some bulging in the biceps).

Interactive Control of Physically-Based Animation

Create a character whose physics can be controlled by moving a mouse or pressing keys on the keyboard. For example, moving the mouse up or down may make the knees bend or extend (so your character can jump), while moving it left or right could control the waist angle (so your character can lean forward or backward). Rather than have these controls change joint angles directly, as was done in the modeler project, the controls should create torques on the joints so that the character moves in very realistic ways. This monster bell requires components of the rigid body simulation extension above, but you will receive credit for both extensions as long as both are fully implemented. For this extension, you will create a hierarchical character composed of several rigid bodies. Next, devise a way for the user to interactively control your character.

This technique can produce some organic looking movements that are a lot of fun to control.  For example, you could create a little Luxo Jr. that hops around and kicks a ball.  Or, you could create a downhill skier that can jump over gaps and perform backflips (see the Ski Stunt example below).

SIGGRAPH paper - http://www.dgp.toronto.edu/~jflaszlo/papers/sig2000.pdf

Several movie examples - http://www.dgp.toronto.edu/~jflaszlo/interactive-control.html

Ski Stunt - a fun game that implements this monster bell - Information and Java applet demo - Complete Game (win32)

If you want, you can do it in 2D, like the examples shown in this paper (in this case you will get full monster bell credit, but half credit for the rigid body component).