Assigned: Wednesday, October 29th
Due: Monday, November 10th by 11:59pm
Artifact Due: Monday, November 17th by 11:59pm
To get going, you need to get the skeleton source code. This is distributed via SVN, which is all set up for you. In the labs, we will be using TortoiseSVN. In order to get the source code, follow the directions below:
Graphics Lab:
file://\\ntdfs/cs/unix/projects/instr//cse557/animator/YOUR_ASSIGNED_GROUP_NAME/
Remote Access:
svn+ssh://YOUR_CSE_NET_ID@attu.cs.washington.edu/projects/instr//cse557/animator/YOUR_ASSIGNED_GROUP_NAME/
If you plan to work from home, you will either need to use the FLTK installer or point Visual Studio to the correct include and library directories for the FLTK that came in your repository (by default it points to the local copy on each lab machine). Instructions for how to do this are provided. To open and build the project, double-click modeler.vcxproj.
Note: YOUR_ASSIGNED_GROUP_NAME is the combination of the CSE login ids of the group members. For example, if Prof. Brian Curless (cseid: curless) and his TA Aditya (cseid: aditya) are working on the project together, YOUR_ASSIGNED_GROUP_NAME is curless-aditya, and the repository will be at .../projects/instr//cse557/animator/curless-aditya/. If Brian is going to work solo, YOUR_ASSIGNED_GROUP_NAME is just curless.
Here is a summary of the requirements for this project:
Some of these requirements are explained in greater detail below.
In OpenGL, all scenes are made of primitives like points, lines, triangles, and quadrilaterals. In this project, you will implement a surface of revolution using glDrawElements with GL_TRIANGLES to make a 3D surface. For each mesh, you send a list of vertices (the points that make up the primitives) with normals and texture coordinates, and an array of indices specifying the triangles. A simple example can be found in the help slides.
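To make the vertex/normal/texture-coordinate/index layout concrete, here is a hedged sketch of generating a surface-of-revolution mesh on the CPU. Vertex and buildRevolution are illustrative names, not part of the skeleton; the arrays produced are the kind you would hand to glDrawElements with GL_TRIANGLES.

```cpp
// Illustrative sketch: sweep a 2D profile curve (radius, height pairs)
// around the Y axis; each profile point becomes a ring of vertices.
#include <cmath>
#include <vector>

struct Vertex { float x, y, z, nx, ny, nz, s, t; };

void buildRevolution(const std::vector<float>& profR,   // profile radii
                     const std::vector<float>& profY,   // profile heights
                     int divisions,
                     std::vector<Vertex>& verts,
                     std::vector<unsigned>& indices)
{
    const float twoPi = 6.2831853f;
    const int rings = (int)profR.size();
    for (int i = 0; i < rings; ++i) {
        // one extra column duplicates the seam so texture coords reach s = 1
        for (int j = 0; j <= divisions; ++j) {
            float a = twoPi * j / divisions;
            Vertex v;
            v.x = profR[i] * cosf(a);
            v.y = profY[i];
            v.z = profR[i] * sinf(a);
            // placeholder normal pointing radially outward; a correct
            // normal also accounts for the profile's tangent direction
            v.nx = cosf(a); v.ny = 0.0f; v.nz = sinf(a);
            v.s = (float)j / divisions;
            v.t = (float)i / (rings - 1);
            verts.push_back(v);
        }
    }
    const int stride = divisions + 1;
    for (int i = 0; i + 1 < rings; ++i) {
        for (int j = 0; j < divisions; ++j) {
            unsigned a = i * stride + j, b = a + 1;
            unsigned c = a + stride,    d = c + 1;
            // two triangles per quad of the ring grid
            indices.push_back(a); indices.push_back(c); indices.push_back(b);
            indices.push_back(b); indices.push_back(c); indices.push_back(d);
        }
    }
}
```

Each band between two rings contributes two triangles per quad, which is why the index count is six per quad.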
You can read more about OpenGL here.
Write code to create a 3D surface in the drawRevolution() method in modelerdraw.cpp. Your shape must:
Texture mapping allows you to "wrap" images around your model by mapping points on an image (called a texture) to vertices on your model. For each vertex, you indicate the texture coordinate that should line up with it using the command glTexCoord2f(s, t), where s is the X-coordinate and t is the Y-coordinate of the point on the texture. You call glTexCoord2f() right before the vertex you want to apply it to:
    texture.use();
    glBegin(GL_TRIANGLES);
    ...
    glTexCoord2f(0, 0);
    glVertex3f(0, 0, 0);
    ...
    glEnd();
    glBindTexture(GL_TEXTURE_2D, 0);
When you want to use a texture, you'll need to do the following:
    class MyModel : public Model {
        Texture2D checkers;
    public:
        // Constructor for MyModel
        MyModel() : checkers("checkers.png") { }

        // Your load method
        void load() {
            checkers.load();
        }

        // Your draw method
        void draw() {
            // all geometry after this point will have the checkers texture
            checkers.use();
        }
    };
If you want to get the texture's ID for more advanced stuff:
GLuint textureID = texture.getID();
If you want to use another texture, just call that texture object's use() method. To stop using textures, call:
glBindTexture(GL_TEXTURE_2D, 0);
Surface normals are perpendicular to the plane that's tangent to a surface at a given vertex. Surface normals are used for lighting calculations, because they help determine how light reflects off of a surface. Each normal is sent by calling glNormal3f(x, y, z) before the call to glVertex3f(), much like texture mapping.
In OpenGL, we often want to approximate smooth shapes like spheres and cylinders using only triangles and quadrilaterals. One way to make the lighting look smooth is to use the normals from the shape we're trying to approximate, rather than making them perpendicular to the polygons we draw. This means we calculate normals for each vertex (per-vertex normals) rather than for each face (per-face normals). Shaders allow us to get even smoother lighting by calculating the normals at each pixel. You can compare these methods below:
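The per-face computation can be sketched in a few lines: a face normal is the normalized cross product of two triangle edges, and a per-vertex normal averages the face normals of the triangles that share the vertex. Vec3 and the function names here are illustrative, not part of the skeleton.

```cpp
#include <cmath>

// Illustrative minimal vector type
struct Vec3 {
    float x, y, z;
    Vec3 operator-(const Vec3& o) const { return {x - o.x, y - o.y, z - o.z}; }
    Vec3 operator+(const Vec3& o) const { return {x + o.x, y + o.y, z + o.z}; }
};

Vec3 cross(const Vec3& a, const Vec3& b) {
    return {a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x};
}

Vec3 normalize(const Vec3& v) {
    float len = sqrtf(v.x * v.x + v.y * v.y + v.z * v.z);
    return {v.x / len, v.y / len, v.z / len};
}

// Face normal for triangle (a, b, c) with counter-clockwise winding.
Vec3 faceNormal(const Vec3& a, const Vec3& b, const Vec3& c) {
    return normalize(cross(b - a, c - a));
}

// A per-vertex normal averages the adjacent face normals, then renormalizes
// (shown here for two faces; the idea extends to any number).
Vec3 vertexNormal(const Vec3& n1, const Vec3& n2) {
    return normalize(n1 + n2);
}
```

Each result would be sent with glNormal3f(n.x, n.y, n.z) before the corresponding glVertex3f call.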
(Comparison images: per-face | per-vertex | per-pixel)
A hierarchical model is a way of grouping together shapes and attributes to form a complex object. Parts of the object are positioned relative to each other instead of the world. Think of each object as a tree, with nodes decreasing in complexity as you move from root to leaf. Each node can be treated as a single object, so that when you modify a node you end up modifying all its children together. Hierarchical modeling is a very common way to structure 3D scenes and objects, and is found in many other contexts. If hierarchical modeling is unclear to you, you are encouraged to look at the Hierarchical Modeling lecture notes.
You must come up with a character that meets these requirements:
    draw geometry
    push matrix
        translate up
        draw geometry
        push matrix
            translate left
            draw geometry
        pop matrix
        push matrix
            translate right
            draw geometry
        pop matrix
    pop matrix
    push matrix
        translate down
        draw geometry
    pop matrix
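The push/pop pattern above can be mimicked with a plain stack of transforms. This translation-only C++ sketch (MatrixStack and Offset are illustrative names, not OpenGL) shows where each "draw geometry" step ends up:

```cpp
#include <vector>

struct Offset { float x, y; };

// A toy stand-in for the OpenGL matrix stack, tracking only 2D translation.
struct MatrixStack {
    Offset cur{0, 0};
    std::vector<Offset> saved;
    void push() { saved.push_back(cur); }                 // like glPushMatrix()
    void pop()  { cur = saved.back(); saved.pop_back(); } // like glPopMatrix()
    void translate(float dx, float dy) { cur.x += dx; cur.y += dy; }
};
```

Each pop restores the parent's transform, so siblings (the "left" and "right" children) are positioned relative to the same parent space.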
You will want to make a model to draw your character, by extending the Model class in model.h:
    class MyModel : public Model {
    public:
        // Constructor for MyModel
        MyModel() { }

        // Draws the model onscreen
        void draw() {
            // ... call your drawing functions here ...
        }
    };
The easiest way to get started is to just modify the existing Scene class in sample.cpp. It extends Model, and in the main() method at the bottom, an instance of it is given to ModelerUserInterface to be displayed onscreen. Later on, you can create your own Model subclass and add it to the Scene class, much like the PointLight and DirectionalLight Models are added. See sample.cpp for details.
A property is an aspect of your model that the user can control. Modeler represents different types of properties with Property classes in model.h:
RangeProperty stores a single number that's within a continuous range. Modeler lets you set it with a slider.

RGBProperty stores a color's red, green, and blue components. Modeler lets you set it with a color picker.

BooleanProperty is either true or false. Modeler lets you set it with a checkbox, making it useful for showing or hiding parts of your model.

ChoiceProperty has an integer value from 0 to N - 1, where N is the number of choices it has. You can assign names to each choice, and Modeler will show the names in the control panel.
To add a property to your model class, do the following:
    class MyModel : public Model {
    // Make sure protected: comes before your properties, unless you want to
    // be able to access them from other classes (in that case, use public:).
    protected:
        // Add the property here:
        RangeProperty height;

    public:
        // The constructor goes after this colon:
        MyModel() :
            height("Height", 0.0f, 10.0f, 0.0f, 0.1f) {
            // Add the property to the model's GroupProperty, which is a group
            // of properties that's a property of every Model:
            properties.add(&height);
        }

        void draw() {
            // Use your new property's value for something:
            glScalef(1.0f, height.getValue(), 1.0f);
        }
    };
Now, when you open Modeler, a control for your property appears on the left panel.
For further details on each property class, look in model.h. To see how they are added to models, look at the skeleton code's sample model in sample.cpp.
WARNING: If you try to set the value of a property of your model by calling setValue(), setRed(), etc., the slider, color picker, or checkbox corresponding to that property will not be updated to show the new value. This is an oversight in the design of the new Modeler, but it will not prevent you from completing the project requirements. If you find yourself needing to set your model's properties, consult your TAs, because you might not be using the Model class the way it was intended to be used.
The graphics primitives you have (box, sphere, and cylinder) only accept parameters that define their dimensions. That means you can draw a box, sphere, or cylinder of any size, but it will always be at the origin with the same orientation. What if you want to move it somewhere else, or rotate it to a different orientation -- and you can't edit the primitive's drawing code?
In OpenGL, every vertex is multiplied by the modelview matrix, then by the projection matrix:
Since Modeler takes care of projection, you'll modify the modelview matrix. OpenGL has several functions that create matrices for common transformations, then multiply them by the modelview matrix:
If you called glTranslatef(1, 0, 0), then called glutSolidSphere(1, divisions, divisions), the sphere would be drawn centered around (1, 0, 0) because all of its vertices had 1 added to their X-coordinates. Each transformation has the effect of creating a "model space" with its own origin and axes. The sphere was drawn at the origin of a model space created by glTranslatef(1, 0, 0).
Once you have applied a transformation to the modelview matrix, it will be applied to EVERY point sent to the graphics card from then on! To undo the transformation, you need to restore the matrix's previous contents.
Make sure you match each glPushMatrix() with a corresponding glPopMatrix(), so that your modelview matrix is returned to its original state. This makes it easy to "nest" your transformations:
    // Here, modelview matrix is in world space
    glPushMatrix();       // save world space matrix
    // Still in world space
    glTranslatef(1, 0, 0);
    // Now in model space (everything translated by 1 along X).
    glutSolidSphere(1, divisions, divisions);

    // Here's a "nested" transformation. After the corresponding glPopMatrix(),
    // we'll be back in model space.
    glPushMatrix();       // save model space matrix
    // Still in model space
    glTranslatef(3, 0, 0);
    // Now in "model space 2"
    glutSolidSphere(1, divisions, divisions);
    glPopMatrix();        // copy model space matrix into modelview matrix
    // Back in model space

    glPopMatrix();        // copy world space matrix into modelview matrix
    // Back in world space
A shader is a program that controls the behavior of a piece of the graphics pipeline on your graphics card.
Shaders determine how the scene lighting affects the coloring of 3D surfaces. In OpenGL, there are two basic kinds of lights:
A shading model determines how the final color of a surface is calculated from a scene's light sources and the object's material. We have provided a shader that uses the Blinn-Phong shading model for scenes with directional lights. See lecture notes for details on the Blinn-Phong shading model.
Add support for the scene's point light, by editing the files shader.vert and shader.frag. You need to include quadratic distance attenuation. Please open these files and read the comments for details.
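Before writing the GLSL, it can help to prototype the lighting math on the CPU. This C++ sketch shows the Blinn-Phong terms for a point light with quadratic distance attenuation; all names and coefficient choices are illustrative assumptions, not the skeleton's API, and the real work must still go in shader.vert and shader.frag.

```cpp
#include <cmath>

// Illustrative 3-vector helpers
struct V3 { float x, y, z; };
float dot(V3 a, V3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
V3 normalize(V3 v) { float l = sqrtf(dot(v, v)); return {v.x/l, v.y/l, v.z/l}; }

// Blinn-Phong intensity for one point light. N (surface normal), L (to the
// light), and V (to the viewer) must be normalized; d is the distance to the
// light; a, b, c are constant, linear, and quadratic attenuation coefficients.
float blinnPhong(V3 N, V3 L, V3 V, float d,
                 float kd, float ks, float shininess,
                 float a, float b, float c)
{
    V3 H = normalize({L.x + V.x, L.y + V.y, L.z + V.z});  // half vector
    float diffuse  = fmaxf(dot(N, L), 0.0f);
    float specular = powf(fmaxf(dot(N, H), 0.0f), shininess);
    float atten    = 1.0f / (a + b * d + c * d * d);      // quadratic falloff
    return atten * (kd * diffuse + ks * specular);
}
```

The same structure translates almost directly into the fragment shader, with the attenuation applied to the point light's diffuse and specular contributions.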
We have provided the sample solution modeler_solution.exe in your repository, which lets you compare your shader to the (correct) sample solution shader. To choose which shader to show, click on "Scene" and select the radio button for the shader you want to use. The sample solution also contains both a point light and a directional light for you to manipulate; click "Point Light" or "Directional Light" in the list in the top left corner to edit a light's properties or move it around. The difference feature in the sample solution will compare your shader to the solution shader for you. To use it, select "Difference" under "Shader To Use." Any differences between your shader and the solution shader will be highlighted in the display window. Differences are highlighted by color channel: if the red channel differs, you'll see a bright red pixel; if the green channel differs, a bright green pixel; and so on. If there are no differences between the two shaders, the entire image should appear black. A few thin highlights along the edges are fine, since they are probably due to differences in rounding, and you will not lose points for them in grading.
Ashikhmin Anisotropic Shader - This shader uses the Ashikhmin-Shirley lighting model to produce an anisotropic reflection effect. To see the effect, try setting either the Anisotropic X or Anisotropic Y slider close to 0 (0.1), and scale up the other one to see highlighting in different tangent vector directions, then move the camera around the object.
Ward Anisotropic Shader - This shader uses the Ward lighting model to produce an anisotropic reflection effect. Again, to see the effect, try setting either the Anisotropic X or Anisotropic Y slider close to 0 (0.1), and scale up the other one to see highlighting in different tangent vector directions, then move the camera around the object. Additionally, manipulating the "shininess" factor will affect this shader. Manipulate the In(Specular Exponent) slider to see this change.
Schlick Shader - This shader uses the Schlick approximation to the Fresnel term of the specular reflection. The Schlick N1 and Schlick N2 sliders control factors that simulate indices of refraction, with 1 being the index of refraction of a vacuum, and very close to that of air. To see a change in specular highlighting, try setting one of the sliders near 1 and scaling up the other. When the sliders are roughly equal to each other, there will not be much of a highlight (i.e. light doesn't refract much when passing from air to air).
Diffraction Shader - This shader implements a diffraction effect. To see it in action, select a Diffraction Color (convenience of labeling - it is a color that will permeate the object similar to diffuse or specular), then spin the camera around the object to see the diffraction effect.
Create one or more additional shaders worth at least 1.5 bells (or 3 whistles) in total. Any additional bells or whistles will be extra credit. You must keep your point light Blinn-Phong shader from Part 4 separate, so we can grade it separately, and you must exhibit this shader in your Modeler binary. Consult the OpenGL orange book for some excellent tips on shaders. Ask your TAs if you would like to implement a shader that isn't listed below. Credit for any shader may be adjusted depending on the quality of the implementation, but any reasonable effort should at least earn you the required whistle.
Spot Light Shader - Create a shader that supports a spot light source, and add a third light source to your Modeler. We should be able to adjust the spot light parameters via sliders.
Cartoon Shader - Create a shader that produces a cartoon effect by drawing a limited set of colors and a simple darkening of silhouettes for curved objects, based on the normal and viewing direction at each pixel. This per-pixel silhouette-darkening approach will work well in some cases around curved surfaces, but not all. For more extra credit, get it to work well for all shapes and with a fixed silhouette thickness.
Schlick Shader - Create a shader, and sliders to control it, that uses the Schlick approximation for refraction.
Blinn-Phong and Mipmapped Texture Mapping Shader - Extend your Blinn-Phong shader to support texture mapping and (optionally) controllable mipmap levels. If mipmapping, you should have some UI components that clearly control the mipmap level. Remember to keep a copy of your original Blinn-Phong shader for grading. Note: texture mapping earns a bell of credit; an additional whistle is awarded for mipmapping.
Tessellated Procedural Shader - Make a shader that produces an interesting, repeating pattern, such as a brick pattern, without using a texture.
Bump Mapping Shader - This shader uses a texture to perturb the surface normals of a surface to create the illusion of tiny bumps, without introducing additional geometry.
Diffraction Shader - Create a shader that produces a diffraction effect when you move around the object.
Anisotropic Shader - Create a shader that produces anisotropic specular highlighting, creating a shiny metal appearance. Additionally, add sliders to control the magnitude in 2 perpendicular directions.
Environment Mapped Shader - To make an object appear really shiny (i.e. metallic), it needs to reflect the objects around it. One way to do this is to take a panoramic picture of the surroundings, store it in a texture, and use that texture to determine what should be reflected. For simplicity, we recommend obtaining an existing environment map from somewhere (perhaps making it yourself with a 3D raytracer).
Cloud / Noise Shader - Create a shader that uses noise functions (like Perlin noise) to generate clouds. You may not use textures for this shader. Credit depends on the realism of the clouds.
You can use the sample solution Modeler to develop some of these shaders, but others require texture maps to be provided -- which the sample solution may not provide to your shader.
Shader files are loaded, compiled, and linked by ShaderProgram objects. If you want to add a shader:
The ShaderProgram constructor takes three filenames:
class MyModel : public Model { public: Texture2D texture; ShaderProgram shader; // Your model constructor MyModel() : // after this colon go the constructors for your shaders, // textures, and properties: texture("checkers.png"), shader("shader.vert", "shader.frag") {} };
If you have an error in your shader code, YOU DON'T HAVE TO RESTART MODELER! Instead, fix your shader, then go to File->Reload Textures And Shaders.
Now, if you made a ShaderProgram field called shader, you could use that shader in your draw() method like this:
shader.use();
This shader will be applied to all subsequent geometry.
If you want to use another shader, just call that shader object's use() method. To stop using shaders, call:
glUseProgram(0);
If you want to get the shader program's ID for more advanced stuff:
GLuint shaderProgramID = shader.getID();
You will need to implement each of the following curve types:
Bezier Curves
When we say "cubic Beziers splined together with C0 continuity," we mean that you'll need at least four control points to make a single Bezier curve. Adjacent Bezier curves share control points, so the last control point of one Bezier curve is the first control point of the next. In this way you can make two complete Bezier curves with only 7 control points. Note: in the lecture slides, you were shown both an adaptive recursive algorithm for creating Bezier curves and a straightforward method that simply samples at a constant rate. The adaptive Bezier curve generation is not required (but is worth a bell of extra credit); feel free to sample the curve at a constant rate to fulfill the project requirements.
General Curves
It is possible to make parametric curves that "double back" on themselves (x is not monotonically increasing as a function of t). It must be possible to interpret the curves that your solution produces as a function of time, so you'll have to think about and solve this case.
The GraphWidget object owns a bunch of Curve objects. The Curve class is used to represent the time-varying splines associated with your model parameters. You don't need to worry about most of the existing code, which is used to handle the user interface. However, it is important that you understand the curve evaluation model. Each curve is represented by a vector of evaluated points.
    mutable std::vector<Point> m_ptvCtrlPts;
    mutable std::vector<Point> m_ptvEvaluatedCurvePts;
The user of your program can manipulate the positions of the control points using the Graph Widget interface. Your code will compute the value of the curve at intervals in time, determining the shape of the curve. Given a set of control points, the system figures out what the evaluated points are.
This conversion process is handled by the CurveEvaluator member variable of each curve.
const CurveEvaluator* m_pceEvaluator;
In the skeleton, only the LinearCurveEvaluator has been implemented. Consequently, the curve drawn is composed of line segments directly connecting each control point. You should use the LinearCurveEvaluator as a model to implement the other required curve evaluators: Bezier, B-Spline, and Catmull-Rom. C2-Interpolating curves can be added for extra credit.
For each curve type, you must write a new class that inherits from CurveEvaluator. Inside the class, you should implement the evaluateCurve function. This function takes the following parameters:
ptvCtrlPts
--a collection of control points that you specify in the curve editor
ptvEvaluatedCurvePts
--a collection of evaluated curve points that you return from the function calculated using the curve type's formulas
fAniLength
--the maximum time that a curve is defined
bWrap
--a flag indicating whether or not the curve should be wrapped (wrapping can be implemented for extra credit)
To add a new curve type, you should look in the GraphWidget constructor and change the following lines to use your new evaluator classes.
    m_ppceCurveEvaluators[CURVE_TYPE_BSPLINE] = new LinearCurveEvaluator();
    m_ppceCurveEvaluators[CURVE_TYPE_BEZIER] = new LinearCurveEvaluator();
    m_ppceCurveEvaluators[CURVE_TYPE_CATMULLROM] = new LinearCurveEvaluator();
For Bezier curves (and the splines based on them), it is sufficient to sample the curve at fixed intervals of time. The adaptive de Casteljau subdivision algorithm presented in class may be implemented for an extra bell.
Catmull-Rom and B-spline curves should be endpoint interpolating. This can be done by doubling the endpoints for Catmull-Rom and tripling them for B-spline curves.
You do not have to sort the control points or the evaluated curve points. This has been done for you. Note, however, that for an interpolating curve (Catmull-Rom), the fact that the control points are given to you sorted by x does not ensure that the curve itself will also monotonically increase in x. You should recognize and handle this case appropriately. One solution is to return only the evaluated points that are increasing monotonically in x.
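One hedged sketch of the suggested fix, keeping only evaluated points whose x strictly increases (the type and function names are illustrative):

```cpp
#include <vector>

struct P2 { float x, y; };

// Drop any evaluated point whose x does not advance past the previous kept
// point, so the result can be read as a function of time.
std::vector<P2> makeMonotonic(const std::vector<P2>& pts)
{
    std::vector<P2> out;
    for (const P2& p : pts)
        if (out.empty() || p.x > out.back().x)
            out.push_back(p);
    return out;
}
```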
Also, be aware that the evaluation function will linearly interpolate between the evaluated points to ensure a continuous curve on the screen. This is why you don't have to generate infinitely many evaluated points.
Two Distinct Forces
Create at least two distinct types of forces that act on your particle system. The three most obvious choices are gravity (f = mg), viscous drag (f = -k_d * v), and Hooke's spring law. Other interesting possibilities include electromagnetic force, simulation of flocking behavior, and buoyant force. If the forces you choose are complicated or novel (or listed in the Bells and Whistles), you may earn extra credit while simultaneously fulfilling this requirement.
Collision Detection & Response
Perform collision detection with your particles and the sphere in the default scene (or a sphere you create) and at least one additional primitive of your choice in your scene. A natural choice for the additional chosen primitive is the ground plane of your scene. Your particles should bounce off of the sphere and that primitive, and you should provide a control for the restitution constant that determines how much the normal component of the reflected velocity is attenuated. Note that the sphere should be able to be moved and the particle collisions should collide with the sphere's current position, not just its original position. The sphere collision should be "natural" - i.e. the particles colliding with the sphere should reflect off the sphere dependent on the sphere's normal at the point of collision and the particle's incoming velocity direction.
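A minimal sketch of the collision response described above, assuming particles are points (all names are illustrative, not the skeleton's): penetrating particles are pushed back to the sphere's surface and the normal component of their velocity is reflected, scaled by the restitution constant.

```cpp
#include <cmath>

struct V3 { float x, y, z; };
float dot(V3 a, V3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// restitution = 1 keeps the full normal speed; 0 removes it entirely.
void collideSphere(V3& pos, V3& vel, V3 center, float radius, float restitution)
{
    V3 d{pos.x - center.x, pos.y - center.y, pos.z - center.z};
    float dist = sqrtf(dot(d, d));
    if (dist >= radius || dist == 0.0f) return;              // no collision
    V3 n{d.x / dist, d.y / dist, d.z / dist};                // sphere normal
    // push the particle back onto the surface
    pos = {center.x + n.x * radius, center.y + n.y * radius, center.z + n.z * radius};
    float vn = dot(vel, n);
    if (vn < 0.0f) {                                         // moving inward
        float k = (1.0f + restitution) * vn;
        vel = {vel.x - k * n.x, vel.y - k * n.y, vel.z - k * n.z};
    }
}
```

Because the test uses the sphere's current center, moving the sphere automatically moves the collision, as the requirement asks.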
To control the restitution, you can add a RangeProperty field to your ParticleSystem class, just like you did for your model class in Modeler. The difference is that you will add the field to your ParticleSystem class, then add it to your model's property list in your model's constructor. This way, you can easily access your field both from inside your ParticleSystem and inside your model!
Add
    #include "properties.h"
to particleSystem.h, before class ParticleSystem. This lets you use sliders in ParticleSystem.
// in particleSystem.h
#include "properties.h" // you added this, right?
class ParticleSystem {
public:
RangeProperty restitution;
};
// in particleSystem.cpp
ParticleSystem::ParticleSystem() : restitution("Restitution", 0.0f, 2.0f, 1.0f, 0.1f) {
// ... constructor code
}
// in sample.cpp (or whatever file your model is in):
class MyModel : public Model {
ParticleSystem ps;
// ... other stuff ....
MyModel() : // ... constructor calls for your sliders, textures, shader, etc ...
{
// ... calls to properties.add() for each slider, checkbox, etc. ...
properties.add(&ps.restitution);
}
};
You can then read the property's current value anywhere in your code by calling restitution.getValue().
The skeleton code has a very high-level framework in place for running particle simulations that is based on Witkin's Particle System Dynamics. In this model, there are three major components:
You are responsible for coming up with a representation for particles and forces. The skeleton provides a very basic outline of a simulation engine, encapsulated by the ParticleSystem class. Currently, the header file (ParticleSystem.h) specifies an interface that must be supported in order for your particle system to interact correctly with the animator UI. Alternately, you can try to figure out how the UI works yourself by searching within the project files for all calls to the particle system's functions, and then re-organizing the code. This second option may provide you with more flexibility in doing some very ambitious particle systems with extra UI support. However, the framework seems general enough to support a wide range of particle systems. There is detailed documentation in the header file itself that indicates what each function you are required to write should do. Note that the ParticleSystem declaration is by no means complete. As mentioned above, you will have to figure out how you want to store and organize particles and forces, and as a result, you will need to add member variables and functions.
One of the functions you are required to implement is called computeForcesAndUpdateParticles:
virtual void computeForcesAndUpdateParticles(float t);
This function represents the meat of the simulation solver. Here you will compute the forces acting on each particle and update their positions and velocities based on these forces using Euler's method. As mentioned above, you are responsible for modeling particles and forces in some way that allows you to perform this update step at each frame.
One requirement of your particle system is to attach it to a node of your model other than the root. This requires that you think carefully about how to represent the positions of your particles.
Suppose you want to attach a particle shower to your model's hand. When you apply the force of gravity to these particles, the direction of the force will always be along the negative Y axis of the world. If you mistakenly apply gravity along negative Y of the hand's coordinate space, you'll see some funky gravity that depends on the orientation of the hand (bad!). To solve this problem, we recommend that you attach a particle emitter to the model's hand, but store all the particles positions as coordinates in world space. This means that you'll need to calculate the world coordinates of the particle emitter every time a particle is spawned.
Please read the following pseudocode, which contains an in-depth discussion of using particles in your hierarchy.
The function getModelViewMatrix is used in the file above. We are also providing the C++ implementation for it:
    Mat4f getModelViewMatrix()
    {
        GLfloat m[16];
        glGetFloatv(GL_MODELVIEW_MATRIX, m);
        Mat4f matMV(m[0],  m[1],  m[2],  m[3],
                    m[4],  m[5],  m[6],  m[7],
                    m[8],  m[9],  m[10], m[11],
                    m[12], m[13], m[14], m[15]);
        return matMV.transpose(); // because the matrix GL returns is column major
    }
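Given such a row-major matrix, the emitter's world position can be recovered by transforming its local origin (0, 0, 0, 1). A hedged sketch, assuming the matrix already maps emitter space to world space (the function name is illustrative):

```cpp
// Transform point (x, y, z, 1) by a row-major 4x4 matrix stored as 16 floats;
// each output component is the dot product with one row of the matrix.
void xformPoint(const float m[16], float x, float y, float z, float out[3])
{
    out[0] = m[0]*x + m[1]*y + m[2]*z  + m[3];
    out[1] = m[4]*x + m[5]*y + m[6]*z  + m[7];
    out[2] = m[8]*x + m[9]*y + m[10]*z + m[11];
}
```

For the origin, this reduces to reading the translation entries (m[3], m[7], m[11]), which is exactly where a spawned particle's world-space position comes from.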
Animator obtains your model's particle system by calling the getParticleSystem() method of your Model subclass. If you don't override it, this method returns NULL (and Animator may crash as a result). So, to add your particle system, do the following:
    protected:
        ParticleSystem ps;

    public:
        ParticleSystem* getParticleSystem() { return &ps; }
After selecting a model parameter in the tree on the left, the parameter's corresponding animation curve is displayed in the graph. Each spline is evaluated as a simple piecewise linear curve that linearly interpolates between control points. You can manipulate the curve as follows:
    Command              Action
    LEFT MOUSE           Clicking anywhere in the graph creates a control point for the selected curve. Control points can be moved by clicking on them and dragging.
    SHIFT LEFT MOUSE     Removes a control point.
    ALT LEFT MOUSE       Rubber-band selection of control points.
    RIGHT MOUSE          Zooms in the X and Y dimensions.
    CTRL RIGHT MOUSE     Zooms into the rubber-banded region.
    SHIFT RIGHT MOUSE    Pans the viewed region.
Note that each of the displayed curves has a different scale. Based on the maximum and minimum values for each parameter that you specified in your model file, the curve is drawn to "fit" into the graph. You'll also notice that the other curve types in the drop-down menu are not working. One part of your requirements (outlined below) is to implement these other curves.
At the bottom of the window is a simple set of VCR-style controls and a time slider that let you play, pause, and seek in your animation. The Simulate checkbox relates to the particle system, which is discussed below.
Camera motions can be edited in two ways:
You will eventually use your program to produce an animated artifact for this project (after the project due date; see the top of the page for the artifact due date). Under the File menu of the program, there is a Save Movie Frames option that will let you specify a base filename for a set of movie frames. Each frame is saved as a png or jpg, with your base filename plus some digits that indicate the frame number. Use a program like Adobe Premiere (installed in the labs) to compress the frames into a video file. (See Quick Links for more detail.)
Each group should turn in their own artifact. We may give extra credit to those that are exceptionally clever or aesthetically pleasing. Try to use the ideas discussed in the John Lasseter article. These include anticipation, follow-through, squash and stretch, and secondary motion.
Finally, plan for your animation to be 30 seconds long (60 seconds is the absolute maximum). You will find this is a very small amount of time, so consider this when planning your animation. We reserve the right to penalize artifacts that go over the time limit and/or clip the video for the purposes of voting. Refer to this guide for creating the final submission (an H.264 MP4 file). You can play with settings to produce different encodings for your own purposes, but the final submission must be an H.264 MP4.
Then, you must turn in your completed artifact (as a video and a snapshot) by copying it into your turnin/artifact directory, and uploading it to the Catalyst.
See due dates/times at the top of this page. Do not be late!
Come up with another whistle and implement it. A whistle is something that extends the use of one of the things you are already doing. It is part of the basic model construction, but extended or cloned and modified in an interesting way. Ask your TAs to make sure this whistle is valid.
Add some widgets that control adjustable parameters to your model so that you can create individual-looking instances of your character. Try to make these actually different individuals, not just "the red guy" and "the blue guy."
Enhance the required spline options. Some of these will require alterations to the user interface, which involves learning Fluid and the UI framework. If you want to access mouse events in the graph window, look at the handle function in the GraphWidget class. Also, look at the Curve class to see what control point manipulation functions are already provided. These could be helpful, and will likely give you a better understanding of how to modify or extend your program's behavior. A maximum of 3 whistles will be given out in this category.
Let the user control the tension of the Catmull-Rom spline.
Implement one of the standard subdivision curves (e.g., Lane-Riesenfeld or Dyn-Levin-Gregory).
Add options to the user interface to enforce C1 or C2 continuity between adjacent Bezier curve segments automatically. (It should also be possible to override this feature in cases where you don't want this type of continuity.)
Add the ability to add a new control point to any curve type without changing the curve at all.
The linear curve code provided in the skeleton can be "wrapped," which means that the curve has C0 continuity between the end of the animation and the beginning. As a result, looping the animation does not result in abrupt jumps. You will be given a whistle for each (nonlinear) curve that you wrap.
Render a mirror in your scene. As you may already know, OpenGL has no built-in reflection capabilities. You can simulate a mirror with the following steps: 1) Reflect the world about the mirror's plane, 2) Draw the reflected world, 3) Pop the reflection about the mirror plane from your matrix stack, 4) Draw your world as normal. After completing these steps, you may discover that some of the reflected geometry appears outside the surface of the mirror. For an extra whistle, you can clip the reflected image to the mirror's surface; to do this, you need to use something called the stencil buffer. The stencil buffer is similar to a Z buffer and is used to restrict drawing to certain portions of the screen. See Scott Schaefer's site for more information. In addition, the NeHe game development site has a detailed tutorial.
Modify your particle system so that the particles' velocities get initialized with the velocity of the hierarchy component from which they are emitted. The particles may still have their own inherent initial velocity. For example, if your model is a helicopter with a cannon launching packages out of it, each package's velocity will need to be initialized to the sum of the helicopter's velocity and the velocity imparted by the cannon.
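One practical way to get the emitter's velocity is to track its world-space position each frame and take a finite difference. A minimal sketch under that assumption (the `Vec3` struct and both function names are illustrative, not from the skeleton):

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { double x, y, z; };

// Estimate the emitter's world-space velocity by finite differences of its
// world position between the previous and current frame.
Vec3 emitterVelocity(Vec3 prevPos, Vec3 curPos, double dt) {
    return { (curPos.x - prevPos.x) / dt,
             (curPos.y - prevPos.y) / dt,
             (curPos.z - prevPos.z) / dt };
}

// A new particle starts with the emitter's velocity plus its own launch velocity.
Vec3 initialParticleVelocity(Vec3 emitterVel, Vec3 launchVel) {
    return { emitterVel.x + launchVel.x,
             emitterVel.y + launchVel.y,
             emitterVel.z + launchVel.z };
}
```

The emitter's world position can be captured by multiplying the origin through the current modelview stack at the point in the hierarchy where the emitter is drawn.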
Particles rendered as points or spheres may not look that realistic. You can achieve more spectacular effects with a simple technique called billboarding. A billboarded quad (aka "sprite") is a textured square that always faces the camera. See the sprites demo. For full credit, you should load a texture with transparency (sample textures), and turn on alpha blending (see this tutorial for more information). Hint: When rotating your particles to face the camera, it's helpful to know the camera's up and right vectors in world-coordinates.
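Given the camera's world-space right and up vectors (obtainable from the rotation part of the modelview matrix), the billboarded quad's corners are just offsets from the particle center. A minimal sketch; the `Vec3` struct and function names are illustrative:

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { double x, y, z; };

static Vec3 add(Vec3 a, Vec3 b) { return { a.x + b.x, a.y + b.y, a.z + b.z }; }
static Vec3 scale(Vec3 v, double s) { return { s * v.x, s * v.y, s * v.z }; }

// Corners of a camera-facing quad: center +/- halfSize along the camera's
// world-space right and up vectors, in counter-clockwise order.
void billboardCorners(Vec3 center, Vec3 right, Vec3 up,
                      double halfSize, Vec3 out[4]) {
    Vec3 r = scale(right, halfSize), u = scale(up, halfSize);
    out[0] = add(center, add(scale(r, -1), scale(u, -1)));  // bottom-left
    out[1] = add(center, add(r, scale(u, -1)));             // bottom-right
    out[2] = add(center, add(r, u));                        // top-right
    out[3] = add(center, add(scale(r, -1), u));             // top-left
}
```

With OpenGL's column-major modelview matrix m, the camera's world-space right and up are typically (m[0], m[4], m[8]) and (m[1], m[5], m[9]); send the four corners as a textured quad.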
Use the billboarded quads you implemented above to render the following effects. Each of these effects is worth one whistle provided you have put in a whistle's worth of effort making the effect look good.
Fire (example). (You'll probably want to use additive blending for your particles: glBlendFunc(GL_SRC_ALPHA, GL_ONE);)
Use environment mapping to simulate a reflective material. This technique is particularly effective at faking a metallic material or reflective, rippling water surface. Note that OpenGL provides some very useful functions for generating texture coordinates for spherical environment mapping. Part of the challenge of this whistle is to find these functions and understand how they work.
Add baking to your particle system. For simulations that are expensive to process, some systems allow you to cache the results of a simulation. This is called "baking." After simulating once, the cached simulation can then be played back without having to recompute the particle properties at each time step. See this page for more information on how to implement particle baking.
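The core of baking is just a cache of per-frame particle snapshots: simulate once, store each frame's state, and on playback return the stored snapshot instead of re-simulating. A minimal sketch (the `Particle` struct and `ParticleBaker` class are illustrative names, not from the skeleton):

```cpp
#include <cassert>
#include <map>
#include <vector>

struct Particle { double x, y, z; };

// Cache of simulated particle states keyed by frame index. Once a frame is
// baked, playback returns the cached snapshot instead of re-simulating.
class ParticleBaker {
    std::map<int, std::vector<Particle>> baked;
public:
    bool isBaked(int frame) const { return baked.count(frame) > 0; }
    void bake(int frame, const std::vector<Particle>& state) { baked[frame] = state; }
    const std::vector<Particle>* playback(int frame) const {
        auto it = baked.find(frame);
        return it == baked.end() ? nullptr : &it->second;
    }
    void clear() { baked.clear(); }  // invalidate when forces or parameters change
};
```

In the draw loop, check `isBaked(frame)` first; only step the simulation (and then `bake`) on a cache miss, and call `clear()` whenever a simulation parameter changes so stale frames are not replayed.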
Implement a motion blur effect (example). The easy way to implement motion blur is using an accumulation buffer - however, consumer grade graphics cards do not implement an accumulation buffer. You'll need to simulate an accumulation buffer by rendering individual frames to a texture, then combining those textures. See this tutorial for an example of rendering to a texture.
Implement the "Hitchcock Effect" described in class, where the camera zooms in on an object, whilst at the same time pulling away from it (the effect can also be reversed--zoom out and pull in). The transformation should fix one plane in the scene--show this plane. Make sure that the effect is dramatic--adding an interesting background will help, otherwise it can be really difficult to tell if it's being done correctly.
Euler's method is a very simple technique for solving the system of differential equations that defines particle motion. However, more powerful methods can be used to get better, more accurate results. Implement your simulation engine using a higher-order method such as the Runge-Kutta technique. Numerical Recipes (Sections 16.0 and 16.1) has a description of Runge-Kutta and pseudo-code.
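In a particle system you would apply the step component-wise to the position/velocity state vector; a minimal scalar sketch of the classic fourth-order step (the name `rk4Step` is illustrative):

```cpp
#include <cassert>
#include <cmath>
#include <functional>

// One fourth-order Runge-Kutta step for the scalar ODE dy/dt = f(t, y).
// Per-step error is O(h^5), versus O(h^2) for forward Euler.
double rk4Step(const std::function<double(double, double)>& f,
               double t, double y, double h) {
    double k1 = f(t, y);                      // slope at the start
    double k2 = f(t + h / 2, y + h / 2 * k1); // slope at the midpoint, using k1
    double k3 = f(t + h / 2, y + h / 2 * k2); // midpoint again, using k2
    double k4 = f(t + h, y + h * k3);         // slope at the end
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4);
}
```

For dy/dt = y starting at y(0) = 1, one hundred steps of h = 0.01 reproduce e to well under 1e-6, whereas Euler with the same step size is off in the third decimal place.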
Build a complex shape as a set of polygonal faces, using triangles (either the provided primitive or straight OpenGL triangles) to render it. Examples of things that don't count as complex: a pentagon, a square, a circle. Examples of what does count: dodecahedron, 2D function plot (z = sin(x^2 + y)), etc.
Implement adaptive Bezier curve generation: Use a recursive, divide-and-conquer, de Casteljau algorithm to produce Bézier curves, rather than just sampling them at some arbitrary interval. You are required to provide some way to change the flatness parameter and maximum recursion depth, with a keystroke or mouse click. In addition, you should have some way of showing (a printf statement is fine) the number of points generated for a curve to demonstrate your adaptive algorithm at work.
To get an extra whistle, provide visual controls in the UI (i.e. sliders) to modify the flatness parameter and maximum recursion depth, and also display the number of points generated for each curve in the UI.
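The recursive structure looks like the sketch below: test whether the inner control points lie within the flatness tolerance of the chord; if not, split the curve at t = 0.5 with de Casteljau midpoints and recurse on both halves. The `Pt` struct and function names are illustrative, not from the skeleton:

```cpp
#include <cassert>
#include <cmath>
#include <vector>

struct Pt { double x, y; };

static Pt mid(Pt a, Pt b) { return { (a.x + b.x) / 2, (a.y + b.y) / 2 }; }

// Distance from point p to segment ab (used as the flatness test).
static double ptSegDist(Pt p, Pt a, Pt b) {
    double vx = b.x - a.x, vy = b.y - a.y;
    double wx = p.x - a.x, wy = p.y - a.y;
    double len2 = vx * vx + vy * vy;
    double t = len2 > 0 ? (wx * vx + wy * vy) / len2 : 0;
    if (t < 0) t = 0;
    if (t > 1) t = 1;
    double dx = p.x - (a.x + t * vx), dy = p.y - (a.y + t * vy);
    return std::sqrt(dx * dx + dy * dy);
}

// Adaptive sampling of the cubic Bezier (p0,p1,p2,p3): subdivide until the
// inner control points lie within `flatness` of the chord or `depth` runs out.
// Appends samples excluding the final endpoint (caller pushes p3 once).
void adaptiveBezier(Pt p0, Pt p1, Pt p2, Pt p3,
                    double flatness, int depth, std::vector<Pt>& out) {
    if (depth <= 0 ||
        (ptSegDist(p1, p0, p3) < flatness && ptSegDist(p2, p0, p3) < flatness)) {
        out.push_back(p0);
        return;
    }
    // de Casteljau split at t = 0.5
    Pt p01 = mid(p0, p1), p12 = mid(p1, p2), p23 = mid(p2, p3);
    Pt p012 = mid(p01, p12), p123 = mid(p12, p23);
    Pt p0123 = mid(p012, p123);
    adaptiveBezier(p0, p01, p012, p0123, flatness, depth - 1, out);
    adaptiveBezier(p0123, p123, p23, p3, flatness, depth - 1, out);
}
```

Printing `out.size()` after each edit is an easy way to show the adaptivity: a nearly straight segment should produce a handful of points while a tight loop produces many.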
Extend the particle system to handle springs. For example, a pony tail can be simulated with a simple spring system where one spring endpoint is attached to the character's head, while the others are floating in space. In the case of springs, the force acting on the particle is calculated at every step, and it depends on the distance between the two endpoints. For one more bell, implement spring-based cloth. For 2 more bells, implement spring-based fur. The fur must respond to collisions with other geometry and interact with at least two forces like wind and gravity.
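The per-step spring force is the standard damped Hooke's law: stiffness times the stretch beyond rest length, plus damping proportional to the relative velocity projected onto the spring axis. A minimal sketch (the `Vec3` struct and `springForce` name are illustrative):

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { double x, y, z; };

// Damped spring force acting on particle a, attached by a spring to particle b.
// ks = stiffness, kd = damping, rest = rest length.
Vec3 springForce(Vec3 pa, Vec3 pb, Vec3 va, Vec3 vb,
                 double ks, double kd, double rest) {
    Vec3 d = { pb.x - pa.x, pb.y - pa.y, pb.z - pa.z };
    double len = std::sqrt(d.x * d.x + d.y * d.y + d.z * d.z);
    if (len < 1e-12) return {0, 0, 0};                   // coincident endpoints
    Vec3 dir = { d.x / len, d.y / len, d.z / len };      // unit axis a -> b
    Vec3 dv = { vb.x - va.x, vb.y - va.y, vb.z - va.z };
    double damp = kd * (dv.x * dir.x + dv.y * dir.y + dv.z * dir.z);
    double mag = ks * (len - rest) + damp;               // positive = pull toward b
    return { mag * dir.x, mag * dir.y, mag * dir.z };
}
```

Particle b receives the equal-and-opposite force. Note that stiff springs blow up under plain Euler integration, which is another reason to pair this extension with a higher-order solver.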
Allow for particles to bounce off each other by detecting collisions when updating their positions and velocities. Although it is difficult to make this very robust, your system should behave reasonably.
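For equal-mass spherical particles, a simple elastic response swaps the components of velocity along the collision normal and leaves the tangential components alone. A minimal sketch of that response (struct and function names are illustrative):

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { double x, y, z; };

// Elastic collision response for two equal-mass spherical particles:
// if they overlap and are approaching, exchange the normal components
// of velocity. Returns true when a collision response was applied.
bool collide(Vec3 pa, Vec3 pb, Vec3& va, Vec3& vb, double radius) {
    Vec3 d = { pb.x - pa.x, pb.y - pa.y, pb.z - pa.z };
    double dist2 = d.x * d.x + d.y * d.y + d.z * d.z;
    double minDist = 2 * radius;
    if (dist2 >= minDist * minDist || dist2 < 1e-12) return false;
    double dist = std::sqrt(dist2);
    Vec3 n = { d.x / dist, d.y / dist, d.z / dist };     // collision normal a -> b
    double vn = (va.x - vb.x) * n.x + (va.y - vb.y) * n.y + (va.z - vb.z) * n.z;
    if (vn <= 0) return false;                           // already separating
    va.x -= vn * n.x; va.y -= vn * n.y; va.z -= vn * n.z;
    vb.x += vn * n.x; vb.y += vn * n.y; vb.z += vn * n.z;
    return true;
}
```

The "already separating" test prevents particles that interpenetrated on a previous step from sticking together; for robustness you may also want to push overlapping particles apart along the normal.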
Implement a "general" subdivision curve, so the user can specify an arbitrary averaging mask. You will receive still more credit if you can generate, display, and apply the evaluation masks as well. There's a site at Caltech with a few interesting applets that may be useful.
A display list is a "recording" of OpenGL calls that gets stored on the graphics card. Thus, display lists allow you to render complicated polygons much more quickly because you only have to tell the graphics card to replay the list of commands instead of sending them across the (slow) computer bus. A display list tutorial can be found here.
Implement a smooth curve functionality. Examples of smooth curves are here. These curves are a great way to lead into swept surfaces (see below). Functional curves will need to be demonstrated in some way. One great example would be to draw some polynomial across a curve that you define. Students who implement swept surfaces will not be given a bell for smooth curves. That bell will be included in the swept surfaces bell. Smooth curves will be an important part of the animator project, so this will give you a leg up on that.
Implement one or more non-linear transformations applied to a triangle mesh. This entails creating at least one function that is applied across a mesh with specified parameters. For example, you could generate a triangulated sphere and apply a function to a sphere at a specified point that modifies the mesh based on the distance of each point from a given axis or origin. Credit varies depending on the complexity of the transformation(s) and/or whether you provide user controls (e.g., sliders) to modify parameters.
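A classic example of such a non-linear transformation is a twist: rotate each vertex about an axis by an angle proportional to its height, so the mesh corkscrews as y increases. A minimal per-vertex sketch (the function name and parameterization are illustrative):

```cpp
#include <cassert>
#include <cmath>

// Non-linear "twist" deformation: rotate a vertex about the y-axis by an
// angle proportional to its height v[1]. Applied to every vertex of a mesh
// (and its normals, recomputed afterwards), this corkscrews the shape.
void twistY(double v[3], double anglePerUnitHeight) {
    double a = anglePerUnitHeight * v[1];
    double c = std::cos(a), s = std::sin(a);
    double x =  c * v[0] + s * v[2];
    double z = -s * v[0] + c * v[2];
    v[0] = x;
    v[2] = z;
}
```

Exposing `anglePerUnitHeight` through a slider gives exactly the kind of user-controllable parameter this item asks for; normals must be recomputed after deformation since the rotation angle varies across the mesh.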
Heightfields are a great way to build complicated-looking maps and terrains pretty easily.
Add a lens flare. This effect has components in both screen space and world space. For full credit, your lens flare should have at least 5 flare "drops", and the transparency of the drops should change depending on how far the light source is from the center of the screen. You do not have to handle the case where the light source is occluded by other geometry (but this is worth an extra whistle).
Perform collision detection with more complicated shapes. For complex scenes, you can even use the accelerated ray tracer and ray casting to determine if a collision is going to occur. Credit will vary with the complexity of the shapes and the sophistication of the scheme used for collision detection.
If you find something you don't like about the interface, or something you think you could do better, change it! Any really good changes will be incorporated into the next Animator. Credit varies with the quality of the improvement.
Add a function in your model file for drawing a new type of primitive. The following examples will definitely garner two bells; if you come up with your own primitive, you will be awarded one or two bells based on its coolness. Here are two examples:
(Variable) Use some sort of procedural modeling (such as an L-system) to generate all or part of your character. Have parameters of the procedural modeler controllable by the user via control widgets. In a previous quarter, one group generated these awesome results.
In addition to mood cycling, have your character react differently to UI controls depending on what mood they are in. Again, there is some weight in this item because the character reactions are supposed to make sense in a story telling way. Think about the mood that the character is in, think about the things that you might want the character to do, and then provide a means for expressing and controlling those actions.
Add flocking behaviors to your particles to simulate creatures moving in flocks, herds, or schools. A convincing way of doing this is called "boids" (see here for a demo and for more information). For full credit, use a model for your creatures that makes it easy to see their direction and orientation (for example, the yellow/green pyramids in the boids demo would be a minimum requirement). For up to one more bell, make a realistic creature model and have it move realistically according to its motion path. For example, a bird model would flap its wings when it rises and hold its wings outstretched when turning.
Implement a C2-Interpolating curve. There is already an entry for it in the drop-down menu. See this handout.
Add the ability to edit Catmull-Rom curves using the two "inner" Bezier control points as "handles" on the interpolated "outer" Catmull-Rom control points. After the user tugs on handles, the curve may no longer be Catmull-Rom. In other words, the user is really drawing a C1 continuous curve that starts off with the Catmull-Rom choice for the inner Bezier points, but can then be edited by selecting and editing the handles. The user should be allowed to drag the interpolated point in a manner that causes the inner Bezier points to be dragged along. See PowerPoint and Illustrator pencil-drawn curves for an example.
Implement picking of a part in the model hierarchy. In other words, make it so that you can click on a part of your model to select its animation curve. To recognize which body part you're picking, you need to first render all body parts into a hidden buffer using only an emissive color that corresponds to an object ID. After modifying the mouse-ing UI to know about your new picking mode, you'll figure out which body part the user has picked by reading out the ID from your object ID buffer at the location where the mouse clicked. This should then trigger the GraphWidget to select the appropriate curve for editing. If you're thinking of doing either of the inverse kinematics (IK) extensions below, this kind of interface would be required.
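The ID buffer only works if each body part is drawn in a unique, exact color, so a common trick is to pack the integer ID into the 24 bits of an RGB triple, draw with lighting disabled, and decode the pixel that glReadPixels returns under the mouse. A minimal sketch of the encode/decode pair (function names are illustrative):

```cpp
#include <cassert>

// Pack a body-part ID into a flat 24-bit RGB color. Draw the hidden picking
// pass with lighting disabled so the exact color survives rasterization.
void idToColor(unsigned id, unsigned char rgb[3]) {
    rgb[0] = (id >> 16) & 0xFF;
    rgb[1] = (id >> 8) & 0xFF;
    rgb[2] = id & 0xFF;
}

// Recover the ID from the pixel read back under the mouse click.
unsigned colorToId(const unsigned char rgb[3]) {
    return (unsigned(rgb[0]) << 16) | (unsigned(rgb[1]) << 8) | unsigned(rgb[2]);
}
```

Remember to render the picking pass to a buffer that is never displayed (e.g., the back buffer without swapping), and to disable dithering and texturing as well so the read-back color matches the one you wrote.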
If you implemented twist for your original model, the camera movement for your old modeler can give some unexpected results. For example, twist your model 90 degrees. Now try to do rotations as normal. This effect is called gimbal lock. Change the camera to use quaternions as a method for avoiding gimbal lock.
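The key idea is to store the camera orientation as a single unit quaternion and compose incremental rotations by quaternion multiplication, rather than accumulating separate Euler angles. A minimal sketch of the quaternion operations involved (the `Quat` struct and function names are illustrative):

```cpp
#include <cassert>
#include <cmath>

struct Quat { double w, x, y, z; };

// Unit quaternion for a rotation of `angle` radians about axis (ax, ay, az).
Quat fromAxisAngle(double ax, double ay, double az, double angle) {
    double len = std::sqrt(ax * ax + ay * ay + az * az);
    double s = std::sin(angle / 2) / len;
    return { std::cos(angle / 2), ax * s, ay * s, az * s };
}

// Hamilton product: composing rotations this way never hits gimbal lock.
Quat mul(Quat a, Quat b) {
    return { a.w * b.w - a.x * b.x - a.y * b.y - a.z * b.z,
             a.w * b.x + a.x * b.w + a.y * b.z - a.z * b.y,
             a.w * b.y - a.x * b.z + a.y * b.w + a.z * b.x,
             a.w * b.z + a.x * b.y - a.y * b.x + a.z * b.w };
}

// Rotate vector v by unit quaternion q: q * (0, v) * conjugate(q).
void rotate(Quat q, const double v[3], double out[3]) {
    Quat p  = { 0, v[0], v[1], v[2] };
    Quat qc = { q.w, -q.x, -q.y, -q.z };
    Quat r  = mul(mul(q, p), qc);
    out[0] = r.x; out[1] = r.y; out[2] = r.z;
}
```

On each mouse drag, multiply the stored orientation by a small axis-angle quaternion, renormalize occasionally to fight drift, and convert to a matrix for glMultMatrix when drawing.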
One difficulty with hierarchical modeling using primitives is the difficulty of building "organic" shapes. It's difficult, for instance, to make a convincing looking human arm because you can't really show the bending of the skin and bulging of the muscle using cylinders and spheres. There has, however, been success in building organic shapes using metaballs. Implement your hierarchical model and "skin" it with metaballs. Hint: look up "marching cubes" and "marching tetrahedra" -- these are two commonly used algorithms for volume rendering. For an additional bell, the placement of the metaballs should depend on some sort of interactively controllable hierarchy. Try out a demo application.
Metaball Demos: These demos show the use of metaballs within the modeler framework. The first demo allows you to play around with three metaballs just to see how they interact with one another. The second demo shows an application of metaballs to create a twisting snake-like tube. Both these demos were created using the metaball implementation from a past CSE 557 student's project.
Demo 1: Basic Texture Mapped Metaballs
Demo 2: Cool Metaball Snake
Implement projected textures. Projected textures are used to simulate things like a slide projector, spotlight illumination, or casting shadows onto arbitrary geometry. Check out this demo and read details of the effect at glBase, and SGI.
An alternative way to do animations is to transform an already existing animation by way of motion warping (animations). Extend the animator to support this type of motion editing.
If you have a sufficiently complex model, you'll soon realize what a pain it is to have to play with all the sliders to pose your character correctly. Implement a method of adjusting the joint angles, etc., directly though the viewport. For instance, clicking on the shoulder of a human model might select it and activate a sphere around the joint. Click-dragging the sphere then should rotate the shoulder joint intuitively. For the elbow joint, however, a sphere would be quite unintuitive, as the elbow can only rotate about one axis. For ideas, you may want to play with the Maya 3D modeling/animation package, which is installed on the workstations in 228. Credit depends on quality of implementation.
Another method to build organic shapes is subdivision surfaces. Implement these for use in your model. You may want to visit this to get some starter code.
We've talked about rigid-body simulations in class. Incorporate this functionality into your program, so that you can correctly simulate collisions and response between rigid objects in your scene. You should be able to specify a set of objects in your model to be included in the simulation, and the user should have the ability to enable and disable the simulation either using the existing "Simulate" button, or with a new button.
The hierarchical model that you created is controlled by forward kinematics; that is, the positions of the parts vary as a function of joint angles. More mathematically stated, the positions of the joints are computed as a function of the degrees of freedom (these DOFs are most often rotations). The problem of inverse kinematics is to determine the DOFs of a model that satisfy a set of positional constraints, subject to the DOF constraints of the model (a knee on a human model, for instance, should not bend backwards).
This is a significantly harder problem than forward kinematics. Aside from the complicated math involved, many inverse kinematics problems do not have unique solutions. Imagine a human model, with the feet constrained to the ground. Now we wish to place the hand, say, about five feet off the ground. We need to figure out the value of every joint angle in the body to achieve the desired pose. Clearly, there are an infinite number of solutions. Which one is "best"?
Now imagine that we wish to place the hand 15 feet off the ground. It's fairly unlikely that a realistic human model can do this with its feet still planted on the ground. But inverse kinematics must provide a good solution anyway. How is a good solution defined?
Your solver should be fully general and not rely on your specific model (although you can assume that the degrees of freedom are all rotational). Additionally, you should modify your user interface to allow interactive control of your model through the inverse kinematics solver. The solver should run quickly enough to respond to mouse movement.
If you're interested in implementing this, you will probably want to consult the
The primitives that you are using in your model are all built from simple two dimensional polygons. That's how most everything is handled in the OpenGL graphics world. Everything ends up getting reduced to triangles.
Building a highly detailed polygonal model often requires millions of triangles. This can be a huge burden on the graphics hardware. One approach to alleviating this problem is to draw the model using varying levels of detail. In the modeler application, this can be done by specifying the quality (poor, low, medium, high). This unfortunately is a fairly hacky solution to a more general problem.
First, implement a method for controlling the level of detail of an arbitrary polygonal model. You will probably want to devise some way of representing the model in a file. Ideally, you should not need to load the entire file into memory if you're drawing a low-detail representation.
Now the question arises: how much detail do we need to make a visually nice image? This depends on a lot of factors. Farther objects can be drawn with fewer polygons, since they're smaller on screen. See Hugues Hoppe's work on View-dependent refinement of progressive meshes for some cool demos of this. Implement this or a similar method, making sure that your user interface supplies enough information to demonstrate the benefits of using your method. There are many other criteria to consider that you may want to use, such as lighting and shading (dark objects require less detail than light ones; objects with matte finishes require less detail than shiny objects).
Many 3D models come in the form of static polygon meshes. That is, all the geometry is there, but there is no inherent hierarchy. These models may come from various sources, for instance 3D scans. Implement a system to easily give the model some sort of hierarchical structure. This may be through the user interface, or perhaps by fitting a model with a known hierarchical structure to the polygon mesh (see this for one way you might do this). If you choose to have a manual user interface, it should be very intuitive.
Through your implementation, you should be able to specify how the deformations at the joints should be done. On a model of a human, for instance, a bending elbow should result in the appropriate deformation of the mesh around the elbow (and, if you're really ambitious, some bulging in the biceps).
Create a character whose physics can be controlled by moving a mouse or pressing keys on the keyboard. For example, moving the mouse up or down may make the knees bend or extend (so your character can jump), while moving it left or right could control the waist angle (so your character can lean forward or backward). Rather than have these controls change joint angles directly, as was done in the modeler project, the controls should create torques on the joints so that the character moves in very realistic ways. This monster bell requires components of the rigid body simulation extension above, but you will receive credit for both extensions as long as both are fully implemented. For this extension, you will create a hierarchical character composed of several rigid bodies. Next, devise a way for the user to interactively control your character.
This technique can produce some organic looking movements that are a lot of fun to control. For example, you could create a little Luxo Jr. that hops around and kicks a ball. Or, you could create a downhill skier that can jump over gaps and perform backflips (see the Ski Stunt example below).
SIGGRAPH paper - http://www.dgp.toronto.edu/~jflaszlo/papers/sig2000.pdf
Several movie examples - http://www.dgp.toronto.edu/~jflaszlo/interactive-control.html
Ski Stunt - a fun game that implements this monster bell - Information and Java applet demo - Complete Game (win32)
If you want, you can do it in 2D, like the examples shown in this paper (in this case you will get full monster bell credit, but half credit for the rigid body component).