Modeler is a 3D modelling program that lets you design and edit 3D scenes, similar to tools such as Maya, 3DS Max, and Blender. In Modeler, you can set up and arrange 3D elements in a scene, as well as determine their appearance based on the scene's lighting conditions. In addition, in Project 4, you will extend this program to animate your scene, that is, specify how the scene's contents move and change over time.
You'll be implementing many of the core elements of this program, spanning the areas of geometric modelling (specifying the basic 3D shapes of objects in the scene), geometry processing (editing or refining the 3D shapes of objects to achieve certain goals), hierarchical scene modelling (specifying the relative arrangement of objects in the scene), and shading (specifying the appearance of the objects). There are five total requirements:
Geometric Modelling: Constructing Surfaces of Revolution
Geometry Processing: Mesh Smoothing
Hierarchical Modeling
Shading: Blinn-Phong Point Light Shader
Shading: Additional Shaders
Getting Started
Download and get familiar with the Sample Solution for a good idea of how everything works. See Program Usage for details.
If you'd like an overview of what shaders are, visit here.
Lighthouse3d also has some pretty detailed tutorials.
Requirements
Implement the features described below. The skeleton code has comments marked with // REQUIREMENT denoting
the locations at which you probably want to add some code.
Surface of Revolution
Write code to create a 3D surface of revolution mesh in the SurfaceOfRevolution::CreateMesh method in scene/components/surfaceofrevolution.cpp. Your shape must (see the sketch after this list for one possible approach):
have appropriate positions for each vertex
have appropriate per-vertex normals
have appropriate texture coordinates for each vertex
have appropriate vertex connectivity (triangle faces)
use the "subdivisions" argument to determine how many bands your surface is sliced into.
Mesh Smoothing
Add functionality, including a simple data structure, for filtering a mesh. In particular, for each vertex of a mesh, take a weighted sum of the vertex and its neighbors to produce a new mesh with the same connectivity as the original mesh, but with updated vertex positions. Vertices are neighbors if they share an edge. The filter weights around a given vertex are 1 for the vertex itself and a/N for each neighboring vertex; then normalize by dividing every weight by the sum of all the weights. Here "a" is a parameter that controls smoothing or sharpening (typically in the range (-1/2, 1/2)), and N is the number of neighboring vertices (also called the vertex's "valence"), which varies from vertex to vertex. As with image processing, you should not filter in place on the input mesh; instead, read vertex positions from the input mesh and compute new positions to write into the output mesh. You'll add this functionality in MeshProcessing::FilterMesh in the file meshprocessing.cpp.
Note that you'll need to recalculate the per-vertex normals!
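Here is a minimal, standalone sketch of the filter itself; the types and function signature are illustrative, and FilterMesh in the skeleton uses its own mesh containers.

    // A minimal sketch of the smoothing filter with standalone containers; the
    // skeleton's Mesh class and FilterMesh signature will differ. Two vertices are
    // treated as neighbors when they appear together in a triangle (share an edge).
    #include <cstddef>
    #include <unordered_set>
    #include <vector>

    struct Vec3 { float x, y, z; };

    std::vector<Vec3> FilterVertices(const std::vector<Vec3>& verts,
                                     const std::vector<unsigned int>& triIndices, float a) {
        // Build one-ring adjacency from the triangle index list.
        std::vector<std::unordered_set<unsigned int>> neighbors(verts.size());
        for (std::size_t t = 0; t + 2 < triIndices.size(); t += 3) {
            unsigned int v[3] = {triIndices[t], triIndices[t + 1], triIndices[t + 2]};
            for (int e = 0; e < 3; ++e) {
                neighbors[v[e]].insert(v[(e + 1) % 3]);
                neighbors[v[(e + 1) % 3]].insert(v[e]);
            }
        }

        // Weight 1 on the vertex, a/N on each of its N neighbors, then normalize by
        // the sum of the weights (1 + a). Read from the input, write to the output.
        std::vector<Vec3> out(verts.size());
        for (std::size_t i = 0; i < verts.size(); ++i) {
            float N = static_cast<float>(neighbors[i].size());
            if (N == 0) { out[i] = verts[i]; continue; }   // isolated vertex: leave unchanged
            float wNbr = a / N;
            float sum = 1.0f + N * wNbr;                   // = 1 + a
            Vec3 p = verts[i];
            for (unsigned int n : neighbors[i]) {
                p.x += wNbr * verts[n].x;
                p.y += wNbr * verts[n].y;
                p.z += wNbr * verts[n].z;
            }
            out[i] = {p.x / sum, p.y / sum, p.z / sum};
        }
        return out;   // remember to recompute per-vertex normals from the new positions
    }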
Hierarchical Modeling
Your artifact for this assignment will involve creating a hierarchical model. This model must have at least two levels of branching.
There are two parts to this requirement; they will not be intensive and are mostly to help you with your artifact!
One week after the project is assigned, submit an initial tree diagram of your planned hierarchical model as well as a visual representing how your model might look. Submit a final tree diagram corresponding to your implementation with your artifact if it has changed.
Implement a UI to control the relevant joint transformations of your model.
The tree diagram is to help you figure out what your model will be early, and to practice thinking about empty nodes, centers of rotation, and so on. Your initial
diagram will not be graded for correctness, but will be graded for effort. You should submit an updated diagram with your final artifact that reflects the actual
implementation of your model.
The UI requirement gives you an easy way to show off your model and makes it much easier to animate. It also will help you learn how to add UI elements to the
application.
Point Light Blinn-Phong Shader
Add support for the scene's point lights by editing the files assets/blinn-phong.vert and assets/blinn-phong.frag. You need to include quadratic distance attenuation. Hint: you may not have to edit one of these files. We also include a few extra shaders, including assets/texture.frag and assets/texture.vert, to provide some basic examples of how to do things like sample from textures. You do not need to worry about these unless you're doing bells and whistles.
ShaderProgram::BuiltinUniforms in the file resource/shaderprogram.h contains a list of uniforms supplied by default to your shader, as long as you declare them properly in your shader. GLRenderer::SetUniforms in the file opengl/glrenderer.cpp is where they are actually passed in.
Shadow Mapping Shader
Extend your Blinn-Phong shader to handle shadow mapping for point lights. This will involve two steps:
Write a shader that will generate your shadow map.
Modify your Blinn-Phong shader to look up the appropriate location in the shadow map to compute your shadowing term.
Shadow mapping can cause several artifacts such as "shadow acne" and "Peter-panning". You should add in a shadow bias term
to reduce the effects of shadow acne, but fixing other artifacts will be extra credit.
Create an Additional Shader
Create one or more additional shaders from the list below, worth at least 3 whistles (or 1.5 bells) in total. This is required and will not count as extra credit; however, any additional bells or whistles will be considered extra credit. You must keep your point light Blinn-Phong shader separate, so we can grade it separately, and you must exhibit this shader in your Modeler binary. Consult OpenGL Shading Language ("the orange book") for some excellent tips on shaders. Ask your TAs if you would like to implement a shader that isn't listed below. Credit for any shader may be adjusted depending on the quality of the implementation, but any reasonable effort should at least earn you one of the required whistles.
Program Usage
The Hierarchy in the left pane represents all the objects in the current Scene. A child object inherits the transformations applied to the parent object, like in a scene graph. Parent-child relationships can be changed by simply dragging the objects around in the pane. You may also find creating Empty objects as parents to be useful in building your model. Double-click an object to change its name.
The Assets tab in the left pane represents things like Textures, Shaders, Materials, and Meshes used by the Scene. Together, the Assets and Scene Graph form a "Scene", and can be saved out or loaded from disk.
Selecting an asset or an object will display their properties on the right side in the Inspector. Here you can change their properties, assign different textures, materials, etc. For scene objects, you can also hide properties that should not be changed.
Modeler uses a component-based system for scene objects. Every object has a "Transform" component that represents its Translation, Rotation, and Scale in 3D space. Most rendered objects will have a "Geometry" component that defines the shape of the object, and a "Mesh Renderer" that uses a "Material" asset to define how to render that shape. Lights will have an additional "Light" component that defines the light properties.
The Console at the bottom is mostly for debugging purposes; at any time in your code, you can call Debug::Log.WriteLine to print to this console. If you hide the Inspector or any of the other panels in the program, right-click on the toolbar to show them again.
The Scene view in the middle is a rendering of your Scene Graph. You can change how it's rendered: as points, wireframe, or fully shaded. If you are having trouble with the orientation of the Perspective view, try switching to an Orthographic view.
Camera Controls:
Alt + LMB: Orbits the camera
RMB: Moves the camera
Scroll Wheel / Alt + RMB: Zooms the camera
F: Moves the camera back to center on the selected object
Space: Splits the view into four separate ones
Scene Controls:
LMB: Scene Manipulation depending on Manipulation Mode
Select: clicking selects the object in the scene
Translate: after clicking on an object, dragging the individual axes will move the object
Rotate: after clicking on an object, dragging the rings will rotate the object
Scale: after clicking on an object, dragging the bars will scale the object
Local / World: Dictates whether the manipulation happens in the object's local space or world space
Q: switches to Select mode
W: switches to Translate mode
E: switches to Rotate mode
R: switches to Scale mode
Delete / Fn + Backspace: deletes the selected object
Creating a new shader:
Go to Assets->Create->Shader Program to create a new shader program
Select your new Shader Program from the Assets pane and edit the properties to point to your .vert and .frag shader files.
Go to Assets->Create->Material to create a new Material
Edit the properties of the new Material to use your new Shader Program asset. Now you can edit the inputs that go into the shader.
If you change your shaders, you will need to go back into the Shader Program and re-point them towards the edited file.
Skeleton Program
The Modeler codebase is quite substantial (not so much a skeleton this time around). It's a good idea to get an understanding of what's going on.
Modeler has two major components: the Engine and the UI. For the requirements, you will most likely only be concerned with the Engine, unless you attempt a bell or whistle that goes above and beyond what is currently supported. Modeler loads one Scene at a time. Each Scene has an Asset Manager that handles loading all the Assets belonging to the Scene. It also owns all the Scene Objects in the scene, which are stored in a map using unique identifiers. A Scene Object contains a mixture of Components that define its behaviour. For instance, a Transform Component, which defines the Scene Object's transformations, plus a Point Light Component, which defines light properties, makes a point light. Components are built from Properties that are able to communicate responsively with the UI and can be serialized into the file format. A Renderer takes a Scene, does a depth-first traversal of the Scene Objects that comprise the Scene Graph, and renders each component that is renderable. It has its own Resource Manager that handles caching GPU versions of assets.
For a more in-depth explanation on the codebase, please visit this document.
Surface of Revolution
In OpenGL, all scenes are made of primitives like points, lines, and triangles. Most 3D objects are built from a mesh of triangles. In this project, you will implement a surface of revolution by creating a triangle mesh. For each mesh, you define a list of vertices (the points that make up the primitive) with normals and texture coordinates, and an array of indices specifying triangles. This is later used by the OpenGL Renderer through the method glDrawElements. See opengl/glmesh.cpp.
Surface Normals
Surface normals are perpendicular to the plane that's tangent to a surface at a given vertex. Surface normals are used for lighting calculations, because they help determine how light reflects off of a surface.
In OpenGL, we often want to approximate smooth shapes like spheres and cylinders using only triangles. One way to make the lighting look smooth is to use the normals from the shape we're trying to approximate, rather than just making them perpendicular to the polygons we draw. This means we calculate the normals for each vertex (per-vertex normals), rather than for each face (per-face normals). Normals are supplied to OpenGL in a giant array, in the same order the vertex positions array is built. Shaders allow us to get even smoother lighting by calculating the normals at each pixel. You can compare these methods below:
Per-face
Per-vertex
Per-pixel
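When per-vertex normals are not known analytically (for example, after smoothing a mesh), a common approach is to sum the area-weighted face normals around each vertex and normalize. Here is a standalone C++ sketch with illustrative types, not the skeleton's API:

    // Smooth per-vertex normals by averaging adjacent face normals.
    #include <cmath>
    #include <cstddef>
    #include <vector>

    struct Vec3 { float x, y, z; };
    static Vec3 Sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
    static Vec3 Cross(Vec3 a, Vec3 b) {
        return {a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x};
    }

    std::vector<Vec3> ComputeVertexNormals(const std::vector<Vec3>& verts,
                                           const std::vector<unsigned int>& indices) {
        std::vector<Vec3> normals(verts.size(), {0, 0, 0});
        for (std::size_t t = 0; t + 2 < indices.size(); t += 3) {
            Vec3 a = verts[indices[t]], b = verts[indices[t + 1]], c = verts[indices[t + 2]];
            // Cross product length is twice the triangle area, so this sum is area-weighted.
            Vec3 fn = Cross(Sub(b, a), Sub(c, a));
            for (int k = 0; k < 3; ++k) {
                Vec3& n = normals[indices[t + k]];
                n.x += fn.x; n.y += fn.y; n.z += fn.z;
            }
        }
        for (Vec3& n : normals) {                // normalize the accumulated sums
            float len = std::sqrt(n.x * n.x + n.y * n.y + n.z * n.z);
            if (len > 0) { n.x /= len; n.y /= len; n.z /= len; }
        }
        return normals;
    }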
Texture Mapping
Texture mapping allows you to "wrap" images around your model by mapping points on an image (called a texture) to vertices on your model. For each vertex, you indicate the coordinate that vertex should apply to as a 2D pair (U, V) or (S, T) where U or S is the X-coordinate and V or T is the Y-coordinate of the point on the texture that should line up with the vertex. UVs are passed as a giant array in the same manner normals and vertex positions are:
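For example, a single textured quad could be laid out with parallel arrays like this (names and values are purely illustrative; the skeleton stores these arrays in its own mesh structures):

    // The i-th position, normal, and UV all describe the same vertex; the index array
    // references vertices by number. A unit quad in the xy-plane, facing +z.
    float positions[] = { 0, 0, 0,   1, 0, 0,   1, 1, 0,   0, 1, 0 };
    float normals[]   = { 0, 0, 1,   0, 0, 1,   0, 0, 1,   0, 0, 1 };
    float uvs[]       = { 0, 0,      1, 0,      1, 1,      0, 1    };
    unsigned int indices[] = { 0, 1, 2,   0, 2, 3 };   // two triangles forming the quad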
Using Textures In Your Model
When you want to use a texture, you'll need to do the following:
Import a texture into your scene
Create a Shader Program that utilizes shaders that sample from textures
Create a Material that uses that Shader Program, and set the textures
Hierarchical Modelling
Model Requirements
For the artifact, you will create a Hierarchical Model. Create a hierarchy of nodes, a combination of Empty nodes and Shape nodes. In Animator, you will end up animating your Model (or you can create a new one) by manipulating the set of transforms and other properties on all these nodes over time. Hide any properties you do not want exposed with the Inspector's "Edit Properties" on each node.
While it does not have to be a masterpiece, we do require that it has at least two levels of branching. What this means is that if you have a torso, and attach two arms to it, this is one level of branching. The torso splits into each arm. If each arm then has three fingers, that is two levels of branching, since the arm splits into fingers. Note that if you only have one finger, then that does not add additional branching!
Tree Diagram
Your tree diagram should illustrate the hierarchy of nodes that make up your model, and
describe the transforms that each node has. The transforms should clearly indicate which
values are variable (i.e. can be animated) and which are constant. These diagrams can be hand-drawn
or created with a program (LaTeX, PowerPoint, or similar). With your initial diagram,
also include some visual to help the TAs know what you're trying to model. This can be a
rough sketch, a diagram, a real-world image with some lines and annotations on top, or
if you're ahead, a screenshot of a rendering.
Here is a very barebones example of the tree diagram requirement. Consider the simplest
arm model, consisting of two long boxes with a single joint in the center. Note that this
example does not have the required amount of branching.
An A+ diagram for this model would look like this:
You would get full correctness credit for having the same text and hierarchy structure
as shown.
The color-coding and layout, as well as the A1 and A2 metanodes, make this diagram
easier to interpret, and thus push it above and beyond.
Model UI
When demoing your model, or when keyframing it for your animation, you don't want to be clicking down into
different hierarchy nodes all the time in order to reach the right properties. You could instead have a
master set of sliders that you can easily manipulate.
For example, in the simple hierarchy above, clicking down into the "A2 Container" node each time you want
to change the joint angle is somewhat tedious (and will only get harder with more complex hierarchies).
For this requirement, you will add some UI elements to control the variables in your hierarchical model
transformations.
The easiest way to do this is to create an empty node at the top level of your hierarchy, which will
serve as a container for your Hierarchy UI. Then, we can implement a "HierarchyUI" Component (see
Skeleton), and add this component to the empty container. Here is a step-by-step
overview of how to do this (pay special attention to steps 3 and 4):
Create a new class in Engine/src/scene/components that inherits from Component.
Add the new ComponentType to the enum in enum.h, the string mappings in component.cpp, and the includes in components.h.
Add appropriate properties to the Component for each variable in your hierarchy. Refer to
other components (e.g. Engine/src/scene/components/camera.h) for examples. The Editor will automatically create a UI element
that controls each property, e.g. a slider for RangeProperty, text box for DoubleProperty, file selector for FileProperty, etc.
Look at the property types in Engine/src/properties for the different types.
Add a handler for the ValueChanged signal for each of your properties. This handler should update your hierarchy's transformations
since it will be called every time the user changes the value of the property. You can use the SceneObject::FindDescendant
function to help. This will probably require you to have passed in the scene root somewhere, e.g. in the constructor.
Implement the rest of the Component interface by implementing the
Component::GetType, Component::GetBaseType, and Component::DuplicateComponent functions. These should all
be trivial.
You may also want to override the Component::Serialize function if you would like to be able to save the
property values in your component.
If you wish to do this, you will also need to add your component type into the SceneObject::AddComponent(ComponentType) function.
Find a way to actually add this component to a node in your scene. One way to do this is to have the user manually add the HierarchyUI to
a node. To do this, create a QAction in MainWindow::CreateSceneObjectsActions that adds a component to the selected node
(see the example for the QAction add_envmap_action), and then add the QAction to the appropriate menu in
MainWindow::CreateMenus.
Using this HierarchyUI, you can also have more complex dependencies between node transformations.
For example, if you want to use the same variable to control multiple joints symmetrically,
you can do that in the OnValueChanged function.
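As a concrete illustration of the pattern, here is a self-contained C++ sketch of one value driving two joints through a value-changed callback. The SliderProperty and Joint types are stand-ins invented for this example; in the skeleton, a RangeProperty's ValueChanged signal and the nodes' Transform properties play these roles.

    // One UI-facing value symmetrically drives several joint transforms via a callback.
    #include <functional>

    struct SliderProperty {
        double value = 0.0;
        std::function<void(double)> OnValueChanged;   // hooked up like the ValueChanged signal
        void Set(double v) { value = v; if (OnValueChanged) OnValueChanged(v); }
    };

    struct Joint { double angleDegrees = 0.0; };      // stand-in for a node's rotation property

    int main() {
        Joint leftElbow, rightElbow;
        SliderProperty elbowAngle;

        // One slider controls both elbows; a HierarchyUI component would do the same by
        // looking up the nodes (e.g. via SceneObject::FindDescendant) and updating their
        // Transform properties inside the handler.
        elbowAngle.OnValueChanged = [&](double v) {
            leftElbow.angleDegrees = v;
            rightElbow.angleDegrees = -v;             // mirrored on the other side
        };

        elbowAngle.Set(30.0);                         // as if the user dragged the slider
        return 0;
    }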
If you want to implement some UI elements that don't classify as one of the existing properties (e.g. a button to execute an action), then you might have to implement a new Property type. Here are some places to look:
Engine/src/properties contains the Property classes.
Editor/src/widgets/ contains a QWidget for each property type. These widgets all implement InspectableWidget.
Editor/src/inspector/inspector.cpp converts from property type to InspectableWidget.
This HierarchyUI component is a very specialized component, since it is dependent on a predefined scene structure.
Usually, components are self contained and function specifically on the node itself.
Blinn-Phong Shader
A shader is a program that controls the behavior of a piece of the graphics pipeline on your graphics card.
Vertex shaders are run once for each vertex in your model. They transform it into device space (by applying the modelview and projection matrices), and determine what each vertex's properties are.
Fragment (or pixel) shaders are run once for every pixel to determine its color.
Geometry shaders can turn a single point into multiple points. They are useful for advanced modeling, e.g., refining a coarse triangle mesh into a smooth surface with many triangles. They can also be used to visualize vertex normals. But they are not part of this requirement.
Shaders determine how the scene lighting affects the coloring of 3D surfaces. There are two basic kinds of lights:
Point Light - a light that is emitted from a point in world space. The intensity of the light is attenuated (reduced) based on how far it travels from this point. In the physical world, the intensity of a point light decreases with the square of the distance. We can model this with quadratic distance attenuation: dividing the intensity by some function a*r^2+b*r+c, where r is the distance from the light source and a, b, and c are chosen arbitrarily.
Directional Light - a light that always hits a surface from a certain direction, no matter where it is. It has no attenuation.
A shading model determines how the final color of a surface is calculated from a scene's light sources and the object's material. We have provided a shader that uses the Blinn-Phong shading model for scenes with directional lights. See lecture notes for details on the Blinn-Phong shading model.
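For reference, the per-light math that a Blinn-Phong point-light fragment shader evaluates can be written out as plain C++. This is a hedged sketch of the shading equation only; the vector helpers, material parameters, and attenuation coefficients are illustrative and will correspond to (but not match by name) your shader's uniforms.

    // Blinn-Phong contribution of one point light at a surface point P with unit normal N.
    #include <algorithm>
    #include <cmath>

    struct Vec3 { float x, y, z; };
    static Vec3 Sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
    static Vec3 Add(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
    static Vec3 Scale(Vec3 a, float s) { return {a.x * s, a.y * s, a.z * s}; }
    static float Dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
    static float Length(Vec3 a) { return std::sqrt(Dot(a, a)); }
    static Vec3 Normalize(Vec3 a) { float l = Length(a); return l > 0 ? Scale(a, 1.0f / l) : a; }

    Vec3 BlinnPhongPointLight(Vec3 P, Vec3 N, Vec3 eyePos, Vec3 lightPos, Vec3 lightIntensity,
                              Vec3 kd, Vec3 ks, float shininess,
                              float attA, float attB, float attC) {
        Vec3 L = Sub(lightPos, P);
        float r = Length(L);                   // distance from the light to the surface point
        L = Normalize(L);
        Vec3 V = Normalize(Sub(eyePos, P));
        Vec3 H = Normalize(Add(L, V));         // half vector between light and view directions

        float diff = std::max(Dot(N, L), 0.0f);
        float spec = (diff > 0.0f) ? std::pow(std::max(Dot(N, H), 0.0f), shininess) : 0.0f;

        // Quadratic distance attenuation: divide by a*r^2 + b*r + c (clamped away from zero).
        float atten = 1.0f / std::max(attA * r * r + attB * r + attC, 1e-4f);

        Vec3 color = Add(Scale(kd, diff), Scale(ks, spec));
        return Scale({color.x * lightIntensity.x, color.y * lightIntensity.y,
                      color.z * lightIntensity.z}, atten);
    }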
Shadow Mapping
Shadow mapping involves two steps: Constructing the shadow maps, and computing shadowing
terms.
Our framework includes a prerender pass each frame, where we compute one
shadow map for each point light in the scene. Specifically, the prerender pass traverses
the hierarchy looking for nodes with EnvironmentMap components. An EnvironmentMap component indicates that we want to store some sort of cubemap around the
node center. The prerender pass will then render the scene (minus the node itself and
its children) 6 times, one for each face of the cube, so that every outgoing direction from
the node center is covered. The faces of this cubemap are oriented according to the world coordinate system.
This EnvironmentMap component can be used for environment mapping as well as simple reflection and refraction effects. You can set the resolution of each face of the environment map, as well as the near plane and far plane that it renders from. Most importantly for shadow mapping, you can also set the Render Material for the cubemap. If the Render Material is set, then every object in the scene will be rendered to the cubemap using that set of shaders instead of their own default shader. For shadowmaps, you want to store
not the actual lit appearance of the scene in each direction, but the distance of the nearest object from the cubemap center in each direction. You will need to write this depth
shader.
The program assumes that any EnvironmentMaps for point light nodes are shadowmaps, and
will automatically pass them into your Blinn-Phong shader under uniform samplerCube
point_light_shadowmaps[4]. In your lighting code, you will then have to compute the
distance to the point light and then compare it to the distance stored in the shadowmap;
if the point you're lighting is farther away from the light than the value stored in the
shadowmap, then your point is in shadow. Note that texture() lookups from
cubemaps are simply done using a direction (in the cubemap's coordinate system, which we
have enforced to match the world coordinate system). See this page for more details (ignore the discussion about gsamplerCubeShadow for this project).
To prevent a surface from shadowing itself due to precision errors, you should
create a user-controllable shadow bias. The surface should be unshadowed if the distance
stored in the shadowmap is nearly equal to (i.e. within this bias amount from) the distance
from the shaded point to the light.
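The comparison itself reduces to something like the following sketch (written in C++ for clarity; the variable names are illustrative, and both distances are assumed to be in the same units your depth shader wrote into the cubemap):

    // Shadow test with a bias term. storedDistance is the nearest-occluder distance the
    // depth shader stored for the direction from the light toward this fragment.
    bool InShadow(float distanceToLight, float storedDistance, float shadowBias) {
        // Unshadowed if the two distances agree to within the bias; shadowed only when the
        // fragment is farther from the light than the nearest occluder by more than the bias.
        return distanceToLight > storedDistance + shadowBias;
    }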
To actually see some shadows, you will then have to perform the following steps:
Create your depth-mapping fragment and vertex shaders
Create a new shader (Assets > Create > Shader Program) and point it to your fragment and vertex shaders
Create a new material (Assets > Create > Material) and point it to the new Shader Program
Add a point light to the scene
Select the point light's node, then click the SceneObject menu and select "Add EnvironmentMap to selected"
With the point light selected, look at the inspector, under which you should see the EnvironmentMap properties. Set these as appropriate, making sure to point the RenderMaterial to your newly created material
Ensure that your shadow receivers are using a material that has your shadow-mapping code in it (just using your blinn-phong material should work).
You can automate most of this (as is done in the reference solution), such that point lights are always created (Editor/src/mainwindow.cpp) with an EnvironmentMap component pointing to an automatically loaded (Engine/src/assets/assetmanager.h) DepthMap material.
Additional Shaders
These are additional shader ideas that you can create. You are required to create another shader(s) worth at least 3 whistles (or 1.5 bells). Additional bells or whistles are extra credit.
You can use the sample solution Modeler to develop some of these shaders, but others require texture maps to be provided. We have provided shader_textured.frag and shader_textured.vert as reference for you on how to include texture data into your image.
See below for instructions on how to use these in your model.
Alpha Test Shader
Some geometry has complex silhouettes but flat, planar surfaces. Rather than creating numerous triangles, we can use an alpha threshold or color key to discard certain pixels on a texture. This technique is especially useful for foliage. For an extra whistle, make it cast alpha-tested shadows too.
Reflection Shader
Create a shader that simulates a reflection effect by determining a reflected ray direction and looking up that ray in an environment map. To implement this, note that any geometry node with an EnvironmentMap component will provide that cubemap to its shaders
via uniform samplerCube environment_map. See the shadow mapping section above for more details on obtaining the environment map. Regarding physical accuracy, the same caveat about the distant scene assumption for refraction (see below) applies here.
Refraction Shader
Create a shader that simulates a refraction effect by determining a refracted ray direction and looking up that ray in an environment map. See the shadow mapping section above for more details on obtaining the environment map.
Note that there are two reasons why this isn't an accurate refraction shader. First,
a true refracted ray should get refracted once upon entering the object and once upon leaving it, while here we can only refract the ray once. Second, the environment map is only an
approximation (a so-called "distant scene approximation") of the incident light on the object. For example, a ray pointing in direction v at the top of the object will
look up the same environment map value as a ray pointing in direction v at the
bottom of the object, but the true incident light along those rays will only be equal if the surface in direction v is infinitely
far away from the object center.
Spot Light Shader
Create a shader that supports a spot light source, and add a third light source to your Modeler. We should be able to adjust the spot light parameters via the UI.
Cartoon Shader
Create a shader that produces a cartoon effect by drawing a limited set of colors and a simple darkening of silhouettes for curved objects, based on the normal and viewing direction at a pixel. This per-pixel silhouette-darkening approach will work well in some cases around curved surfaces, but not all. Additional credit will be given based on how well the silhouettes are done, and how good the cartoon effect looks.
Schlick Shader
Create a shader, and sliders to control it, that uses the Schlick approximation to approximate the contribution of the Fresnel factor to the specular reflection of light (roughly F = F0 + (1 - F0)*(1 - cos(theta))^5, where F0 is the reflectance at normal incidence).
Vertex Shader
Vertex shaders, instead of transformation matrices, can be used to morph and deform geometry in complex ways, and it can be really efficient since the calculations are run on the GPU. See here, here, and here for examples of interesting geometry deformations done with vertex shaders. And see here for an even more impressive example: the swimming animation is done entirely by manipulating vertex positions in the vertex shader. Add at least one slider that deforms geometry in a useful way by changing vertex positions (and normals, if relevant) within a vertex shader.
Tessellated Procedural Shader
Make a shader that produces an interesting, repeating pattern, such as a brick pattern, without using a texture.
Normal Mapping Shader
This shader uses a texture to perturb the surface normals of a surface to create the illusion of tiny bumps, without introducing additional geometry. Along with the normals, you'll also want your vertex shader to take in tangent and binormal vectors.
Diffraction Shader
Create a shader that produces a diffraction effect when you move around the object.
Anisotropic Shader (x2)
Create a shader that produces anisotropic specular highlighting, creating a shiny metal appearance. Additionally, add sliders to control the magnitude in 2 perpendicular directions.
Cloud / Noise Shader (x3)
Create a shader that uses noise functions (like Perlin noise) to generate clouds. You may not use textures for this shader. Credit depends on the realism of the clouds.
Using Shaders In Your Model
Shader files are loaded, compiled, and linked by ShaderProgram objects. If you want to add a shader:
Go to Assets->Create->Shader Program to create a Shader Program.
Find the new shader in the Assets pane and set the Vertex/Fragment shaders to point to your shader files.
Similarly create a new material and set it to use the Shader Program you created.
Tip: If you have an error in your shader code, you do not have to restart Modeler. Instead, fix your shader, then just set the Shader Program to point to the same shader file again.
Important: Like the rest of your Modeler binary, your shaders must work on the lab machines! Please test them on the lab machines before the due date.
Turn-in Information
Please follow the general instructions here. More details below:
Artifact Submission
For the artifact, you will create a Hierarchical Model using Modeler.
As described in Hierarchical Modelling, this model must have at
least two levels of branching. Each person must submit their own Hierarchical Model!
Create and turn in a short video screen capture (.mp4 format, no longer than 30 seconds) showcasing your hierarchical model. Maybe move the camera around to get some different angles, or move the transform controls to show the hierarchy in action as you move it to a different pose. You can use any video capture software you'd like, although we ask that you submit the video in .mp4 format along with a screenshot to go with it. One such program is Open Broadcaster: add a Source (Display or Window Capture), and hit Start Recording after changing some Output settings like where to save the file and what format to use.
In order to get credit for the artifact, we ask that you (and your partner, if you have one) both save your models as NETID1.yaml and NETID2.yaml and push them to the Modeler repository so we can make sure they satisfy the requirements.
Important: Use File->Save Scene As to save your progress as a .yaml file. We are still working on improving and thoroughly testing the Modeler program, and there is currently no undo functionality, so save frequently!
Bells and Whistles
Bells and whistles are extra extensions that are not required, and will be worth extra credit. You are also encouraged to come up with your own extensions for the project. Run your ideas by the TAs or Instructor, and we'll let you know if you'll be awarded extra credit for them. If you do decide to do something out of the ordinary (that is not listed here), be sure to mention it in a readme.txt when you submit the project.
Shadowmapped shadows are often aliased. To make shadow edges smoother, interpolate shadow values between shadow map texels.
One way to reduce the "shadow acne" artifact is to render the backfaces of objects into the shadow map rather than the front faces. This will prevent front faces from self-shadowing (except in very thin objects). Implement this for a whistle.
Come up with another whistle and implement it. A whistle is something that extends the use of one of the things you are already doing. It is part of the basic model construction, but extended or cloned and modified in an interesting way. Ask your TAs to make sure this whistle is valid.
Implement Loop Subdivision for closed, watertight meshes. The skeleton for this bell is already set up in meshprocessing.cpp. For an extra whistle, detect boundary edges on non-watertight meshes and subdivide those appropriately. For an additional bell, add the ability to label a vertex or edge as extraordinary (i.e. preserved on the subdivided mesh) and change the subdivision weights accordingly.
Render a flat mirror in your scene. As you may already know, OpenGL has no
built-in reflection capabilities. To simulate a mirror, you'll want to
reflect the world about the mirror's plane and then draw the reflected world,
before doing the regular scene drawing pass. Use the stencil buffer to make
sure that the reflected geometry is clipped inside the boundaries of the mirror.
The stencil buffer is similar to a Z buffer and is used to restrict drawing to certain portions of
the screen. See Scott Schaefer's
site for more information. In addition, the NeHe game development site has
a detailed tutorial.
Build a complex shape as a set of polygonal faces, using triangles (either the provided primitive or straight OpenGL triangles) to render it. Examples of things that don't count as complex: a pentagon, a square, a circle. Examples of what does count: a dodecahedron, a 2D function plot (z = sin(x^2 + y)), etc. Note that using the dodecahedron primitive (or other primitives apart from triangles) does not meet this requirement.
Implement a smooth curve functionality. Examples of smooth curves are here. These curves are a great way to lead into swept surfaces (see below). Functional curves will need to be demonstrated in some way. One great example would be to draw some polynomial across a curve that you define. Students who implement swept surfaces will not be given a bell for smooth curves. That bell will be included in the swept surfaces bell. Smooth curves will be an important part of the animator project, so this will give you a leg up on that.
Implement one or more non-linear transformations applied to a triangle mesh. This entails creating at least one function that is applied across a mesh with specified parameters. For example, you could generate a triangulated sphere and apply a function to a sphere at a specified point that modifies the mesh based on the distance of each point from a given axis or origin. Credit varies depending on the complexity of the transformation(s) and/or whether you provide user controls (e.g., sliders) to modify parameters.
Heightfields are great ways to build complicated looking maps and terrains pretty easily. Implement a heightfield to generate terrain in an interesting way. You might try generating fractals, or loading a heightfield from an image (i.e., allowing the user to design the height of the terrain by painting the image in an image editor and importing it).
Add a lens flare. This effect has components in both screen space and world space.
For full credit, your lens flare should have at least 5 flare
"drops", and the transparency of the drops should change depending on
how far the light source is from the center of the screen. You do not
have to handle the case where the light source is occluded by other geometry
(but this is worth an extra whistle).
(x2) Implement shadow-mapping for directional lights. There are many steps to this,
but you can refer to the point light shadowmapping code for help:
Look for directional lights in the prerender pass (Scene::RenderPrepass()).
Associate a GLRenderableTexture with each of your directional lights; the
equivalent uses of GLRenderableCubemap for point light shadowmaps
are in GLRenderer::RenderEnvMaps() and GLRenderer::SetUniforms().
Use an orthographic projection of the scene along the direction of the light. This
will involve deciding on an orientation of your orthographic projection, and determining the
bounds of the scene so that the scene fits into your texture.
Add in the directional light shadowmap as well as any variables defining the orientation
as built-in uniforms, and pass them into your shaders as appropriate (bottom of
GLRenderer::SetUniforms).
In your Blinn-Phong shader, look up the appropriate ray in the shadowmap to determine
shadowing.
(x2) Add a function in your model file for drawing a new type of primitive. The following examples will definitely garner two bells; if you come up with your own primitive, you will be awarded one or two bells based on its coolness. Here are three examples:
Swept surfaces (this is worth 3 bells) -- given two curves, sweep one profile curve along the path defined by the other. These are also known as "generalized cylinders" when the profile curve is closed. This isn't quite as simple as it may first sound, as it requires the profile curve to change its orientation as it sweeps over the path curve. See this page for some uses of generalized cylinders. This document may be helpful as well, or see the parametric surfaces lecture from a previous offering of this class. You would most likely want to use the same type of curve files as the surface of revolution does. An example would be sweeping a circle along a 2d curve to generate a paper clip.
(x2) (Variable) Use some sort of procedural modeling (such as an L-system) to generate all or part of your character. Have parameters of the procedural modeler controllable by the user via control widgets. In a previous quarter, one group generated these awesome results.
(x3) Implement projected textures.
Projected textures are used to simulate things like a slide projector,
spotlight illumination, or casting shadows onto arbitrary geometry. Check
out this demo and read details
of the effect at glBase,
and SGI.
(x3) Another way to implement real-time shadows is by creating extra geometry in the scene to represent the shadows, based on the silhouettes of objects with respect to light sources. This is called shadow volumes. Shadow volumes can be more accurate than shadow maps, though they can be more resource-intensive, as well. Implement shadow volumes for the objects in your scene. For an extra bell, make it so that shadows work correctly even when your camera is located within a shadow volume.
(x3) One difficulty with hierarchical modeling using primitives is the difficulty of building "organic" shapes. It's difficult, for instance, to make a convincing looking human arm because you can't really show the bending of the skin and bulging of the muscle using cylinders and spheres. There has, however, been success in building organic shapes using metaballs. Implement your hierarchical model and "skin" it with metaballs. Hint: look up "marching cubes" and "marching tetrahedra" -- these are two commonly used algorithms for volume rendering. For an additional bell, the placement of the metaballs should depend on some sort of interactively controllable hierarchy. Try out a demo application.
Metaball Demos: These demos show the use of metaballs within the modeler framework. The first demo allows you to play around with three metaballs just to see how they interact with one another. The second demo shows an application of metaballs to create a twisting snake-like tube. Both these demos were created using the metaball implementation from a past CSE 457 student's project.
Disclaimer: please consult the course staff before spending any serious time on these. These are all quite difficult (I would say monstrous) and may qualify as impossible to finish in the given time. But they're cool.
Inverse kinematics
The hierarchical model that you created is controlled by forward kinematics; that is, the positions of the parts vary as a function of joint angles. More mathematically stated, the positions of the joints are computed as a function of the degrees of freedom (these DOFs are most often rotations). The problem of inverse kinematics is to determine the DOFs of a model to satisfy a set of positional constraints, subject to the DOF constraints of the model (a knee on a human model, for instance, should not bend backwards).
This is a significantly harder problem than forward kinematics. Aside from the complicated math involved, many inverse kinematics problems do not have unique solutions. Imagine a human model, with the feet constrained to the ground. Now we wish to place the hand, say, about five feet off the ground. We need to figure out the value of every joint angle in the body to achieve the desired pose. Clearly, there are an infinite number of solutions. Which one is "best"?
Now imagine that we wish to place the hand 15 feet off the ground. It's fairly unlikely that a realistic human model can do this with its feet still planted on the ground. But inverse kinematics must provide a good solution anyway. How is a good solution defined?
Your solver should be fully general and not rely on your specific model (although you can assume that the degrees of freedom are all rotational). Additionally, you should modify your user interface to allow interactive control of your model through the inverse kinematics solver. The solver should run quickly enough to respond to mouse movement.
If you're interested in implementing this, you will probably want to consult the CSE558 lecture notes.
View-dependent adaptive polygon meshes
The primitives that you are using in your model are all built from simple two-dimensional polygons. That's how almost everything is handled in the OpenGL graphics world. Everything ends up getting reduced to triangles.
Building a highly detailed polygonal model often requires millions of triangles. This can be a huge burden on the graphics hardware. One approach to alleviating this problem is to draw the model using varying levels of detail. In the modeler application, this can be done by specifying the quality (poor, low, medium, high). This unfortunately is a fairly hacky solution to a more general problem.
First, implement a method for controlling the level of detail of an arbitrary polygonal model. You will probably want to devise some way of representing the model in a file. Ideally, you should not need to load the entire file into memory if you're drawing a low-detail representation.
Now the question arises: how much detail do we need to make a visually nice image? This depends on a lot of factors. Farther objects can be drawn with fewer polygons, since they're smaller on screen. See Hugues Hoppe's work on View-dependent refinement of progressive meshes for some cool demos of this. Implement this or a similar method, making sure that your user interface supplies enough information to demonstrate the benefits of using your method. There are many other criteria to consider that you may want to use, such as lighting and shading (dark objects require less detail than light ones; objects with matte finishes require less detail than shiny objects).
Hierarchical models from polygon meshes
Many 3D models come in the form of static polygon meshes. That is, all the geometry is there, but there is no inherent hierarchy. These models may come from various sources, for instance 3D scans. Implement a system to easily give the model some sort of hierarchical structure. This may be through the user interface, or perhaps by fitting a model with a known hierarchical structure to the polygon mesh (see this for one way you might do this). If you choose to have a manual user interface, it should be very intuitive.
Through your implementation, you should be able to specify how the deformations at the joints should be done. On a model of a human, for instance, a bending elbow should result in the appropriate deformation of the mesh around the elbow (and, if you're really ambitious, some bulging in the biceps).