Modeler is a 3D modelling program that lets you design and edit 3D scenes, similar to tools such as Maya, 3DS Max, and Blender. In Modeler, you can set up and arrange 3D elements in a scene, as well as determine their appearance based on the scene's lighting conditions. In addition, in Project 4, you will extend this program to animate your scene, that is, specify how the scene's contents move and change over time.
You'll be implementing many of the core elements of this program, spanning the areas of geometric modelling (specifying the basic 3D shapes of objects in the scene), geometry processing (editing or refining the 3D shapes of objects to achieve certain goals), hierarchical scene modelling (specifying the relative arrangement of objects in the scene), and shading (specifying the appearance of the objects). There are five total requirements:
Geometric Modelling: Constructing Surfaces of Revolution
Geometry Processing: Mesh Smoothing
Hierarchical Modelling: Creating a Hierarchical Model
Shading: Blinn-Phong Point Light Shader
Shading: Additional Shaders
Download and get familiar with the Sample Solution to get a good idea of how everything works. See Program Usage for details.
If you'd like an overview of what shaders are, visit here.
Implement the features described below. The skeleton code has comments marked with // REQUIREMENT denoting
the locations at which you probably want to add some code.
Surface of Revolution
Write code to create a 3D surface of revolution mesh in the SurfaceOfRevolution::CreateMesh method in scene/components/surfaceofrevolution.cpp. Your shape must:
have appropriate positions for each vertex
have appropriate per-vertex normals
have appropriate texture coordinates for each vertex
have appropriate vertex connectivity (triangle faces)
use the "subdivisions" argument to determine how many bands your surface is sliced into.
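The steps above can be sketched as follows. This is a minimal illustration, not Modeler's actual API: the `Vec3`/`Vec2`/`MeshData` structs are hypothetical stand-ins, the profile is assumed to lie in the xy-plane with x >= 0 and to be revolved around the y-axis, and the profile is assumed to have at least two points.

```cpp
#include <cmath>
#include <cstdint>
#include <vector>

// Hypothetical plain structs; the real Modeler types differ.
struct Vec3 { float x, y, z; };
struct Vec2 { float u, v; };

struct MeshData {
    std::vector<Vec3> positions;
    std::vector<Vec3> normals;
    std::vector<Vec2> uvs;
    std::vector<uint32_t> indices;  // triangle list, 3 indices per face
};

// Revolve a profile curve lying in the xy-plane (x >= 0) around the y-axis.
// `subdivisions` is the number of angular bands around the axis.
MeshData Revolve(const std::vector<Vec2>& profile, int subdivisions) {
    MeshData mesh;
    const int rings = static_cast<int>(profile.size());
    const float twoPi = 6.28318530718f;

    // One extra column of vertices duplicates the seam so UVs wrap cleanly.
    for (int j = 0; j <= subdivisions; ++j) {
        float theta = twoPi * j / subdivisions;
        float c = std::cos(theta), s = std::sin(theta);
        for (int i = 0; i < rings; ++i) {
            float r = profile[i].u, y = profile[i].v;
            mesh.positions.push_back({r * c, y, r * s});

            // 2D curve tangent -> 2D normal (pointing away from the axis),
            // then rotate it around the axis the same way as the position.
            int i0 = (i > 0) ? i - 1 : i;
            int i1 = (i < rings - 1) ? i + 1 : i;
            float tx = profile[i1].u - profile[i0].u;
            float ty = profile[i1].v - profile[i0].v;
            float len = std::sqrt(tx * tx + ty * ty);
            float nx = ty / len, ny = -tx / len;  // sign depends on winding
            mesh.normals.push_back({nx * c, ny, nx * s});

            // u wraps around the axis, v runs along the profile.
            mesh.uvs.push_back({float(j) / subdivisions,
                                float(i) / (rings - 1)});
        }
    }
    // Two triangles per quad between adjacent bands.
    for (int j = 0; j < subdivisions; ++j)
        for (int i = 0; i < rings - 1; ++i) {
            uint32_t a = j * rings + i, b = a + 1;
            uint32_t c2 = (j + 1) * rings + i, d = c2 + 1;
            mesh.indices.insert(mesh.indices.end(), {a, c2, b, b, c2, d});
        }
    return mesh;
}
```

With a two-point vertical profile this produces an open cylinder: (subdivisions + 1) * 2 vertices, with normals pointing radially outward.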
Add functionality, including a simple data structure, for filtering a mesh. For each vertex of a mesh, take a weighted sum of the vertex and its neighbors to produce a new mesh with the same connectivity as the original, but with updated vertex positions. Vertices are neighbors if they share an edge. The filter weights around a given vertex are 1 for the vertex itself and a/N for each neighboring vertex; every weight is then divided by the sum of all the weights so that they sum to one. Here "a" is a parameter that controls smoothing or sharpening (typically in the range (-1/2, 1/2)), and N is the number of neighboring vertices (also called the "valence"), which varies from vertex to vertex. As with image processing, you should not filter in place on the input mesh; instead, read vertex positions from the input mesh and compute new positions to put in the output mesh. You'll add this functionality in MeshProcessing::FilterMesh in the file meshprocessing.cpp.
Note that you'll need to recalculate the per-vertex normals!
To verify your implementation, we suggest loading the Spikey or Bunny mesh, going to [Assets->Mesh Processing->Filter Selected] to apply smoothing, and then checking whether the result matches the solution. To see significant and interesting changes, use more iterations (e.g., 10) to smooth your mesh.
To load the Spikey or Bunny mesh, first go to [SceneObject->Create 3D Object->Mesh] to create a new 3D mesh, which is a cube by default. Then set the TriangleMesh::Mesh property to Spikey or Bunny (in the Inspector on the right).
Your artifact for this assignment will involve creating a hierarchical model. This model must have at least two levels of branching.
There are two parts to this requirement:
Create the model itself. It must have at least two levels of branching.
Implement a UI to control the relevant joint transformations of your model.
As part of the project submission, you need to save your model as YOUR_NETID.yaml, put it in the folder "Editor/assets/model" (you need to create this), and commit it to your repository. We will use that model for grading.
We recommend writing down a tree diagram of your model to help you figure out what it will be, and to practice thinking about empty nodes, centers of rotation, and so on. Note, however, that the tree diagram is not a requirement.
The UI requirement gives you an easy way to show off your model and makes it much easier to animate. It will also help you learn how to add UI elements to the program.
Below we offer more examples and information to help you figure out how this works.
Point Light Blinn-Phong Shader
Add support for the scene's point lights by editing the files Editor/assets/blinn-phong.frag and Editor/assets/blinn-phong.vert. You need to include quadratic distance attenuation. Hint: you may not have to edit one of these files. We also include a few extra shaders, including Editor/assets/texture.frag and Editor/assets/texture.vert, to provide basic examples of how to do things like sample from textures. You do not need to worry about these unless you're doing bells and whistles.
ShaderProgram::BuiltinUniforms in the file Engine/src/resource/shaderprogram.h contains a list of uniforms supplied by default to your shader, as long as you declare them properly in your shader. GLRenderer::SetUniforms in the file Engine/src/opengl/glrenderer.cpp is where they are actually passed in.
The default scene doesn't include a point light source, so you might see no difference after completing your implementation. To help you test correctness, we provide another scene that includes a point light, located at assets/scene/point_light_scene.pyml. Just open the scene in both your program and the solution to see if the rendering results are the same.
Create an Additional Shader
Create another shader (or shaders) from the list below worth at least 3 whistles (or 1.5 bells). This is required and will not count as extra credit; however, any additional bells or whistles will be considered extra credit. Consult OpenGL Shading Language ("the orange book") for some excellent tips on shaders. Ask your TAs if you would like to implement a shader that isn't listed below. Credit for any shader may be adjusted depending on the quality of the implementation, but any reasonable effort should at least earn you one of the required whistles.
Note: you must keep your point light Blinn-Phong shader separate so we can grade it separately. The additional shaders (not the Blinn-Phong shader) should be named appropriately (e.g., alpha-test[.vert,.frag]) and put in the Editor/assets folder.
Moreover, we've provided skeleton code for several shaders, including alpha-test.frag, blinn-phong-spotlight.frag, and cartoon.frag. You may simply edit these skeleton shaders. To test them, we've also provided test scenes in the folder assets/scene.
The Hierarchy tab in the left pane represents all the objects in the current Scene. A child object inherits the transformations applied to the parent object, like in a scene graph. Parent-child relationships can be changed by simply dragging the objects around in the pane. You may also find creating Empty objects as parents to be useful in building your model. Double-click an object to change its name.
The Assets tab in the left pane represents things like Textures, Shaders, Materials, and Meshes used by the Scene. Materials use Shaders, and shaders use .vert and .frag files. You can either edit them in the right-side Inspector, or choose to create a new asset. Together, the Assets and Scene Graph form a "Scene", and can be saved out or loaded from disk.
Selecting an asset or an object will display their properties on the right side in the Inspector. Here you can change their properties, assign different textures, materials, etc. For scene objects, you can also hide them from the hierarchy by unchecking the box by their name.
Modeler uses a component-based system for scene objects. Every object has a "Transform" component that represents its Translation, Rotation, and Scale in 3D space. Most rendered objects will have a "Geometry" that defines the shape of the object, and a "Mesh Renderer" that uses a "Material" asset to define how to render that shape. Lights will have an additional "Light" component that defines the light properties.
The Console at the bottom is mostly for debugging purposes; at any time in your code, you can call Debug::Log.WriteLine to print to it. If you hide the Inspector or any of the other panels in the program, right-click on the toolbar to show them again.
The Scene view in the middle is a rendering of your Scene Graph. You can change whether it's rendered as points, wireframe, or fully shaded via the menu bar along the top of the scene view. If you are having trouble with the orientation of the Perspective view, try switching to an Orthographic view.
RMB: Orbits the camera
Alt + RMB: Rolls the camera
Scroll Wheel / Alt + LMB: Zooms the camera
MMB (Scroll Click): Pans the camera
F: Moves the camera back to center on the selected object
Space: Splits the view into four separate ones
LMB: Scene Manipulation depending on Manipulation Mode
Select: clicking selects the object in the scene
Translate: after clicking on an object, dragging the individual axes will move the object
Rotate: after clicking on an object, dragging the rings will rotate the object
Scale: after clicking on an object, dragging the bars will scale the object
Local / World: Dictates whether the manipulation happens in the object's local space or world space
Q: switches to Select mode
W: switches to Translate mode
E: switches to Rotate mode
R: switches to Scale mode
Delete / Fn + Backspace: deletes the selected object
Loading a Mesh (.obj/.ply files):
The high-level idea is to import the mesh as an asset, and create a mesh whose TriangleMesh::Mesh property is the mesh asset you just imported.
Go to Assets->Import->Mesh and select the mesh file you want to import from the assets folder
Go to SceneObject->Create 3D Object->Mesh to create a new 3D mesh.
The default created mesh is a cube, so you need to change the TriangleMesh::Mesh property (in the Inspector on the right when the mesh is selected) to the mesh asset you just imported.
Creating a new shader:
Go to Assets->Create->Shader Program to create a new shader program
Select your new Shader Program from the Assets pane and edit the properties to point to your .vert and .frag shader files.
Go to Assets->Create->Material to create a new Material
Edit the properties of the new Material to use your new Shader Program asset. Now you can edit the inputs that go into the shader.
If you change your shaders, you will need to go back into the Shader Program and re-point them towards the edited file.
The Modeler codebase is quite substantial (not so much a skeleton this time around). It's a good idea to get an understanding of what's going on.
Modeler has two major components: the Engine and the UI. For the requirements, you will most likely only be concerned with the Engine unless you attempt a bell or whistle that goes above and beyond what is currently supported. Modeler loads one Scene at a time. Each Scene has an Asset Manager that handles loading all the Assets belonging to the Scene. It also owns all the Scene Objects in the scene, which are stored in a map using unique identifiers.
A Scene Object contains a mixture of Components that define some behaviour. For instance, a Transform Component, which defines the Scene Object's transformations, plus a Point Light Component, which defines light properties, makes a Point Light. Components are built from Properties that can communicate responsively with the UI and can be serialized into the file format. A Renderer takes a Scene, does a depth-first traversal of the Scene Objects that comprise the Scene Graph, and renders each component that is renderable. It has its own Resource Manager that handles caching GPU versions of assets.
In OpenGL, all scenes are made of primitives like points, lines, and triangles. Most 3D objects are built from a mesh of triangles. In this project, you will implement a surface of revolution by creating a triangle mesh. For each mesh, you define a list of vertices (the points that make up the primitive) with normals and texture coordinates, and an array of indices specifying triangles. This is later used by the OpenGL Renderer through the method glDrawElements. See opengl/glmesh.cpp.
Surface normals are perpendicular to the plane that's tangent to a surface at a given vertex. Surface normals are used for lighting calculations, because they help determine how light reflects off of a surface.
In OpenGL, we often want to approximate smooth shapes like spheres and cylinders using only triangles. One way to make the lighting look smooth is to use the normals from the shape we're trying to approximate, rather than just making them perpendicular to the polygons we draw. This means we calculate the normals for each vertex (per-vertex normals), rather than each face (per-face) normals. Normals are supplied to OpenGL in a giant array in the same order the vertex positions array is built. Shaders allow us to get even smoother lighting, calculating the normals at each pixel. You can compare these methods below:
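A common way to compute per-vertex normals for an arbitrary triangle mesh (and what you'll need after filtering a mesh) is to accumulate each incident face's cross-product normal at every vertex and then normalize. The structs here are hypothetical stand-ins for Modeler's actual types; note the un-normalized cross product conveniently weights each face's contribution by its area.

```cpp
#include <cmath>
#include <cstddef>
#include <cstdint>
#include <vector>

struct Vec3 { float x, y, z; };  // hypothetical stand-in

static Vec3 Sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3 Cross(Vec3 a, Vec3 b) {
    return {a.y * b.z - a.z * b.y,
            a.z * b.x - a.x * b.z,
            a.x * b.y - a.y * b.x};
}

// Per-vertex normals: sum the (area-weighted) normal of every incident face,
// then normalize the accumulated vector at each vertex.
std::vector<Vec3> ComputeVertexNormals(const std::vector<Vec3>& pos,
                                       const std::vector<uint32_t>& tris) {
    std::vector<Vec3> n(pos.size(), {0, 0, 0});
    for (std::size_t t = 0; t < tris.size(); t += 3) {
        uint32_t a = tris[t], b = tris[t + 1], c = tris[t + 2];
        Vec3 fn = Cross(Sub(pos[b], pos[a]), Sub(pos[c], pos[a]));
        const uint32_t idx[3] = {a, b, c};
        for (uint32_t v : idx) {
            n[v].x += fn.x; n[v].y += fn.y; n[v].z += fn.z;
        }
    }
    for (Vec3& v : n) {
        float len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
        if (len > 0) { v.x /= len; v.y /= len; v.z /= len; }
    }
    return n;
}
```

The resulting normals go into the normal array in the same vertex order as the position array.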
Texture mapping allows you to "wrap" images around your model by mapping points on an image (called a texture) to vertices on your model. For each vertex, you indicate the coordinate that vertex should apply to as a 2D pair (U, V) or (S, T) where U or S is the X-coordinate and V or T is the Y-coordinate of the point on the texture that should line up with the vertex. UVs are passed as a giant array in the same manner normals and vertex positions are:
Using Textures In Your Model
When you want to use a texture, you'll need to do the following:
Import a texture into your scene
Create a Shader Program that utilizes shaders that sample from textures
Create a Material that uses that Shader Program, and set the textures
Verifying Your Implementation
To verify the correctness of your implementation, follow these steps:
Go to [SceneObject->Create 3D Object->Surface of Revolution]; you will then see a mesh called "Surface of Revolution 1" in the scene hierarchy panel.
Select the object you just created, and in the Inspector panel on the right change the curve property to "assets/curve/sample_curve_1.apts" or "assets/curve/sample_curve_2.apts". Check whether the output matches the solution.
(Optional) You can also design your own curve and then rebuild a mesh accordingly. In the solution program (not in your program), open the curve editor [File->Open Curve Editor], follow the instructions to create your curve, and then save it by clicking "Save Dense Samples". Finally, go back to step 1 and select the curve file you just saved to create the mesh.
For the artifact, you will create a Hierarchical Model: a hierarchy of nodes combining Empty nodes and Shape nodes. In Animator, you will end up animating your model (or a new one you create) by manipulating the set of transforms and other properties on all these nodes over time. Hide any properties you do not want exposed with the Inspector's "Edit Properties" on each node.
While it does not have to be a masterpiece, we do require that it has at least two levels of branching. What this means is that if you have a torso and attach two arms to it, this is one level of branching: the torso splits into each arm. If each arm then has three fingers, that is two levels of branching, since the arm splits into fingers. Note that if you only have one finger, then that does not add additional branching! Below we show two bad examples (left, middle) and one good example (right) to help you further understand the "at least two levels of branching" constraint.
Writing down a tree diagram that illustrates the model hierarchy and
describes the transforms that each node has can be very helpful in designing a model. Here is a very barebones example of the tree diagram of the simplest robot arm, consisting of two long boxes with a single joint in the center. Note that this example only has one level of branching, which means it does not have the required amount of branching.
Note you do not have to submit a diagram of your model. This is just a recommendation, not a requirement.
A tree diagram for this model would look like this:
When demoing your model, or when keyframing it for your animation, you don't want to be clicking down into
different hierarchy nodes all the time in order to reach the right properties. You could instead have a
master set of sliders that you can easily manipulate.
For example, in the simple hierarchy above, clicking down into the "A2 Container" node each time you want
to change the joint angle is somewhat tedious (and will only get harder with more complex hierarchies).
For this requirement, you will add some UI elements to control the variables in your hierarchical model.
The easiest way to do this is to create an empty node at the top level of your hierarchy to be your root node, which will serve as a container for your Hierarchy UI. Then you can implement a UI component and attach it to the root node to help you control the model, without tediously going down the tree to apply the transforms.
We've implemented most of the Model UI code for you. As shown in the figure below, you just have to follow these three steps:
Select the root node of your model
Go to [SceneObject->Add Customized Property to selected]. If it works successfully, you will see a slider control with the property name "CustomProperty::Angle" in the Inspector panel on the right (see figure). If you drag the slider and observe the application output (in the Qt Creator IDE), you will see angle values displayed in the console output.
Modify the code in CustomProp::OnAngleChanged(double angle) in Engine\src\scene\components\cusomprop.cpp. We provide code snippets in the function to guide you in parsing the scene, finding components, and changing values. It's also very easy to add more control properties in the same class. Take a closer look at the commented code in the CustomProp class to see how it works.
Another cool part: after you've followed the above steps to add the model UI, you can save and reload the model, and the customized UI component will be reloaded correctly. We also offer two example model files for reference: assets/robotarm.yaml, which contains the robot arm model only, and assets/robotarm_with_ui.yaml, which contains the model plus a customized UI component "RobotArmProp" attached to the model's root node.
A shader is a program that controls the behavior of a piece of the graphics pipeline on your graphics card.
Vertex shaders are run once for each vertex in your model. They transform it into device space (by applying the modelview and projection matrices), and determine what each vertex's properties are.
Fragment (or pixel) shaders are run once for every pixel to determine its color.
Geometry shaders can turn a single point into multiple points. They are useful for advanced modeling, e.g., refining a coarse triangle mesh into a smooth surface with many triangles. They can also be used to visualize vertex normals. But they are not part of this requirement.
Shaders determine how the scene lighting affects the coloring of 3D surfaces. There are two basic kinds of lights:
Point Light - a light that is emitted from a point in world space. The intensity of the light is attenuated (reduced) based on how far it travels from this point. In the physical world, the intensity of a point light decreases with the square of the distance. We can model this with quadratic distance attenuation: dividing the intensity by some function a*r^2+b*r+c, where r is the distance from the light source and a, b, and c are chosen arbitrarily.
Directional Light - a light that always hits a surface from a certain direction, no matter where it is. It has no attenuation.
A shading model determines how the final color of a surface is calculated from a scene's light sources and the object's material. We have provided a shader that uses the Blinn-Phong shading model for scenes with directional lights. See lecture notes for details on the Blinn-Phong shading model.
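The actual work happens in GLSL, but the Blinn-Phong math with quadratic attenuation can be sketched as scalar C++ for intuition. Everything here is a hypothetical stand-in: single-channel intensity instead of RGB, plain structs instead of GLSL vectors, and material parameters kd/ks/shininess passed explicitly.

```cpp
#include <algorithm>
#include <cmath>

struct Vec3 { float x, y, z; };  // hypothetical stand-in for a GLSL vec3

static Vec3 Add(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
static Vec3 Sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3 Scale(Vec3 a, float s) { return {a.x * s, a.y * s, a.z * s}; }
static float Dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
static float Length(Vec3 a) { return std::sqrt(Dot(a, a)); }
static Vec3 Normalize(Vec3 a) { return Scale(a, 1.0f / Length(a)); }

// Blinn-Phong intensity for one point light with quadratic attenuation
// 1 / (a*r^2 + b*r + c). All positions/vectors are in the same space,
// and n is assumed to be unit length.
float BlinnPhong(Vec3 p, Vec3 n, Vec3 eye, Vec3 lightPos, float lightIntensity,
                 float kd, float ks, float shininess,
                 float a, float b, float c) {
    Vec3 toLight = Sub(lightPos, p);
    float r = Length(toLight);
    Vec3 l = Scale(toLight, 1.0f / r);           // direction to the light
    Vec3 v = Normalize(Sub(eye, p));             // direction to the viewer
    Vec3 h = Normalize(Add(l, v));               // half vector
    float atten = 1.0f / (a * r * r + b * r + c);
    float diff = kd * std::max(0.0f, Dot(n, l));
    float spec = ks * std::pow(std::max(0.0f, Dot(n, h)), shininess);
    return lightIntensity * atten * (diff + spec);
}
```

In the fragment shader, the same computation runs per pixel and per light, with r taken from the distance between the interpolated world-space position and the light's position uniform.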
Note: after building your program via Qt Creator, you will have two copies of the shader files, one in the project folder and one in the binary folder. The shaders are copied from the project folder to the binary folder when the program opens. So when editing shader files, edit the ones in the project folder, not the binary folder, since the latter will be overwritten when the program is launched.
To verify your implementation, you need a point light source in your scene. We provide an example scene including a point light, located at assets/scene/point_light_scene.pyml. Just open the scene in both your program and the solution to see if the rendering results are the same.
These are additional shader ideas that you can create. You are required to create another shader(s) worth at least 3 whistles (or 1.5 bells). Additional bells or whistles are extra credit.
You can use the sample solution Modeler to develop some of these shaders, but others require texture maps to be provided. We have provided shader_textured.frag and shader_textured.vert as a reference for how to include texture data in your image.
For your reference, the solution provides a very cute shader called the cartoon shader (it is worth 1 bell if you implement it!). Just open the solution, select a mesh, and change the material to "Toon Material". Check it out to see what it looks like.
Alpha Test Shader
Some geometry has complex silhouettes but flat, planar surfaces. Rather than creating numerous triangles, we can use an alpha threshold or color key to discard certain pixels on a texture. This technique is especially useful for foliage.
We have provided skeleton code (assets/alpha-test.frag) and a test scene (assets/scene/alpha-test-scene.yaml). In our test scene you will need to use an alpha threshold to keep or discard pixels. See the sample solution to get an idea of how the result should look.
Spot Light Shader
Add an additional shader to support spot light sources.
We have provided skeleton code (assets/blinn-phong-spotlight.frag) and a test scene (assets/scene/spot-light-scene.yaml). Please compare your implementation against the sample solution.
Cartoon Shader
Create a shader that produces a cartoon effect by drawing a limited set of colors and a simple darkening of silhouettes for curved objects, based on the normal and viewing direction at a pixel. This per-pixel silhouette-darkening approach will work well in some cases around curved surfaces, but not all. Additional credit will be given based on how well the silhouettes are done and how good the cartoon effect looks.
We have provided skeleton code (assets/cartoon.frag) and a test scene (assets/scene/cartoon_bunny_scenee.yaml). Please compare your implementation against the sample solution.
Fresnel Shader
Create a shader, and sliders to control it, that uses the Schlick approximation to approximate the contribution of the Fresnel factor to the specular reflection of light.
Vertex Deformation Shader
Vertex shaders, instead of transformation matrices, can be used to morph and deform geometry in complex ways, and they can be very efficient since the calculations run on the GPU. See here, here, and here for examples of interesting geometry deformations done with vertex shaders, and see here for an even more impressive example: the swimming animation is done entirely by manipulating vertex positions in the vertex shader. Add at least one slider that deforms geometry in a useful way by changing vertex positions (and normals, if relevant) within a vertex shader.
Tessellated Procedural Shader
Make a shader that produces an interesting, repeating pattern, such as a brick pattern, without using a texture.
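The core of such a shader is a pure function of the interpolated UV coordinates. As a sketch (written as scalar C++; the function names, parameters, and return convention are hypothetical), here is a brick mask that a fragment shader could use to blend between brick and mortar colors:

```cpp
#include <cmath>

// Procedural brick mask from UV coordinates, no texture needed. Returns 1
// inside a brick and 0 in the mortar. Odd rows are offset by half a brick,
// exactly as a fragment shader would compute from interpolated UVs.
// Assumes non-negative u/v and positive brick dimensions.
float BrickMask(float u, float v, float brickW, float brickH, float mortar) {
    float row = std::floor(v / brickH);
    // Offset every other row by half a brick width.
    if (static_cast<long>(row) % 2 != 0) u += 0.5f * brickW;
    float fu = u / brickW - std::floor(u / brickW);  // fraction within brick
    float fv = v / brickH - std::floor(v / brickH);
    bool inBrick = fu > mortar && fv > mortar;
    return inBrick ? 1.0f : 0.0f;
}
```

In GLSL the same logic maps onto floor/fract/step, and the hard 0/1 edge can be softened with smoothstep to reduce aliasing.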
Normal Mapping Shader
This shader uses a texture to perturb the surface normals of a surface to create the illusion of tiny bumps, without introducing additional geometry. Along with the normals, you'll also want your vertex shader to take in tangent and binormal vectors.
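The key operation is rotating the sampled normal from tangent space into the shading space with the TBN (tangent, binormal, normal) basis. A scalar C++ sketch of that transform (hypothetical types and names; in GLSL this is a mat3 multiply):

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };  // hypothetical stand-in for a GLSL vec3

static Vec3 Normalize(Vec3 a) {
    float len = std::sqrt(a.x * a.x + a.y * a.y + a.z * a.z);
    return {a.x / len, a.y / len, a.z / len};
}

// Transform a normal-map sample from tangent space into the shading space
// using the TBN basis (tangent, binormal, interpolated vertex normal).
// The sample is assumed to already be remapped from [0,1] texel values
// to [-1,1] (i.e., n = 2*texel - 1).
Vec3 PerturbNormal(Vec3 t, Vec3 b, Vec3 n, Vec3 sample) {
    return Normalize({t.x * sample.x + b.x * sample.y + n.x * sample.z,
                      t.y * sample.x + b.y * sample.y + n.y * sample.z,
                      t.z * sample.x + b.z * sample.y + n.z * sample.z});
}
```

A flat normal-map texel (0.5, 0.5, 1) remaps to the sample (0, 0, 1) and leaves the surface normal unchanged; any other texel tilts it within the tangent plane.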
Diffraction Shader
Create a shader that produces a diffraction effect as you move around the object.
Anisotropic Specular Shader
Create a shader that produces anisotropic specular highlighting, creating a shiny metal appearance. Additionally, add sliders to control the magnitude in two perpendicular directions.
Cloud / Noise Shader
Create a shader that uses noise functions (like Perlin noise) to generate clouds. You may not use textures for this shader. Credit depends on the realism of the clouds.
Using Shaders In Your Model
Shader files are loaded, compiled, and linked by ShaderProgram objects. If you want to add a shader:
Go to Assets->Create->Shader Program to create a Shader Program.
Find the new shader in the Assets pane and set the Vertex/Fragment shaders to point to your shader files.
Similarly create a new material and set it to use the Shader Program you created.
Tip: if you have an error in your shader code, you do not have to restart Modeler. Instead, fix your shader, then set the Shader Program to point to the same shader file again. You can also reload your assets after any changes by clicking Assets->Reload Assets.
Important: like the rest of your Modeler binary, your shaders must work on the lab machines! Please test them on the lab machines before the due date.
Please follow the general instructions here. More details below:
Additional shaders submission
Make sure you have included the additional shaders in your submission. They (not including the Blinn-Phong shader) should be named appropriately (e.g., alpha-test[.vert,.frag]) and put in the Editor/assets folder. Again, you don't need to rename the Blinn-Phong shader, just the additional shaders you implemented.
For the artifact, you will create a Hierarchical Model using Modeler.
As described in Hierarchical Modelling, this model must have at least two levels of branching. Also, please DO NOT download meshes from the internet as part of your model. Each person must submit their own Hierarchical Model!
In order to get credit for the artifact, we ask that you and your partner (if you have one) both save your models as NETID1.yaml and NETID2.yaml, put them in "Editor/assets/model" and push them to the Modeler repository so we can make sure it satisfies the requirements.
Important: Use File->Save Scene As to save your progress as a .yaml file. We are still working on improving and thoroughly testing the Modeler program, and there is currently no undo functionality, so save frequently!
After the project deadline, create and turn in a short video screen capture (.MP4 format, no longer than 30 seconds) showcasing your hierarchical model; students will vote on these to determine the winners. Maybe move the camera around to get some different angles, or move the transform controls to show the hierarchy in action as you move it to a different pose. You can use any video capture software you'd like, although we ask that you please submit the video in MP4 format along with a screenshot to go with it. One such program is Open Broadcaster: you just need to add a Source (Display or Window Capture), and hit Start Recording after changing some Output settings like where to save the video and what format to use.
Bells and Whistles
Bells and whistles are extra extensions that are not required, and will be worth extra credit. You are also encouraged to come up with your own extensions for the project. Run your ideas by the TAs or Instructor, and we'll let you know if you'll be awarded extra credit for them. If you do decide to do something out of the ordinary (that is not listed here), be sure to mention it in a readme.txt when you submit the project.
Come up with another whistle and implement it. A whistle is something that extends the use of one of the things you are already doing. It is part of the basic model construction, but extended or cloned and modified in an interesting way. Ask your TAs to make sure this whistle is valid.
Implement Loop Subdivision for closed, watertight meshes. The skeleton for this bell is already set up in meshprocessing.cpp. For an extra whistle, detect boundary edges on non-watertight meshes and subdivide those appropriately. For an additional bell, add the ability to label a vertex or edge as extraordinary (i.e. preserved on the subdivided mesh) and change the subdivision weights accordingly.
Render a flat mirror in your scene. As you may already know, OpenGL has no
built-in reflection capabilities. To simulate a mirror, you'll want to
reflect the world about the mirror's plane and then draw the reflected world,
before doing the regular scene drawing pass. Use the stencil buffer to make
sure that the reflected geometry is clipped inside the boundaries of the mirror.
The stencil buffer is similar to a Z buffer and is used to restrict drawing to certain portions of
the screen. See Scott Schaefer's
site for more information. In addition, the NeHe game development site has
a detailed tutorial.
Build a complex shape as a set of polygonal faces, using triangles (either the provided primitive or straight OpenGL triangles) to render it. Examples of things that don't count as complex: a pentagon, a square, a circle. Examples of what does count: a dodecahedron, a 2D function plot (z = sin(x^2 + y)), etc. Note that using the dodecahedron primitive (or other primitives apart from triangles) does not meet this requirement.
Implement a smooth curve functionality. Examples of smooth curves are here. These curves are a great way to lead into swept surfaces (see below). Functional curves will need to be demonstrated in some way. One great example would be to draw some polynomial across a curve that you define. Students who implement swept surfaces will not be given a bell for smooth curves. That bell will be included in the swept surfaces bell. Smooth curves will be an important part of the animator project, so this will give you a leg up on that.
Implement one or more non-linear transformations applied to a triangle mesh. This entails creating at least one function that is applied across a mesh with specified parameters. For example, you could generate a triangulated sphere and apply a function to a sphere at a specified point that modifies the mesh based on the distance of each point from a given axis or origin. Credit varies depending on the complexity of the transformation(s) and/or whether you provide user controls (e.g., sliders) to modify parameters.
Heightfields are great ways to build complicated looking maps and terrains pretty easily. Implement a heightfield to generate terrain in an interesting way. You might try generating fractals, or loading a heightfield from an image (i.e., allowing the user to design the height of the terrain by painting the image in an image editor and importing it).
Add a lens flare. This effect has components both in screen space and world space.
For full credit, your lens flare should have at least 5 flare
"drops", and the transparency of the drops should change depending on
how far the light source is from the center of the screen. You do not
have to handle the case where the light source is occluded by other geometry
(but this is worth an extra whistle).
Add a function in your model file for drawing a new type of primitive. The following examples will definitely garner two bells; if you come up with your own primitive, you will be awarded one or two bells based on its coolness. Here are three examples:
Swept surfaces (this is worth 3 bells) -- given two curves, sweep one profile curve along the path defined by the other. These are also known as "generalized cylinders" when the profile curve is closed. This isn't quite as simple as it may first sound, as it requires the profile curve to change its orientation as it sweeps over the path curve. See this page for some uses of generalized cylinders. This document may be helpful as well, or see the parametric surfaces lecture from a previous offering of this class. You would most likely want to use the same type of curve files as the surface of revolution does. An example would be sweeping a circle along a 2d curve to generate a paper clip.
(Variable) Use some sort of procedural modeling (such as an L-system) to generate all or part of your character. Have parameters of the procedural modeler controllable by the user via control widgets. In a previous quarter, one group generated these awesome results.
Implement projected textures.
Projected textures are used to simulate things like a slide projector, spotlight illumination, or casting shadows onto arbitrary geometry. Check out this demo and read details of the effect at glBase.
Another way to implement real-time shadows is by creating extra geometry in the scene to represent the shadows, based on the silhouettes of objects with respect to light sources. This is called shadow volumes. Shadow volumes can be more accurate than shadow maps, though they can be more resource-intensive, as well. Implement shadow volumes for the objects in your scene. For an extra bell, make it so that shadows work correctly even when your camera is located within a shadow volume.
One difficulty with hierarchical modeling using primitives is building "organic" shapes. It's difficult, for instance, to make a convincing-looking human arm because you can't really show the bending of the skin and bulging of the muscle using cylinders and spheres. There has, however, been success in building organic shapes using metaballs. Implement your hierarchical model and "skin" it with metaballs. Hint: look up "marching cubes" and "marching tetrahedra"; these are two commonly used algorithms for volume rendering. For an additional bell, the placement of the metaballs should depend on some sort of interactively controllable hierarchy. Try out a demo application.
Metaball Demos: These demos show the use of metaballs within the modeler framework. The first demo allows you to play around with three metaballs just to see how they interact with one another. The second demo shows an application of metaballs to create a twisting snake-like tube. Both these demos were created using the metaball implementation from a past CSE 457 student's project.
Disclaimer: please consult the course staff before spending any serious time on these. These are all quite difficult (I would say monstrous) and may qualify as impossible to finish in the given time. But they're cool.
The hierarchical model that you created is controlled by forward kinematics; that is, the positions of the parts vary as a function of joint angles. More mathematically stated, the positions of the joints are computed as a function of the degrees of freedom (these DOFs are most often rotations). The problem of inverse kinematics is to determine the DOFs of a model to satisfy a set of positional constraints, subject to the DOF constraints of the model (a knee on a human model, for instance, should not bend backwards).
This is a significantly harder problem than forward kinematics. Aside from the complicated math involved, many inverse kinematics problems do not have unique solutions. Imagine a human model, with the feet constrained to the ground. Now we wish to place the hand, say, about five feet off the ground. We need to figure out the value of every joint angle in the body to achieve the desired pose. Clearly, there are an infinite number of solutions. Which one is "best"?
Now imagine that we wish to place the hand 15 feet off the ground. It's fairly unlikely that a realistic human model can do this with its feet still planted on the ground. But inverse kinematics must provide a good solution anyway. How is a good solution defined?
Your solver should be fully general and not rely on your specific model (although you can assume that the degrees of freedom are all rotational). Additionally, you should modify your user interface to allow interactive control of your model through the inverse kinematics solver. The solver should run quickly enough to respond to mouse movement.
If you're interested in implementing this, you will probably want to consult the CSE558 lecture notes.
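To make the problem concrete, here is a sketch of one simple iterative approach, cyclic coordinate descent (CCD), for a planar chain of equal-length links (a toy setup of our own; real solvers must handle 3D rotations and joint limits). Each joint in turn rotates so the end effector swings toward the target, and the sweep repeats until convergence:

```cpp
#include <cmath>
#include <vector>

struct Point { float x, y; };

// End-effector position of a 2D chain whose joint angles are each
// relative to the parent link; all links have length `len`.
Point EndEffector(const std::vector<float>& angles, float len) {
    Point p{0, 0};
    float a = 0;
    for (float ang : angles) {
        a += ang;
        p.x += len * std::cos(a);
        p.y += len * std::sin(a);
    }
    return p;
}

// Cyclic coordinate descent: sweep from the last joint to the first,
// rotating each joint so the effector aims at the target.
void SolveCCD(std::vector<float>& angles, float len,
              Point target, int iters) {
    for (int it = 0; it < iters; ++it)
        for (int j = int(angles.size()) - 1; j >= 0; --j) {
            Point pj{0, 0};           // position of joint j
            float a = 0;
            for (int k = 0; k < j; ++k) {
                a += angles[k];
                pj.x += len * std::cos(a);
                pj.y += len * std::sin(a);
            }
            Point e = EndEffector(angles, len);
            float a1 = std::atan2(e.y - pj.y, e.x - pj.x);
            float a2 = std::atan2(target.y - pj.y, target.x - pj.x);
            angles[j] += a2 - a1;     // aim effector at target
        }
}
```

CCD illustrates the non-uniqueness problem directly: which solution it finds depends on the initial pose, and unreachable targets leave the chain stretched toward the goal.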
View-dependent adaptive polygon meshes
The primitives that you are using in your model are all built from simple two-dimensional polygons. That's how almost everything is handled in the OpenGL graphics world: everything ends up getting reduced to triangles.
Building a highly detailed polygonal model often requires millions of triangles. This can be a huge burden on the graphics hardware. One approach to alleviating this problem is to draw the model using varying levels of detail. In the modeler application, this can be done by specifying the quality (poor, low, medium, high). This unfortunately is a fairly hacky solution to a more general problem.
First, implement a method for controlling the level of detail of an arbitrary polygonal model. You will probably want to devise some way of representing the model in a file. Ideally, you should not need to load the entire file into memory if you're drawing a low-detail representation.
Now the question arises: how much detail do we need to make a visually nice image? This depends on a lot of factors. Farther objects can be drawn with fewer polygons, since they're smaller on screen. See Hugues Hoppe's work on View-dependent refinement of progressive meshes for some cool demos of this. Implement this or a similar method, making sure that your user interface supplies enough information to demonstrate the benefits of using your method. There are many other criteria to consider that you may want to use, such as lighting and shading (dark objects require less detail than light ones; objects with matte finishes require less detail than shiny objects).
Hierarchical models from polygon meshes
Many 3D models come in the form of static polygon meshes. That is, all the geometry is there, but there is no inherent hierarchy. These models may come from various sources, for instance 3D scans. Implement a system to easily give the model some sort of hierarchical structure. This may be through the user interface, or perhaps by fitting a model with a known hierarchical structure to the polygon mesh (see this for one way you might do this). If you choose to have a manual user interface, it should be very intuitive.
Through your implementation, you should be able to specify how the deformations at the joints should be done. On a model of a human, for instance, a bending elbow should result in the appropriate deformation of the mesh around the elbow (and, if you're really ambitious, some bulging in the biceps).
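The standard way to get that smooth deformation is linear blend skinning: each vertex carries weights toward the nearby bones, and its deformed position is the weighted average of where each bone's transform would put it. A deliberately simplified 2D sketch for a single bending joint (the setup and names are ours):

```cpp
#include <cmath>

struct P2 { float x, y; };

// Linear blend skinning at one joint: bone A is fixed, bone B rotates by
// `bend` radians about the joint pivot. A vertex's weight w in [0, 1]
// toward bone B would typically fall off with distance from the joint;
// blending the two candidate positions smooths the mesh around an elbow.
P2 SkinVertex(P2 v, P2 pivot, float bend, float w) {
    P2 va = v;                                     // under bone A: unchanged
    float c = std::cos(bend), s = std::sin(bend);  // under bone B: rotated
    float dx = v.x - pivot.x, dy = v.y - pivot.y;
    P2 vb{pivot.x + c * dx - s * dy, pivot.y + s * dx + c * dy};
    return {(1 - w) * va.x + w * vb.x, (1 - w) * va.y + w * vb.y};
}
```

Bicep bulging goes beyond this linear blend -- it needs an extra displacement driven by the joint angle, or a muscle model layered on top.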