Program Usage


The Hierarchy in the left pane represents all the objects in the current Scene. A child object inherits the transformations applied to the parent object, as in a scene graph. Parent-child relationships can be changed by simply dragging the objects around in the pane. You may also find it useful to create Empty objects as parents when building your model. Double-click an object to change its name.

The Assets tab in the left pane lists things like Textures, Shaders, Materials, and Meshes used by the Scene. Together, the Assets and Scene Graph form a "Scene", which can be saved to or loaded from disk.

Selecting an asset or an object displays its properties on the right side in the Inspector. Here you can change its properties, assign different textures, materials, and so on. For scene objects, you can also hide properties that should not be changed.

Modeler uses a component-based system for scene objects. Every object has a "Transform" component that represents its Translation, Rotation, and Scale in 3D space. Most rendered objects will have a "Geometry" component that defines the shape of the object, and a "Mesh Renderer" component that uses a "Material" asset to define how to render that shape. Lights have an additional "Light" component that defines the light properties.

The Console at the bottom is mostly for debugging purposes; at any time in your code, you can call Debug::Log.WriteLine to print to this console. If you hide the Inspector or any of the other panels in the program, right-click on the toolbar to show them again.

The Scene view in the middle is a rendering of your Scene Graph. You can render it as points, as a wireframe, or fully shaded. If you are having trouble with the orientation of the Perspective view, try switching to an Orthographic view.

Camera Controls:

Scene Controls:

Creating a new shader: see "Using Shaders In Your Model" below.

Skeleton Program


The Modeler codebase is quite substantial (not so much a skeleton this time around). It's a good idea to get an understanding of what's going on.

Modeler Architecture

Modeler has two major components: the Engine and the UI. For the requirements, you will most likely only be concerned with the Engine, unless you attempt a bell or whistle that goes above and beyond what is currently supported.

Modeler loads one Scene at a time. Each Scene has an Asset Manager that handles loading all the Assets belonging to the Scene. The Scene also owns all the Scene Objects in the scene, which are stored in a map using unique identifiers. A Scene Object contains a mixture of Components that define some behaviour. For instance, a Transform Component (which defines the Scene Object's transformations) combined with a Point Light Component (which defines light properties) makes a Point Light. Components are built from Properties that can communicate responsively with the UI and can be serialized into the file format.

A Renderer takes a Scene, does a depth-first traversal of the Scene Objects that comprise the Scene Graph, and renders each component that is renderable. It has its own Resource Manager that handles caching GPU versions of assets.

For a more in-depth explanation on the codebase, please visit this document.

Surface of Revolution


In OpenGL, all scenes are made of primitives like points, lines, and triangles. Most 3D objects are built from a mesh of triangles. In this project, you will implement a surface of revolution by creating a triangle mesh. For each mesh, you define a list of vertices (the points that make up the primitive) with normals and texture coordinates, and an array of indices specifying triangles. This is later used by the OpenGL Renderer through the method glDrawElements. See opengl/glmesh.cpp.
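
Here is a minimal sketch of that layout with illustrative names (the skeleton's actual mesh API lives in opengl/glmesh.cpp):

    #include <vector>

    struct MeshData {
        std::vector<float> positions;       // x, y, z per vertex
        std::vector<float> normals;         // x, y, z per vertex, same order
        std::vector<unsigned int> indices;  // every 3 entries form one triangle
    };

    // One quad built from two triangles that share two vertices.
    MeshData MakeQuad() {
        MeshData m;
        m.positions = { 0,0,0,  1,0,0,  1,1,0,  0,1,0 };
        m.normals   = { 0,0,1,  0,0,1,  0,0,1,  0,0,1 };  // all facing +z
        m.indices   = { 0,1,2,  0,2,3 };
        return m;
    }

    // Once the arrays are uploaded to buffer objects, the renderer draws with:
    //   glDrawElements(GL_TRIANGLES, num_indices, GL_UNSIGNED_INT, nullptr);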

Surface Normals

Surface normals are perpendicular to the plane that's tangent to a surface at a given vertex. Surface normals are used for lighting calculations, because they help determine how light reflects off of a surface.

In OpenGL, we often want to approximate smooth shapes like spheres and cylinders using only triangles. One way to make the lighting look smooth is to use the normals from the shape we're trying to approximate, rather than just making them perpendicular to the polygons we draw. This means we calculate the normals for each vertex (per-vertex normals), rather than for each face (per-face normals). Normals are supplied to OpenGL in a giant array, in the same order in which the vertex positions array is built. Shaders allow us to get even smoother lighting by calculating the normals at each pixel. You can compare these methods below:

Comparison images: per-face, per-vertex, and per-pixel shading

Texture Mapping

Texture mapping allows you to "wrap" images around your model by mapping points on an image (called a texture) to vertices on your model. For each vertex, you specify the texture coordinate that the vertex should map to as a 2D pair (U, V) or (S, T), where U or S is the X-coordinate and V or T is the Y-coordinate of the point on the texture that should line up with the vertex. UVs are passed as a giant array in the same manner that normals and vertex positions are.
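
For example, a hedged sketch continuing the quad from the surface of revolution section (one (U, V) pair per vertex, in the same vertex order as positions and normals):

    #include <vector>

    std::vector<float> uvs = {
        0.0f, 0.0f,   // vertex 0 -> bottom-left of the texture
        1.0f, 0.0f,   // vertex 1 -> bottom-right
        1.0f, 1.0f,   // vertex 2 -> top-right
        0.0f, 1.0f,   // vertex 3 -> top-left
    };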

Using Textures In Your Model

When you want to use a texture, you'll need to do the following:

  1. Import a texture into your scene

  2. Create a Shader Program that utilizes shaders that sample from textures (see the sketch after this list)

  3. Create a Material that uses that Shader Program, and set the textures
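
Below is a minimal sketch of a fragment shader that samples from a texture. The uniform and varying names here are illustrative, not necessarily those used by the provided shader_textured.frag:

    #version 330 core
    uniform sampler2D diffuse_map;  // bound via the Material's texture slot
    in vec2 uv;                     // interpolated from the per-vertex UVs
    out vec4 frag_color;

    void main() {
        frag_color = texture(diffuse_map, uv);
    }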

Hierarchical Modelling


Model Requirements

For the artifact, you will create a Hierarchical Model. Create a hierarchy of nodes, using a combination of Empty nodes and Shape nodes. In Animator, you will end up animating your Model (or you can create a new one) by manipulating the set of transforms and other properties on all these nodes over time. Hide any properties you do not want exposed with the Inspector's "Edit Properties" on each node.

While it does not have to be a masterpiece, we do require that it has at least two levels of branching. What this means is that if you have a torso, and attach two arms to it, this is one level of branching. The torso splits into each arm. If each arm then has three fingers, that is two levels of branching, since the arm splits into fingers. Note that if you only have one finger, then that does not add additional branching!

Tree Diagram

Your tree diagram should illustrate the hierarchy of nodes that make up your model, and describe the transforms that each node has. The transforms should clearly indicate which values are variable (i.e., can be animated) and which are constant. These diagrams can be hand-drawn or created with a program (LaTeX, PowerPoint, or similar). With your initial diagram, also include some visual to help the TAs know what you're trying to model. This can be a rough sketch, a diagram, a real-world image with some lines and annotations on top, or, if you're ahead, a screenshot of a rendering.

Here is a very barebones example of the tree diagram requirement. Consider the simplest arm model, consisting of two long boxes with a single joint in the center. Note that this example does not have the required amount of branching.

Simple Hierarchy
An A+ diagram for this model would look like this:
Simple Hierarchy Tree Diagram

You would get full correctness credit for having the same text and hierarchy structure as shown. The color-coding and layout, as well as the A1 and A2 metanodes, make this diagram easier to interpret, and thus push it above and beyond.

Model UI

When demoing your model, or when keyframing it for your animation, you don't want to be clicking down into different hierarchy nodes all the time in order to reach the right properties. You could instead have a master set of sliders that you can easily manipulate.

For example, in the simple hierarchy above, clicking down into the "A2 Container" node each time you want to change the joint angle is somewhat tedious (and will only get harder with more complex hierarchies). For this requirement, you will add some UI elements to control the variables in your hierarchical model transformations.

The easiest way to do this is to create an empty node at the top level of your hierarchy, which will serve as a container for your Hierarchy UI. Then, we can implement a "HierarchyUI" Component (see Skeleton), and add this component to the empty container. Here is a step-by-step overview of how to do this (pay special attention to steps 3 and 4; a sketch of those two steps follows the list):

  1. Create a new class in Engine/src/scene/components that inherits from Component.
  2. Add the new ComponentType to the enum in enum.h, the string mappings in component.cpp, and the includes in components.h.
  3. Add appropriate properties to the Component for each variable in your hierarchy. Refer to other components (e.g. Engine/src/scene/components/camera.h) for examples. The Editor will automatically create a UI element that controls each property: a slider for a RangeProperty, a text box for a DoubleProperty, a file selector for a FileProperty, etc. See Engine/src/properties for the available property types.
  4. Add a handler for the ValueChanged signal for each of your properties. This handler should update your hierarchy's transformations since it will be called every time the user changes the value of the property. You can use the SceneObject::FindDescendant function to help. This will probably require you to have passed in the scene root somewhere, e.g. in the constructor.
  5. Implement the rest of the Component interface by implementing the Component::GetType, Component::GetBaseType, and Component::DuplicateComponent functions. These should all be trivial.
  6. Find a way to actually add this component to a node in your scene. One way to do this is to have the user manually add the HierarchyUI to a node. To do this, create a QAction in MainWindow::CreateSceneObjectsActions that adds a component to the selected node (see the example for the QAction add_envmap_action), and then add the QAction to the appropriate menu in MainWindow::CreateMenus.
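
As a concrete (but hedged) illustration of steps 3 and 4, here is a sketch of such a component. Only Component, RangeProperty, the ValueChanged signal, and SceneObject::FindDescendant are names this document mentions; every other signature is an assumption, so adapt it to the skeleton's actual API:

    // Engine/src/scene/components/hierarchyui.h (hypothetical)
    class HierarchyUI : public Component {
    public:
        explicit HierarchyUI(SceneObject* scene_root) :
            elbow_angle_(0.0, -90.0, 90.0),   // value, min, max (assumed ctor)
            scene_root_(scene_root)
        {
            // Step 4: re-pose the hierarchy whenever the slider moves.
            // (The exact signal-connection syntax depends on the skeleton.)
            elbow_angle_.ValueChanged.Connect(this, &HierarchyUI::OnElbowChanged);
        }

        void OnElbowChanged(double degrees) {
            // One slider can drive one node, or several nodes symmetrically.
            SceneObject* joint = scene_root_->FindDescendant("A2 Container");
            if (joint) joint->GetTransform().SetRotation(0.0, 0.0, degrees);
        }

    private:
        RangeProperty elbow_angle_;   // the Editor shows this as a slider
        SceneObject* scene_root_;     // passed in so FindDescendant can run
    };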

Using this HierarchyUI, you can also have more complex dependencies between node transformations. For example, if you want to use the same variable to control multiple joints symmetrically, you can do that in the OnValueChanged function.

If you want to implement some UI elements that aren't covered by one of the existing property types (e.g. a button to execute an action), then you might have to implement a new Property type. Here are some places to look:

This HierarchyUI component is a very specialized component, since it depends on a predefined scene structure. Usually, components are self-contained and operate specifically on the node itself.

Blinn-Phong Shader


A shader is a program that controls the behavior of a piece of the graphics pipeline on your graphics card.

Shaders determine how the scene lighting affects the coloring of 3D surfaces. There are two basic kinds of lights:

  1. Directional lights, which shine from infinitely far away in a fixed direction, so all of their rays are parallel.

  2. Point lights, which radiate light outward in all directions from a position in the scene.

A shading model determines how the final color of a surface is calculated from a scene's light sources and the object's material. We have provided a shader that uses the Blinn-Phong shading model for scenes with directional lights. See the lecture notes for details on the Blinn-Phong shading model.
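
As a refresher, here is the core of the model for a single light, written as a hedged GLSL sketch (parameter names are illustrative, not the provided shader's exact uniforms):

    // N: surface normal, L: direction to the light, V: direction to the viewer
    // (all normalized). kd/ks are the material's diffuse/specular colors.
    vec3 BlinnPhong(vec3 N, vec3 L, vec3 V, vec3 light_color,
                    vec3 kd, vec3 ks, float shininess) {
        vec3 H = normalize(L + V);                         // half vector
        float diff = max(dot(N, L), 0.0);                  // Lambertian term
        float spec = pow(max(dot(N, H), 0.0), shininess);  // specular term
        return light_color * (kd * diff + ks * spec);
    }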

Shadow Mapping

Shadow mapping involves two steps: constructing the shadow maps, and computing shadowing terms.

Our framework includes a prerender pass each frame, where we compute one shadow map for each point light in the scene. Specifically, the prerender pass traverses the hierarchy looking for nodes with EnvironmentMap components. An EnvironmentMap component indicates that we want to store some sort of cubemap around the node center. The prerender pass will then render the scene (minus the node itself and its children) six times, once for each face of the cube, so that every outgoing direction from the node center is covered. The faces of this cubemap are oriented according to the world coordinate system.

This EnvironmentMap component can be used for environment mapping as well as simple reflection and refraction effects. You can set the resolution of each face of the environment map, as well as the near plane and far plane that it renders from. Most importantly for shadow mapping, you can also set the Render Material for the cubemap. If the Render Material is set, then every object in the scene will be rendered to the cubemap using that set of shaders instead of their own default shader. For shadowmaps, you want to store not the actual lit appearance of the scene in each direction, but the distance of the nearest object from the cubemap center in each direction. You will need to write this depth shader.
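
A hedged sketch of that depth shader's fragment stage (the variable names are assumptions): instead of a lit color, write out the distance from the fragment to the cubemap center:

    #version 330 core
    in vec3 world_position;        // passed through from the vertex shader
    uniform vec3 light_position;   // the cubemap / point light center
    out vec4 frag_color;

    void main() {
        float dist = length(world_position - light_position);
        frag_color = vec4(dist, dist, dist, 1.0);  // distance, not a lit color
    }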

The program assumes that any EnvironmentMaps for point light nodes are shadowmaps, and will automatically pass them into your Blinn-Phong shader under uniform samplerCube point_light_shadowmaps[4]. In your lighting code, you will then have to compute the distance to the point light and then compare it to the distance stored in the shadowmap; if the point you're lighting is farther away from the light than the value stored in the shadowmap, then your point is in shadow. Note that texture() lookups from cubemaps are simply done using a direction (in the cubemap's coordinate system, which we have enforced to match the world coordinate system). See this page for more details (ignore the discussion about gsamplerCubeShadow for this project).

To prevent a surface from shadowing itself due to precision errors, you should create a user-controllable shadow bias. The surface should be unshadowed if the distance stored in the shadowmap is nearly equal to (i.e. within this bias amount from) the distance from the shaded point to the light.
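
Putting the last two paragraphs together, the shadow test might look like the following sketch (only point_light_shadowmaps comes from the skeleton; the other names are assumptions):

    uniform samplerCube point_light_shadowmaps[4];
    uniform float shadow_bias;   // user-controllable, e.g. via a property

    // Returns 0.0 if the point is in shadow from this light, 1.0 otherwise.
    // Call as: ShadowFactor(point_light_shadowmaps[0], world_pos, light_pos)
    float ShadowFactor(samplerCube shadow_map, vec3 world_pos, vec3 light_pos) {
        vec3 to_frag = world_pos - light_pos;  // cubemap space == world space here
        float stored = texture(shadow_map, to_frag).r;
        float dist = length(to_frag);
        // Shadowed only if something strictly nearer (beyond the bias) was stored.
        return (dist - shadow_bias > stored) ? 0.0 : 1.0;
    }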

To actually see some shadows, you will then have to perform the following steps:

  1. Create your depth-mapping fragment and vertex shaders
  2. Create a new shader (Assets > Create > Shader Program) and point it to your fragment and vertex shaders
  3. Create a new material (Assets > Create > Material) and point it to the new Shader Program
  4. Add a point light to the scene
  5. Select the point light's node, then click the SceneObject menu and select "Add EnvironmentMap to selected"
  6. With the point light selected, look at the Inspector, where you should see the EnvironmentMap properties. Set these as appropriate, making sure to point the RenderMaterial to your newly created material
  7. Ensure that your shadow receivers are using a material that has your shadow-mapping code in it (just using your Blinn-Phong material should work).

You can automate most of this (as is done in the reference solution), such that point lights are always created (Editor/src/mainwindow.cpp) with an EnvironmentMap component pointing to an automatically loaded (Engine/src/assets/assetmanager.h) DepthMap material.

Additional Shaders


These are additional shader ideas that you can create. You are required to create one or more additional shaders worth at least 3 whistles (or 1.5 bells) in total. Additional bells or whistles beyond that are extra credit.

You can use the sample solution Modeler to develop some of these shaders, but others require texture maps to be provided. We have provided shader_textured.frag and shader_textured.vert as a reference for how to bring texture data into your shading.

See below for instructions on how to use these in your model.

Alpha Test Shader

Some geometry has complex silhouettes but flat, planar surfaces. Rather than creating numerous triangles, we can use an alpha threshold or color key to discard certain pixels on a texture. This technique is especially useful for foliage. For an extra whistle, make it cast alpha-tested shadows too.
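
The core of the technique is a one-line test in the fragment shader; a sketch (names assumed):

    #version 330 core
    uniform sampler2D diffuse_map;
    uniform float alpha_threshold;   // e.g. 0.5
    in vec2 uv;
    out vec4 frag_color;

    void main() {
        vec4 texel = texture(diffuse_map, uv);
        if (texel.a < alpha_threshold)
            discard;                 // punch a hole in the surface
        frag_color = texel;          // or feed texel.rgb into your lighting
    }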

Reflection Shader

Create a shader that simulates a reflection effect by determining a reflected ray direction and looking up that ray in an environment map. To implement this, note that any geometry node with an EnvironmentMap component will provide that cubemap to its shaders via uniform samplerCube environment_map. See the shadow mapping section above for more details on obtaining the environment map. Regarding physical accuracy, the same caveat about the distant scene assumption for refraction (see below) applies here.
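
A sketch of the lookup, using GLSL's built-in reflect() (every name other than environment_map is an assumption):

    uniform samplerCube environment_map;  // provided by the EnvironmentMap component

    // Inside the fragment shader's main():
    vec3 I = normalize(world_position - camera_position);  // incident view ray
    vec3 R = reflect(I, normalize(normal));                // mirror direction
    vec3 color = texture(environment_map, R).rgb;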

Refraction Shader

Create a shader that simulates a refraction effect by determining a refracted ray direction and looking up that ray in an environment map. See the shadow mapping section above for more details on obtaining the environment map.
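
The lookup mirrors the reflection sketch above, but uses GLSL's built-in refract(); eta is the ratio of indices of refraction, and every name other than environment_map is an assumption:

    uniform samplerCube environment_map;
    uniform float eta;   // e.g. 1.0 / 1.5 when entering glass

    // Inside the fragment shader's main():
    vec3 I = normalize(world_position - camera_position);  // incident view ray
    vec3 T = refract(I, normalize(normal), eta);           // refracted direction
    vec3 color = texture(environment_map, T).rgb;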

Note that there are two reasons why this isn't an accurate refraction shader. First, a true refracted ray should get refracted once upon entering the object and once upon leaving it, while here we can only refract the ray once. Second, the environment map is only an approximation (a so-called "distant scene approximation") of the incident light on the object. For example, a ray pointing in direction v at the top of the object will look up the same environment map value as a ray pointing in direction v at the bottom of the object, but the true incident light along those rays will only be equal if the surface in direction v is infinitely far away from the object center.

Spot Light Shader

Create a shader that supports a spot light source, and add a third light source to your Modeler. We should be able to adjust the spot light parameters via the UI.
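
One common formulation, sketched with assumed names: compare the angle between the spot axis and the light-to-fragment direction against inner and outer cone angles, both of which could be exposed as sliders:

    uniform vec3 spot_position;
    uniform vec3 spot_direction;   // normalized spot axis
    uniform float cos_inner;       // cosine of the inner (full-intensity) angle
    uniform float cos_outer;       // cosine of the outer (cutoff) angle

    // Inside the fragment shader's main():
    vec3 to_frag = normalize(world_position - spot_position);
    float cos_angle = dot(to_frag, spot_direction);
    // 1.0 inside the inner cone, 0.0 outside the outer cone, smooth in between.
    float spot_atten = smoothstep(cos_outer, cos_inner, cos_angle);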

Cartoon Shader

Create a shader that produces a cartoon effect by drawing a limited set of colors and a simple darkening of silhouettes for curved objects, based on the normal and viewing direction at a pixel. This per-pixel silhouette-darkening approach will work well in some cases around curved surfaces, but not all. Additional credit will be given based on how well the silhouettes are done, and how well the cartoon effect looks.
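
A minimal sketch of both ingredients (the band count and edge threshold are arbitrary choices):

    // Inside the fragment shader's main(); N, L, V are normalized.
    float diff = max(dot(N, L), 0.0);
    float bands = floor(diff * 4.0) / 4.0;     // quantize into 4 discrete shades
    float edge = dot(N, V) < 0.3 ? 0.0 : 1.0;  // darken the silhouette
    vec3 color = kd * bands * edge;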

Schlick Shader

Create a shader, and sliders to control it, that uses the Schlick approximation to approximate the contribution of the Fresnel factor to the specular reflection of light.
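
For reference, Schlick's approximation is F = F0 + (1 - F0)(1 - cos theta)^5, where F0 is the reflectance at normal incidence. As GLSL:

    // f0 is a natural candidate for a material slider; cos_theta is the dot
    // between the normal (or half vector) and the view direction, depending
    // on the formulation you choose.
    float Schlick(float f0, float cos_theta) {
        return f0 + (1.0 - f0) * pow(1.0 - cos_theta, 5.0);
    }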

Vertex Shader

Vertex shaders, instead of transformation matrices, can be used to morph and deform geometry in complex ways, and it can be really efficient since the calculations are run on the GPU. See here, here, and here for examples of interesting geometry deformations done with vertex shaders. And see here for an even more impressive example: the swimming animation is done entirely by manipulating vertex positions in the vertex shader. Add at least one slider that deforms geometry in a useful way by changing vertex positions (and normals, if relevant) within a vertex shader.
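
A sketch of a slider-driven wave deformation (the uniform names are assumptions; wave_amount would be driven by your UI slider):

    #version 330 core
    layout(location = 0) in vec3 position;
    uniform mat4 model_view_projection;
    uniform float wave_amount;   // driven by a UI slider

    void main() {
        vec3 p = position;
        p.y += wave_amount * sin(4.0 * p.x);   // ripple along x
        gl_Position = model_view_projection * vec4(p, 1.0);
    }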

Tessellated Procedural Shader

Make a shader that produces an interesting, repeating pattern, such as a brick pattern, without using a texture.
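
A sketch of one way to get bricks from UVs alone (all constants are arbitrary choices):

    // Inside the fragment shader's main().
    vec2 cell = uv * vec2(8.0, 16.0);                  // grid density
    cell.x += 0.5 * step(1.0, mod(cell.y, 2.0));       // offset alternate rows
    vec2 f = fract(cell);
    float brick = step(0.05, f.x) * step(0.05, f.y);   // 0.0 on the mortar lines
    vec3 color = mix(vec3(0.8), vec3(0.6, 0.2, 0.1), brick);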

Normal Mapping Shader

This shader uses a texture to perturb the surface normals of a surface to create the illusion of tiny bumps, without introducing additional geometry. Along with the normals, you'll also want your vertex shader to take in tangent and binormal vectors.
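
The usual approach, sketched with assumed names: fetch a tangent-space normal from the texture and rotate it into world space with the TBN basis built from those vectors:

    // Inside the fragment shader's main(); tangent/binormal/normal are the
    // interpolated per-vertex vectors passed down from the vertex shader.
    vec3 n_tex = texture(normal_map, uv).rgb * 2.0 - 1.0;  // [0,1] -> [-1,1]
    mat3 TBN = mat3(normalize(tangent), normalize(binormal), normalize(normal));
    vec3 N = normalize(TBN * n_tex);   // use N in your lighting as usual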

Diffraction Shader

Create a shader that produces a diffraction effect when you move around the object.

x2 Anisotropic Shader

Create a shader that produces anisotropic specular highlighting, creating a shiny metal appearance. Additionally, add sliders to control the magnitude in 2 perpendicular directions.

x3 Cloud / Noise Shader

Create a shader that uses noise functions (like Perlin noise) to generate clouds. You may not use textures for this shader. Credit depends on the realism of the clouds.

Using Shaders In Your Model

Shader files are loaded, compiled, and linked by ShaderProgram objects. If you want to add a shader:

  1. Go to Assets->Create->Shader Program to create a Shader Program.

  2. Find the new shader in the Assets pane and set the Vertex/Fragment shaders to point to your shader files.

  3. Similarly create a new material and set it to use the Shader Program you created.

Tip: If you have an error in your shader code, you do not have to restart Modeler. Instead, fix your shader, then set the Shader Program to point to the same shader file again.