Getting Started



What is OpenGL?

OpenGL sits between the graphics hardware and the application, providing a standard, cross-platform API for hardware-accelerated rendering. There are a few other popular rendering technologies, such as Microsoft’s DirectX, Apple’s Metal, and the more recent Vulkan, but we will be using OpenGL because it is supported on Windows, macOS, and Linux.

In the first project, you will write OpenGL code that draws primitives like triangles and lines to emulate brush strokes. In the second project, you will write OpenGL shaders that describe how the GPU colors your 3D scene and models. At any time, you are free to explore OpenGL on your own and go above and beyond to extend the program to suit your own needs.

How does it work?

Graphics hardware used to be specialized and inflexible: a fixed-function pipeline that output pixels to the screen. These days, many stages of this pipeline are programmable (shaders), and we can even perform arbitrary, massively parallel computations on the GPU.

OpenGL operates like a state machine with global variables: when something is changed, it stays changed until something else changes it again. All the state associated with an instance of OpenGL is stored in an OpenGL Context. However, while the OpenGL API is defined as a standard, the creation of an OpenGL Context is not. Every operating system does it differently, so we use a windowing toolkit like Qt or FLTK that handles cross-platform context creation automatically and also provides a GUI framework.

Since there is no single implementation of OpenGL, graphics hardware vendors supply their own implementation with their graphics drivers, which necessitates loading OpenGL functions dynamically at runtime. It would be an enormous pain to do this by hand, so a loading library such as GLEW is used instead.
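The loading step itself is small. The fragment below shows the typical GLEW initialization pattern; it assumes an OpenGL context has already been created and made current, so it will not run on its own:

```cpp
#include <GL/glew.h>  // must be included before other OpenGL headers
#include <iostream>

// Call once, after an OpenGL context exists and is current.
// (Illustrative fragment; glewInit() fails without a current context.)
void loadGLFunctions() {
    glewExperimental = GL_TRUE;  // ask GLEW to also load core-profile entry points
    GLenum err = glewInit();
    if (err != GLEW_OK) {
        std::cerr << "GLEW init failed: " << glewGetErrorString(err) << '\n';
    }
}
```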

Graphics hardware does not understand higher-level primitives such as circles and rectangles; it only understands how to draw things like points, lines, and triangles. A 3D sphere drawn on the GPU is actually a whole bunch of triangles that approximate a sphere. Each primitive has one (point), two (line), or three (triangle) vertices, to which you can attach any data you would like, such as a 3D position or an intrinsic color.


The set of all data and data types belonging to a vertex is called the Vertex Format. The vertices of these basic primitives are stored in something called a Vertex Buffer, which is really just a GPU-allocated piece of memory. You can store the data for each vertex in separate vertex buffers (for example, one vertex buffer for 3D positions and another for intrinsic colors), or you can store it interleaved in one giant vertex buffer. Everything required to render a specific set of primitives is saved in a Vertex Array Object.

A lot of OpenGL functions operate on the notion of a global “current” something that is “bound.” For instance, there is a global GL_ARRAY_BUFFER binding point which must have a Vertex Buffer bound to it before you can do anything with the GL_ARRAY_BUFFER, such as uploading data to it.

OpenGL manages its own memory, and OpenGL objects are represented by unique integer IDs. There is a whole slew of glGen* functions that request a GPU resource to be allocated and hand you back its ID. However, you must also remember to ask OpenGL to delete those resources when they are no longer used.
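Putting the last few paragraphs together, the typical lifecycle of an OpenGL object looks like the fragment below. It assumes a live OpenGL context with functions already loaded, so it is shown for illustration only and will not run standalone:

```cpp
// Generate -> bind -> use -> delete: the standard OpenGL object lifecycle.
// (Requires a current OpenGL context; illustrative fragment only.)
GLuint vbo = 0;
glGenBuffers(1, &vbo);               // ask OpenGL to allocate a buffer; get back its ID
glBindBuffer(GL_ARRAY_BUFFER, vbo);  // make it the "current" GL_ARRAY_BUFFER

float positions[] = { -0.5f, -0.5f,   0.5f, -0.5f,   0.0f, 0.5f };  // one triangle
glBufferData(GL_ARRAY_BUFFER, sizeof(positions), positions, GL_STATIC_DRAW);  // upload

// ... set up a Vertex Array Object and vertex attributes, then draw ...

glDeleteBuffers(1, &vbo);            // OpenGL owns the memory; we must release it
```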


If you would like to know more about OpenGL and how it works, you can read a lot more about it on the Khronos OpenGL Wiki.

We provide most of the boilerplate OpenGL code required to run the program because writing it would take too long and would not be a very good use of your time. We would rather that time be spent learning higher-level graphics concepts and trying out cool bells and whistles. If you would like to learn more about OpenGL, this website goes through how to start drawing things from scratch.


What is Qt?

Qt is a framework for cross-platform native C++ application development. We will primarily be using it for its rich windowing toolkit to build powerful UIs and manage OpenGL Contexts. Qt comes with two primary tools: Qt Creator, an IDE, and Qt Designer, which is integrated into Qt Creator and builds UI forms visually. Qt is very popular and widely used in industry and academia. You can read more about it in the Qt documentation.

How do I get started?

Open Qt Creator and open the .pro project file found in the root directory; this will open the project and all sub-projects.

It will ask you to create a build configuration; simply hit OK and you are ready to go. Select either the Debug or Release build mode and hit Run to build.

If you ever need to add or remove a file, those changes need to be reflected in the .pro file. Operations done through Qt Creator typically apply those changes to the .pro file automatically, while operations done outside of Qt require you to edit the .pro file and re-run qmake manually.
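For reference, the relevant part of a .pro file is just a few variable lists. The file names below are hypothetical, not the actual files in the provided project:

```
# Hypothetical excerpt of a qmake .pro file. New source and header files
# must be added to these lists; Qt Creator edits them for you.
TEMPLATE = app
QT      += core gui opengl
SOURCES += main.cpp \
           glwidget.cpp
HEADERS += glwidget.h
```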


Video Editing with Blender

Blender is a free and open-source 3D creation suite. It supports the entirety of the 3D pipeline: modeling, rigging, animation, simulation, rendering, and even video editing, which is what this guide will focus on. For our purposes, Blender is actually excessively heavy and confusing at points; if you have access to other editing software that supports compositing image sequences, such as Adobe Premiere or Adobe After Effects, you could also use those. However, Blender is already installed on the lab machines, and it is also available to download for free. This guide will go through the basics of creating your animation; we also suggest looking up Blender video-editing tutorials online for more details.

First, configure the settings for the video you're creating:

Next, add your shots as image sequences to your timeline:

When you're ready, export your animation: