The goal of the project was to extend the existing framework of the “Raytracer”-Project to handle photon maps. These allow the user to produce effects that cannot be generated with conventional ray tracing, such as caustics, indirect illumination and volumetric scattering of light, which leads to more realistic renderings. Compared to “traditional” ray tracing, where we only cast viewing rays into the scene, photon mapping uses a two-pass rendering approach. In the first pass, photons are traced from the light sources into the scene and are deposited after having interacted with objects in the scene. In the second pass, viewing rays are traced into the scene and the shading color of an intersection is determined based on the number and color of the photons in its neighborhood.
Our main references for this work were the book “Realistic Image Synthesis using Photon Mapping” and the paper "A Practical Guide to Ray Tracing and Photon Mapping", both by Henrik Wann Jensen.
We also use the kd-tree of the “ANN - Approximate Nearest Neighbor”-Library as the data structure to store our photon map.
To simulate the transport of light through the scene, every light source has to “throw” a large number of photons into the scene. It is important that the total light energy in the scene does not depend on the number of photons.
Our approach:
We set a global number of photons that we want to throw into the scene and divide them among the light sources. We implemented the throwing of photons for point and directional lights. For point lights, we sample the direction of each photon uniformly from the surface of a unit sphere. The intensity of each photon is then given by the intensity of the light divided by the number of photons.
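A minimal sketch of this emission step is given below. The Photon struct and the helper names (uniformRandom, sampleUnitSphere, tracePhoton) are illustrative placeholders, not the actual API of our framework:

    // Sketch: emit photons from a point light with equal power per photon.
    #include <cmath>
    #include <cstdlib>

    struct Vec3 { double x, y, z; };
    struct Photon { Vec3 pos, dir, power; };

    double uniformRandom() {                  // uniform in [0,1)
        return rand() / (RAND_MAX + 1.0);
    }

    // Rejection-sample a uniformly distributed direction on the unit sphere.
    Vec3 sampleUnitSphere() {
        Vec3 d;
        double len2;
        do {
            d.x = 2.0 * uniformRandom() - 1.0;
            d.y = 2.0 * uniformRandom() - 1.0;
            d.z = 2.0 * uniformRandom() - 1.0;
            len2 = d.x*d.x + d.y*d.y + d.z*d.z;
        } while (len2 > 1.0 || len2 == 0.0);
        double len = std::sqrt(len2);
        d.x /= len; d.y /= len; d.z /= len;
        return d;
    }

    void emitFromPointLight(const Vec3& pos, const Vec3& intensity, int N) {
        for (int i = 0; i < N; ++i) {
            Photon p;
            p.pos = pos;
            p.dir = sampleUnitSphere();
            // Each photon carries 1/N of the light's power, so the total
            // energy in the scene does not depend on the photon count.
            p.power = { intensity.x / N, intensity.y / N, intensity.z / N };
            // tracePhoton(p);  // recursive tracing, described below
        }
    }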
For directional lights, we need to sample the origin of the photons. We do so by projecting the bounding box of the scene onto a plane outside the scene that is perpendicular to the light direction. The intensity P of each photon is then P = L·A/N, where L is the intensity of the light, A is the area of the projection plane and N the number of photons.
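A corresponding sketch for a directional light, assuming the projection plane is spanned by two vectors u and v and has area A (the names are again illustrative, not the framework API):

    // Sketch: emit photons from a directional light through a plane of
    // area A placed outside the scene, perpendicular to the light direction.
    void emitFromDirectionalLight(const Vec3& dir, const Vec3& intensity,
                                  const Vec3& origin, const Vec3& u,
                                  const Vec3& v, double A, int N) {
        for (int i = 0; i < N; ++i) {
            double s = uniformRandom(), t = uniformRandom();
            Photon p;
            // Uniformly distributed origin on the projection plane.
            p.pos = { origin.x + s*u.x + t*v.x,
                      origin.y + s*u.y + t*v.y,
                      origin.z + s*u.z + t*v.z };
            p.dir = dir;
            // P = L*A/N: the plane receives power L*A, split over N photons.
            p.power = { intensity.x * A / N,
                        intensity.y * A / N,
                        intensity.z * A / N };
            // tracePhoton(p);
        }
    }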
The photons are recursively traced similarly to viewing rays; once they hit a material, they get either reflected, refracted or absorbed, depending on the material properties.
Our approach:
When tracing photons, we use a technique proposed by Jensen called “Russian roulette”. Instead of lowering the intensity of a photon after it hits a surface and throwing several new photons, we probabilistically decide whether to apply diffuse reflection, specular reflection, refraction or absorption to the photon, based on the given parameters of the material.
Example:
Let’s consider a surface that has a transmissive and a reflective component. We take the average value P_t of the transmissive RGB-values and the average value P_r of the reflective RGB-values. Based on a uniform random number ξ between 0 and 1, we decide what to do with the photon:
- ξ < P_t: transmission
- P_t ≤ ξ < P_t + P_r: reflection
- P_t + P_r ≤ ξ: absorption
If we instead threw two photons with intensities proportional to P_t and P_r, we would exponentially increase the number of photons, with most of them having very low intensity and not contributing much to the scene. Additionally, the Russian-roulette approach helps keep the intensity of all the photons about the same. This is important for the k-nearest-neighbor search to work properly.
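A sketch of this decision rule, assuming P_t + P_r ≤ 1 (Vec3 and uniformRandom as in the emission sketch above):

    // Sketch: Russian roulette for a surface with transmissive and
    // reflective components, averaged over the RGB channels.
    enum PhotonEvent { TRANSMIT, REFLECT, ABSORB };

    PhotonEvent russianRoulette(const Vec3& transmissive, const Vec3& reflective) {
        double Pt = (transmissive.x + transmissive.y + transmissive.z) / 3.0;
        double Pr = (reflective.x + reflective.y + reflective.z) / 3.0;
        double xi = uniformRandom();
        if (xi < Pt)       return TRANSMIT;   // xi < Pt
        if (xi < Pt + Pr)  return REFLECT;    // Pt <= xi < Pt + Pr
        return ABSORB;                        // Pt + Pr <= xi
    }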
Since our basic framework did not impose any restrictions on the material properties, we had to make sure that in all our scenes the color values add up to no more than one. This can be achieved by scaling the properties down to below one and increasing the light intensity appropriately.
During the tracing-process, photons get deposited throughout the scene. They have to be stored with their position, their direction and their intensity/color. Since it is important for the shading equation to locate photons in the neighborhood of the ray-intersection, a good data structure for the photon-map is crucial. A kd-tree is often used for this purpose.
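A stored photon thus needs only a few fields; a possible record looks like this (Jensen [1] describes a more compact 28-byte layout with a compressed direction):

    // Sketch: one photon record as deposited in the scene.
    struct StoredPhoton {
        float position[3];   // world-space hit point
        float direction[3];  // incoming direction of the photon
        float power[3];      // RGB intensity carried by the photon
    };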
Our approach:
We store a copy of a photon in a vector if it hits a diffuse surface or gets scattered or absorbed in the participating medium. Our tracePhoton function keeps track of the type of photon we are dealing with. We distinguish three different types of photons:
- “Caustic” Photons are photons that have only been reflected or refracted
- “Indirect” Photons have at least once been reflected at a diffuse or specular surface
- “Scattered” Photons have been scattered or absorbed in a volumetric material
In the next step we use our stored photons to create a separate ANN kd-tree for every type of photon. This allows us to define different scaling factors as well as different k-values for our k-nearest-neighbor search for caustics, indirect illumination and volumetric scattering.
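A sketch of the tree construction from the collected photon vectors, using the ANN allocation and constructor calls (StoredPhoton as in the sketch above; error handling omitted):

    #include <ANN/ANN.h>
    #include <vector>

    // Build an ANN kd-tree over the positions of the stored photons.
    ANNkd_tree* buildPhotonTree(const std::vector<StoredPhoton>& photons) {
        int n = static_cast<int>(photons.size());
        ANNpointArray pts = annAllocPts(n, 3);     // n points in 3D
        for (int i = 0; i < n; ++i)
            for (int j = 0; j < 3; ++j)
                pts[i][j] = photons[i].position[j];
        return new ANNkd_tree(pts, n, 3);          // tree references pts
    }

    // One tree per photon type:
    //   ANNkd_tree* causticTree  = buildPhotonTree(causticPhotons);
    //   ANNkd_tree* indirectTree = buildPhotonTree(indirectPhotons);
    //   ANNkd_tree* volumeTree   = buildPhotonTree(scatteredPhotons);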
The density of the photon-map has to be visualized in the shading equation. A common approach is to look up the radius of the n closest photons to the intersection point we want to shade and then calculate the intensity depending on the radius (the smaller the radius, the larger the intensity).
Our approach:
We decided to keep our traditional ray-tracing approach for direct illumination and to use the photon maps only for the additional effects mentioned above. The reason not to calculate everything based on the photon map is that the results tend to be quite noisy unless a very high number of photons is used.
To visualize our photon maps, we modify our shading equation to incorporate these effects. We take the point of intersection and search for photons close to it. For this we use the ANN nearest-neighbor search (with a query time of roughly O(K log n), where n is the size of the tree and K is the number of nearest photons requested) to find the K closest neighbors. The radius of the farthest of these K photons is then used to estimate the intensity, with the intensity being proportional to the inverse of the square of the radius. The color components are modified depending on the average color of the photons.
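A sketch of this lookup and density estimate using the ANN query API (StoredPhoton and Vec3 as in the sketches above; ANN returns squared distances, which is exactly what the estimate needs; the per-map scaling factors are omitted):

    #include <cmath>

    // Estimate the intensity at a shading point from the K nearest photons.
    Vec3 estimateFromMap(ANNkd_tree* tree,
                         const std::vector<StoredPhoton>& photons,
                         const Vec3& hit, int K) {
        ANNpoint query = annAllocPt(3);
        query[0] = hit.x; query[1] = hit.y; query[2] = hit.z;

        std::vector<ANNidx>  idx(K);
        std::vector<ANNdist> dist2(K);               // squared distances
        tree->annkSearch(query, K, idx.data(), dist2.data(), 0.0);

        // Sum the power of the K photons found.
        Vec3 sum = { 0.0, 0.0, 0.0 };
        for (int i = 0; i < K; ++i) {
            const StoredPhoton& p = photons[idx[i]];
            sum.x += p.power[0]; sum.y += p.power[1]; sum.z += p.power[2];
        }

        // dist2[K-1] is r^2 of the farthest photon; intensity ~ 1/(pi*r^2).
        double norm = 1.0 / (M_PI * dist2[K - 1]);
        annDeallocPt(query);
        return { sum.x * norm, sum.y * norm, sum.z * norm };
    }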
For caustics, it is better to use a rather small k in order to preserve the sharp edges of the caustic; for indirect illumination, k should be rather large to reduce noise.
Caustics are patterns of light on surfaces that are generated through reflection or refraction by other objects in the scene. Light rays get concentrated on certain spots and produce visually striking patterns that we can observe in the real world. For example, caustics can occur on the bottom of a swimming pool, behind a crystal sphere or a glass filled with a liquid, or inside a metal ring. Photon mapping is one popular approach to add these phenomena to a ray tracer. We store the photons in a map called the caustic map. Photons are added to the map if they undergo one or more reflections or refractions before hitting a diffuse surface, where they are stored.
The caustic of a ring on a table
Caustics on the ground of a pool
Indirect illumination is caused by the reflection of light from diffuse surfaces. In the real world, we are very used to indirect illumination; it is, for example, the reason why we can see things in a room even though the sunlight from the window does not hit them directly. It is crucial to account for indirect illumination to create truly photorealistic scenes with a ray tracer. Photon mapping is a very efficient method to simulate indirect illumination. Photons are stored in the indirect-illumination map after their first reflection from a diffuse or specular surface.
Examples of indirect illumination (the indirect illumination in the second image is scaled up to show the effect)
For realistic scenes, other than those set in a vacuum, and particularly to get effects like smoke or dust, we need to perform volumetric scattering, i.e. rays from lights can get scattered or absorbed within the participating media as well. Thus, to incorporate volumetric scattering, we need to handle intersections not only with the surfaces of objects but also within the participating media.
We store the photons created by these intersections in a separate volume photon map. A commonly used method here, based on importance sampling, is called adaptive ray marching, where the next interaction occurs at a distance of
Δx = -ln(ξ)/σ_t, where ξ is a uniform random number in (0,1] and σ_t is the extinction coefficient of the medium.
At this location, depending on the absorption probability σ_a/σ_t (with σ_a the absorption coefficient of the medium), we either absorb the photon or scatter it in a random direction.
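A sketch of one such marching step, assuming isotropic scattering (Photon, uniformRandom and sampleUnitSphere as in the emission sketch above; all other names are illustrative):

    #include <cmath>

    // Sketch: advance a photon to its next interaction inside the medium,
    // then absorb or scatter it.
    bool marchPhoton(Photon& p, double sigma_a, double sigma_t) {
        // Importance-sample the distance: dx = -ln(xi)/sigma_t, xi in (0,1].
        double xi = 1.0 - uniformRandom();
        double dx = -std::log(xi) / sigma_t;
        p.pos.x += dx * p.dir.x;
        p.pos.y += dx * p.dir.y;
        p.pos.z += dx * p.dir.z;

        // storeInVolumeMap(p);               // deposit in the volume map

        if (uniformRandom() < sigma_a / sigma_t)
            return false;                     // absorbed: stop tracing
        p.dir = sampleUnitSphere();           // scattered isotropically
        return true;                          // continue marching
    }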
Similarly, during the ray-tracing phase we use adaptive ray marching to compute the intensity due to scattering and add it to our shading equation, appropriately scaled.
We were able to render the light cone of a light source that is cut off by a box open at the bottom, inside a “dusty” Cornell box. Since this turned out to be very time-consuming, we created it only at a small resolution, but used 3x3 anti-aliasing to reduce the noise.
The Cornell box with a light cone, with and without scattering
The quality of the result depends heavily on the number of photons that are thrown into the scene. An insufficient density of photons on a surface leads to noisy artifacts, which generally appear as circular blobs around points where the photons are concentrated. The noise can be reduced either by increasing the k-value of the k-nearest-neighbor search, as this increases the radius over which we are integrating, or by throwing more photons into the scene, which leads to better sampling. However, both solutions increase the rendering time substantially.
The Cornell box with artifacts due to a small number of photons in the scene
Since the search for the k closest photons to a given intersection looks for photons inside a spherical volume around it, it can lead to the selection of photons that are not on the surface we want to shade, but behind it or on neighboring surfaces. This can cause boundaries to appear artificially bright, and brightness can leak from one surface to another.
The caustic contribution is scaled up to show the effect
[1] Henrik Wann Jensen. Realistic Image Synthesis Using Photon Mapping. ISBN: 1-56881-140-7. AK Peters, 2001.
[2] Henrik Wann Jensen. A Practical Guide to Ray Tracing and Photon Mapping. SIGGRAPH'2004 Course 19, Los Angeles, August 2004.
[3] David M. Mount and Sunil Arya. ANN: A Library for Approximate Nearest Neighbor Searching, Version 1.1.1.
[4] The framework for the Raytracer-Project provided in class, including our extensions.