Assignment 3: PathTracer Part 2

Stephanie Claudino Daffara

PathTracer part two dealt mostly with implementing more complicated materials, environment lights, and depth of field in my ray tracer. This project was tough and took many hours of meticulous algorithm studying, understanding, and debugging. In the end I furthered my understanding of how each bounce of light adds color to the scene, how to implement reflective and refractive materials, how to create physically based microfacet materials, how to adjust the parameters that make an image look shinier or more matte, how environment map lighting works, and how to create different depths of field by combining aperture size and focal distance. Finally, in the last portion of this project I got a taste of what it is like to work with the GPU and write shaders that render much more quickly, in real time.

Part 1: Mirror and Glass Material

Here I implemented mirror and glass materials. The following sequence of images displays the differences between rendering these materials at different max ray depths. All of these were rendered at 64 samples per pixel and 4 samples per light.

max ray depth 0.
max ray depth 1.

In max depth 0 there is no light bounce, so the only light visible is the light coming directly off of the light source. In max depth 1, since my implementation of delta BSDFs evaluates to zero for (almost) every direction chosen by direct light sampling, these materials only accumulate radiance through indirect lighting. That is why at max depth 1 the room is lit up from one bounce, but both spheres are still black: no bounce has reached them through their BSDFs yet.

max ray depth 2.
max ray depth 3.

In max depth 2 you can start to see the differences between the purely reflective material (the mirror sphere on the left) and the reflective + refractive material (the glass sphere on the right). Looking at the mirrored sphere, the first bounce hits the blue wall, the red wall, and the other sphere, and the second bounce then hits the left sphere, which is why we start to see a reflection. Note, however, that the ceiling reflected in the mirror sphere is still black: the bounce that lights the ceiling needs one extra bounce off the reflective material before it shows up there. The sphere on the right almost mimics the mirror at this depth because with only two bounces light can reflect off its surface, but any light that enters the glass never gets to leave it. Max depth 3 displays the glass sphere much better because light can now exit the sphere, and you can also see the lit ceiling reflected in both spheres.

max ray depth 4.
max ray depth 5.

In max depth 4 you can follow the light: (1) the first bounce does not affect the spheres because of the way this project handles delta BSDFs, (2) the next bounce hits the glass sphere and enters it, (3) the next exits the sphere, and (4) the last hits the floor and illuminates it. This is why we can now see the bright spot on the floor under the glass sphere. The only notable differences in the mirrored sphere are a softer-looking shadow and an accurate reflection of the now-changed glass sphere. In max depth 5 there is an extra bounce off the floor and onto the wall, creating a glow on the blue wall as well.

max ray depth 100.

Finally, max depth 100 produces a very similar image to max ray depth 5. I could even argue that it looks slightly grainier than max ray depth 5, probably because of "overdoing" the number of light bounces needed to realistically represent this scene.
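For context, here is a minimal sketch of how sampling these delta BSDFs can look. It assumes directions live in a local frame with the surface normal along +z, and it uses Schlick's approximation for the Fresnel term; the names and structure are illustrative rather than the project's exact API.

```cpp
#include <cmath>
#include <cstdlib>

// Minimal 3D vector; directions are in the local surface frame (normal = +z).
struct Vec3 { double x, y, z; };

// Perfect mirror reflection about the surface normal (the z axis here).
Vec3 reflect_local(const Vec3& wo) {
    return Vec3{-wo.x, -wo.y, wo.z};
}

// Snell's law in the local frame. Returns false on total internal reflection.
// ior is the index of refraction of the material relative to air.
bool refract_local(const Vec3& wo, Vec3* wi, double ior) {
    bool entering = wo.z > 0.0;                 // ray arriving from outside?
    double eta = entering ? 1.0 / ior : ior;    // eta_i / eta_t
    double sin2_t = eta * eta * (1.0 - wo.z * wo.z);
    if (sin2_t > 1.0) return false;             // total internal reflection
    double cos_t = std::sqrt(1.0 - sin2_t);
    *wi = Vec3{-eta * wo.x, -eta * wo.y, entering ? -cos_t : cos_t};
    return true;
}

// Schlick's approximation of the Fresnel reflectance for a dielectric.
double schlick(double cos_theta, double ior) {
    double r0 = (1.0 - ior) / (1.0 + ior);
    r0 *= r0;
    double m = 1.0 - std::fabs(cos_theta);
    return r0 + (1.0 - r0) * m * m * m * m * m;
}

// Glass: pick reflection or refraction with probability given by Fresnel.
// Returns only the sampled direction; a full implementation would also
// return the throughput (reflectance / pdf) and the pdf itself.
Vec3 sample_glass(const Vec3& wo, double ior) {
    Vec3 wi;
    if (!refract_local(wo, &wi, ior)) return reflect_local(wo); // TIR: act like a mirror
    double R = schlick(wo.z, ior);
    double u = std::rand() / (double)RAND_MAX;  // coin flip with probability R
    return (u < R) ? reflect_local(wo) : wi;
}
```

Because the reflected and refracted directions are chosen deterministically (up to the coin flip), only indirect bounces ever carry radiance through these materials, which is exactly the behavior visible in the depth-0 and depth-1 renders above.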

Part 2: Microfacet Material

In this part of the project I implemented the Microfacet model. Specifically I implemented physically based isotropic rough conductors that only reflect. Here is a sequence of 4 images rendered at 2048 samples per pixel, 1 sample per light, and 5 bounces of light.

alpha set to 0.005
alpha set to 0.05
alpha set to 0.25
alpha set to 0.5

The higher the alpha value, the rougher and more matte the material looks; in contrast, the lower the alpha value, the shinier and smoother the surface. At an alpha of 0.005 you can clearly see the reflection of the open side of the box (the black on the dragon's neck), the blue bouncing off of the right wall, and the red bouncing off the left wall. The white specks are also expected at low values of alpha, since low roughness causes more noise. At 0.05 alpha, the dragon is still very reflective, with the colors of its surroundings bouncing off its surface almost like a mirror would. You can see fewer white specks, but more uniform noise throughout the scene. With alpha set to 0.25 there is almost no noise and much less reflection; the closer an object is to the dragon, the more strongly it is reflected, which is why the black opening where the camera sits is almost no longer visible on the dragon. In the last image, where alpha is set to 0.5, you can clearly see the matte effect on the surface: there is almost no noise in the image and the reflections of the walls are very soft on the dragon's surface.
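To make alpha's role concrete, the roughness enters the BRDF through the normal distribution function. Here is a rough sketch of the Beckmann NDF used by this kind of isotropic microfacet model; the names are illustrative and the Fresnel and shadowing-masking terms are only referenced in the comments.

```cpp
#include <cmath>

const double PI = 3.14159265358979323846;

// Beckmann normal distribution function D(h) for an isotropic microfacet
// surface. theta_h is the angle between the half vector h and the macro
// surface normal; alpha is the roughness (small = shiny, large = matte).
double beckmann_D(double theta_h, double alpha) {
    double cos_h = std::cos(theta_h);
    double tan2  = std::tan(theta_h) * std::tan(theta_h);
    double a2    = alpha * alpha;
    return std::exp(-tan2 / a2) / (PI * a2 * cos_h * cos_h * cos_h * cos_h);
}

// The full microfacet BRDF then has the familiar shape
//   f(wi, wo) = F(wi) * G(wi, wo) * D(h) / (4 * cos(theta_i) * cos(theta_o))
// where F is the Fresnel term and G the shadowing-masking term; alpha only
// enters through D (and, depending on the model, G).
```

A small alpha concentrates D around the macro normal, so almost all microfacets agree with the surface and reflections stay sharp; a large alpha spreads D out, which blurs the reflections into the matte look above.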

In the following images I draw a comparison between hemisphere and importance sampling. Both images were rendered at 64 samples per pixel, 1 sample per light, and 5 bounces of light.

hemisphere sampling.
importance sampling.

Since uniform hemisphere sampling converges much more slowly than importance sampling, the difference is clear at only 64 samples per pixel and 5 bounces of light: hemisphere sampling produces much more noise than importance sampling at the same number of samples per pixel and bounces of light.
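Importance sampling here means drawing half vectors proportionally to the Beckmann NDF instead of uniformly over the hemisphere. One common way to do that is to invert the NDF's CDF, roughly like the sketch below; converting the half-vector pdf into a pdf over the incoming direction is omitted.

```cpp
#include <cmath>
#include <cstdio>
#include <random>

const double PI = 3.14159265358979323846;

// Sample a half-vector direction (theta_h, phi_h) proportional to the
// Beckmann NDF, by inverting its CDF with two uniform random numbers r1, r2.
void sample_beckmann_half_vector(double alpha, double r1, double r2,
                                 double* theta_h, double* phi_h) {
    // theta is drawn from p(theta) proportional to D(theta) * cos(theta) * sin(theta)
    *theta_h = std::atan(std::sqrt(-alpha * alpha * std::log(1.0 - r1)));
    // phi is uniform because the model is isotropic
    *phi_h = 2.0 * PI * r2;
}

int main() {
    std::mt19937 rng(7);
    std::uniform_real_distribution<double> u(0.0, 1.0);
    double theta, phi;
    sample_beckmann_half_vector(0.25, u(rng), u(rng), &theta, &phi);
    std::printf("sampled half vector: theta=%f phi=%f\n", theta, phi);
    // The incoming direction wi is then wo reflected about this half vector,
    // and the half-vector pdf must be divided by 4 * dot(wi, h) to get pdf(wi).
    return 0;
}
```

Because the sampled directions already follow the lobe of the BRDF, far fewer samples land where the BRDF is nearly zero, which is why the importance-sampled render is so much cleaner at the same sample count.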

The following image was created by modifying the eta and k values using this website. I used the values for the gold-aluminum intermetallic, aka Purple Plague, to create this effect.

Purple Plague dragon.
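Those eta and k values feed the per-channel Fresnel term for an air-to-conductor interface, so swapping in the Purple Plague coefficients only changes two numbers per color channel. A sketch of one common approximation of that term (my code's exact form may differ slightly):

```cpp
#include <cmath>

// Approximate air-to-conductor Fresnel reflectance for one color channel,
// given that channel's eta (real index of refraction) and k (extinction
// coefficient) and the cosine of the incident angle. F = (Rs + Rp) / 2.
double fresnel_conductor(double cos_i, double eta, double k) {
    double cos2 = cos_i * cos_i;
    double ek2  = eta * eta + k * k;
    double rs = (ek2 - 2.0 * eta * cos_i + cos2) /
                (ek2 + 2.0 * eta * cos_i + cos2);
    double rp = (ek2 * cos2 - 2.0 * eta * cos_i + 1.0) /
                (ek2 * cos2 + 2.0 * eta * cos_i + 1.0);
    return 0.5 * (rs + rp);
}

// Evaluating this separately for the R, G, and B channels with a material's
// tabulated eta/k values is what tints the reflections, e.g. the purple cast
// of the Purple Plague dragon above.
```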

Part 3: Environment Map Lights

Up until now we have only been dealing with explicit light sources in the scene, but what if there is no such light source? What if our light source is infinite, like the sun shining over a field? In this section of the project I implemented environment map lights, which supply incident radiance from all directions on the sphere.

Here is the probability_debug.png image output by my implementation of the marginal and conditional distributions used to sample the environment map.

probability debug.
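Roughly speaking, the distribution behind that debug image comes from treating each pixel's luminance, weighted by sin(theta) to account for the equirectangular mapping, as an unnormalized probability, then precomputing a marginal CDF over rows and a conditional CDF within each row. A sketch under those assumptions, with illustrative names:

```cpp
#include <cmath>
#include <vector>

const double PI = 3.14159265358979323846;

// Build the 2D sampling distribution for an equirectangular environment map.
// lum is a w*h array of per-pixel luminance; outputs are the marginal CDF
// over rows and the conditional CDF of each column given its row.
void build_env_distribution(const std::vector<double>& lum, int w, int h,
                            std::vector<double>* marginal_cdf,   // size h
                            std::vector<double>* cond_cdf) {     // size w*h
    std::vector<double> pdf(w * h);
    double total = 0.0;
    for (int y = 0; y < h; ++y) {
        double sin_theta = std::sin(PI * (y + 0.5) / h);  // solid-angle weight
        for (int x = 0; x < w; ++x) {
            pdf[y * w + x] = lum[y * w + x] * sin_theta;
            total += pdf[y * w + x];
        }
    }
    marginal_cdf->assign(h, 0.0);
    cond_cdf->assign(w * h, 0.0);
    double row_acc = 0.0;
    for (int y = 0; y < h; ++y) {
        double row_sum = 0.0;
        for (int x = 0; x < w; ++x) row_sum += pdf[y * w + x];
        row_acc += row_sum / total;
        (*marginal_cdf)[y] = row_acc;                      // P(row <= y)
        double acc = 0.0;
        for (int x = 0; x < w; ++x) {                      // P(col <= x | row y)
            acc += row_sum > 0.0 ? pdf[y * w + x] / row_sum : 1.0 / w;
            (*cond_cdf)[y * w + x] = acc;
        }
    }
}

// Sampling then inverts the marginal CDF with one uniform number to pick a
// row, and that row's conditional CDF with a second number to pick a column.
```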

Here is how the bunny (without a microfacet material) looks with environment lighting applied. These images use 4 samples per pixel and 64 samples per light:

uniform hemisphere sampling.
importance sampling.

Although uniform hemisphere sampling usually produces more noise than importance sampling, here we do not really see that effect. Since the bunny has a diffuse material, it does not show off the reflection and color of the environment light on its surface very well, and both of these images look pretty similar. One thing to note is that the uniform sampling image is lighter than the importance sampling one, which could be due to fewer effective samples affecting the overall color.

The next two images show the same comparison between uniform and importance sampling, this time on a microfacet material:

uniform hemisphere sampling.
importance sampling.

There are a couple of subtle differences we can note. The uniform sample seems to have a bit more noise than importance sampling: looking at the bunny's head, the brownish highlight looks grainier with uniform sampling and smoother with importance sampling. Also, the bunny in the uniform image seems a bit brighter than it should be in terms of how bright the scene looks; it has not converged as completely as the importance-sampled one.

Part 4: Depth of Field


In this section of the project I simulated a thin lens, enabling a depth of field effect. The following 4 images form a focus stack demonstrating the differences between rendering the scene at different focal distances. All of these use a fixed aperture of 0.0883883.

Depth focus at 1.5
Depth focus at 1.7
Depth focus at 1.9
Depth focus at 2.3

The next sequence of 4 pictures demonstrates the difference between rendering images at different aperture sizes. They are all focused at a depth of 1.7.

Aperture 1/32
Aperture 1/16
Aperture 1/8
Aperture 1/2.8
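For reference, the thin-lens effect in all of the renders above comes down to sampling a point on the lens and re-aiming the ray so it still passes through the plane of focus. A rough camera-space sketch with illustrative names (not necessarily the project's exact interface):

```cpp
#include <cmath>

const double PI = 3.14159265358979323846;

struct Vec3 { double x, y, z; };

// Generate a depth-of-field ray in camera space. dir is the normalized
// pinhole ray direction for this pixel (pointing down -z); rnd1/rnd2 are
// uniform random numbers; lens_radius and focal_distance control the effect.
void thin_lens_ray(Vec3 dir, double lens_radius, double focal_distance,
                   double rnd1, double rnd2, Vec3* origin, Vec3* out_dir) {
    // Uniformly sample a point on the circular lens.
    double r = lens_radius * std::sqrt(rnd1);
    double theta = 2.0 * PI * rnd2;
    *origin = Vec3{r * std::cos(theta), r * std::sin(theta), 0.0};

    // Find where the original pinhole ray crosses the plane of focus
    // (z = -focal_distance), then aim the new ray at that point.
    double t = -focal_distance / dir.z;
    Vec3 focus{t * dir.x, t * dir.y, -focal_distance};
    Vec3 d{focus.x - origin->x, focus.y - origin->y, focus.z - origin->z};
    double len = std::sqrt(d.x * d.x + d.y * d.y + d.z * d.z);
    *out_dir = Vec3{d.x / len, d.y / len, d.z / len};
}

// Points exactly on the focal plane map to the same image location for every
// lens sample, so they stay sharp; everything else blurs more as the lens
// radius (aperture) grows, which is the effect shown in the aperture stack.
```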

Part 5: Shading

Shader program overview

OpenGL has two shader types here: vertex shaders and fragment shaders. A vertex shader applies transforms to vertices, modifying geometric properties such as the position and normal vectors, and writes the final position of each vertex to gl_Position. It also writes out other varying values that get used in the fragment shader. The fragment shader processes the fragments produced by rasterization. In the end what we want is a shader program that compiles and links the vertex and fragment shaders together, so that the outputs of the vertex shader become the inputs of the fragment shader.
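On the CPU side, building that shader program is a handful of OpenGL calls; roughly like the following (error checking omitted, and the include depends on whatever GL loader the project already uses):

```cpp
#include <GL/gl.h>   // or the project's loader header (glad, GLEW, ...)

// Compile one shader stage from source.
GLuint compile_shader(GLenum type, const char* src) {
    GLuint shader = glCreateShader(type);       // GL_VERTEX_SHADER / GL_FRAGMENT_SHADER
    glShaderSource(shader, 1, &src, nullptr);
    glCompileShader(shader);                    // real code should check GL_COMPILE_STATUS
    return shader;
}

// Link the vertex and fragment stages into one program; once linked, the
// vertex shader's varyings become the fragment shader's inputs.
GLuint link_program(const char* vert_src, const char* frag_src) {
    GLuint vs = compile_shader(GL_VERTEX_SHADER, vert_src);
    GLuint fs = compile_shader(GL_FRAGMENT_SHADER, frag_src);
    GLuint program = glCreateProgram();
    glAttachShader(program, vs);
    glAttachShader(program, fs);
    glLinkProgram(program);                     // real code should check GL_LINK_STATUS
    glDeleteShader(vs);                         // safe once attached and linked
    glDeleteShader(fs);
    return program;
}
```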

Geometry enters the vertex shader, which runs for each vertex in parallel. The varyings written by the vertex shader, fPosition and fNormal, are interpolated by the GPU and sent into the fragment shader. After this interpolation, the fragment shader has a position for each fragment (or pixel, depending on how you like to think about it) and computes the lighting equation for that fragment. Materials, in this view, are part of the lighting equation: the "Diffuse" material can be thought of as the color of the object under that particular type of lighting.

Blinn-Phong shading model

The Blinn-Phong shading model is implemented in the fragment shader since it is a lighting computation. It is the sum of ambient, diffuse, and specular lighting. The ambient component is the lighting that does not come directly from any particular light source. The diffuse component reflects the direct rays of light that hit the surface evenly in all directions, creating a matte effect. The specular component is the reflection of light rays on the surface in the general direction of the camera, creating highlights on the surface of the material. The following images show these three components separately, and then finally all three of them together.

Ambient only.
Diffuse only.
Specular only.
Complete Blinn-Phong Shading.
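Numerically, the complete image above is just the sum of the three terms evaluated per fragment. Here is a CPU-side sketch of the same math with an assumed minimal vector type; ka, kd, ks, the light intensity I, and the exponent p are the tunable coefficients:

```cpp
#include <algorithm>
#include <cmath>

struct Vec3 {
    double x, y, z;
    Vec3 operator+(Vec3 o) const { return {x + o.x, y + o.y, z + o.z}; }
    Vec3 operator*(double s) const { return {x * s, y * s, z * s}; }
};
double dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
Vec3 normalize(Vec3 v) { double l = std::sqrt(dot(v, v)); return v * (1.0 / l); }

// Blinn-Phong at one shaded point. n = surface normal, l = direction to the
// light, v = direction to the camera (all normalized), r = distance to light.
Vec3 blinn_phong(Vec3 n, Vec3 l, Vec3 v, double r,
                 Vec3 ka, Vec3 Ia,          // ambient coefficient and ambient light
                 Vec3 kd, Vec3 ks,          // diffuse and specular coefficients
                 Vec3 I, double p) {        // light intensity and shininess exponent
    Vec3 h = normalize(l + v);              // the Blinn-Phong half vector
    double falloff = 1.0 / (r * r);         // inverse-square light falloff
    double diff = std::max(0.0, dot(n, l));
    double spec = std::pow(std::max(0.0, dot(n, h)), p);
    Vec3 ambient{ka.x * Ia.x, ka.y * Ia.y, ka.z * Ia.z};
    Vec3 diffuse{kd.x * I.x, kd.y * I.y, kd.z * I.z};
    Vec3 specular{ks.x * I.x, ks.y * I.y, ks.z * I.z};
    return ambient + diffuse * (falloff * diff) + specular * (falloff * spec);
}
```

Zeroing out two of the three coefficient sets gives exactly the isolated ambient-only, diffuse-only, and specular-only renders shown above.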

Texture Mapping

Moon Texture map.

Bump mapping vs Displacement mapping

Bump Mapping
Displacement Mapping

Bump mapping does a very good job of picking out where lines and contours are and shading them as "bumps" so they look like they have depth, without changing the geometry. Displacement mapping takes a more interesting approach: it actually displaces the vertices according to the height map, giving the surface even more texture. In the images below I change the mesh coarseness by modifying the vertical and horizontal tessellation components in the renderer file. This makes the textures look smoother and less volumetric, and somewhat distorts what the texture originally looked like.

Bump Mapping
Displacement Mapping.
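The piece the two techniques share is turning a height map into a perturbation. Conceptually, bump mapping builds a new tangent-space normal from local height differences, while displacement mapping also moves the vertex along its normal; a sketch of that idea with illustrative names (the real versions live in the shaders):

```cpp
#include <cmath>

struct Vec3 { double x, y, z; };

// h(u, v) samples the height map in [0, 1]; du/dv are one texel in each
// direction; height_scale is the user-tweakable bump/displacement strength.
typedef double (*HeightFn)(double u, double v);

// Bump mapping: build a perturbed normal in tangent space from the local
// height differences; a real shader then rotates it into world space with
// the tangent-bitangent-normal (TBN) matrix.
Vec3 bump_normal_tangent_space(HeightFn h, double u, double v,
                               double du, double dv, double height_scale) {
    double dU = (h(u + du, v) - h(u, v)) * height_scale;
    double dV = (h(u, v + dv) - h(u, v)) * height_scale;
    Vec3 n{-dU, -dV, 1.0};
    double len = std::sqrt(n.x * n.x + n.y * n.y + n.z * n.z);
    return Vec3{n.x / len, n.y / len, n.z / len};
}

// Displacement mapping: in addition to the perturbed normal, actually move
// the vertex along its original normal by the sampled height.
Vec3 displace_vertex(Vec3 position, Vec3 normal, HeightFn h,
                     double u, double v, double height_scale) {
    double offset = h(u, v) * height_scale;
    return Vec3{position.x + normal.x * offset,
                position.y + normal.y * offset,
                position.z + normal.z * offset};
}
```

This also explains the coarseness experiment: with fewer vertices to displace, the second function has less geometry to work with, so the displaced surface flattens out and loses detail.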

Custom shader!

Purple Plasma

For this custom shader I first loaded my texture from a file into my renderer (i.e. on the CPU). Next, I created a GL texture object from the image and uploaded the image data to the GPU, then linked the shader's sampler to that GPU memory. In the vertex shader I simply map my texture to the varying fUv coordinates and write gl_Position with an offset calculated from a mix of sine and cosine functions. Then, in the fragment shader I sample the sampler2D texture and pass the resulting vec4 to gl_FragColor, which applies the texture to the color of my geometry. This custom shader uses most of the example code except for the application of the texture to the geometry itself.
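The texture setup described above follows the standard OpenGL pattern. A sketch of roughly what it looks like, assuming the image is already decoded into 8-bit RGBA pixels and that the sampler uniform is named "myTexture" (a placeholder name, not necessarily the one in my shader):

```cpp
#include <GL/gl.h>   // or the project's GL loader header

// Upload an already-decoded RGBA image to the GPU and bind it to the
// sampler2D uniform named "myTexture" in the linked shader program.
GLuint upload_texture(GLuint program, const unsigned char* rgba,
                      int width, int height) {
    GLuint tex;
    glGenTextures(1, &tex);
    glActiveTexture(GL_TEXTURE0);                     // use texture unit 0
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, rgba);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

    // Point the shader's sampler at texture unit 0.
    glUseProgram(program);
    glUniform1i(glGetUniformLocation(program, "myTexture"), 0);
    return tex;
}
```

Once the sampler is bound, the fragment shader's texture lookup at the interpolated fUv coordinates is all that is needed to paint the image onto the animated geometry.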

You can access my webGL node application here.