Project 3 Robbie Cassidy #15

Open · wants to merge 18 commits into master

Changes from all commits
Binary file added DOF5000.bmp
Binary file not shown.
Binary file added FirstRefraction.bmp
Binary file not shown.
189 changes: 21 additions & 168 deletions README.md
Fall 2014

Due Wed, 10/8 (submit without penalty until Sun, 10/12)

## INTRODUCTION
In this project, you will implement a CUDA-based pathtracer capable of
generating pathtraced images extremely quickly. Building a pathtracer can be
viewed as a generalization of building a raytracer, so for those of you who
have taken CIS460/560, the basic concept should not be new. For those who have
not, raytracing is a technique for generating images by tracing rays of light
through pixels in an image plane out into a scene and following the way the
rays bounce off of and interact with objects in the scene. More information
can be found here: http://en.wikipedia.org/wiki/Ray_tracing_(graphics).
Pathtracing generalizes this technique by considering more than just the
contribution of direct lighting to a surface.

Since in this class we are concerned with generating actual images and less so
with mundane tasks like file I/O, this project includes basecode for loading a
scene description file format, described below, and various other pieces that
make up the render "harness" handling everything up to the rendering itself.
The core renderer is left for you to implement. Finally, note that while this
basecode is meant to serve as a strong starting point for a CUDA pathtracer,
you are not required to use it, and you may change any part of the basecode
specification as you please, so long as the final rendered result is correct.

## CONTENTS
The Project3 root directory contains the following subdirectories:

* src/ contains the source code for the project. Both the Windows Visual Studio
solution and the OSX and Linux makefiles reference this folder for all
source; the base source code compiles on Linux, OSX, and Windows without
modification. If you are building on OSX, be sure to uncomment lines 4 & 5 of
the CMakeLists.txt so that CMake builds against clang.
* data/scenes/ contains an example scene description file.
* renders/ contains an example render of the given example scene file.
* windows/ contains a Windows Visual Studio 2010 project and all dependencies
needed for building and running on Windows 7. If you would like to create a
Visual Studio 2012 or 2013 project, static GLFW libraries are available in
external/bin/GLFW (Visual Studio 2012 uses msvc110, and Visual Studio 2013
uses msvc120).
* external/ contains all the headers, static libraries, and built binaries for
third-party libraries (i.e. glm, GLEW, GLFW) that we use for windowing and
OpenGL extensions.

## RUNNING THE CODE
The main function requires a scene description file (one is provided in
data/scenes). It reads the scene file from a command-line argument of the form
`scene=[sceneFileName]`, for example `scene=data/scenes/sampleScene.txt`.

If you are using Visual Studio, you can set this under Debugging > Command
Arguments in the project properties.
* Make sure to look at sampleScene.txt and pay attention to DOF and APERATURE.

## REQUIREMENTS
In this project, you are given code for:

* Loading, reading, and storing the scene description format
* Example functions that can run on both the CPU and GPU for generating random
numbers, spherical intersection testing, and surface point sampling on cubes
* A class for handling image operations and saving images
* Working code for CUDA-GL interop

I have so far implemented the following features:

* Raycasting from a camera into a scene through a pixel grid
* Diffuse surfaces
* Perfect specular reflective surfaces
* Cube intersection testing
* Sphere surface point sampling
* Stream compaction optimization

You are also required to implement at least 2 of the following features:

* Texture mapping
* Bump mapping
* Depth of field
* Refraction, i.e. glass
* OBJ Mesh loading and rendering
* Interactive camera
* Motion blur
* Subsurface scattering

The 'extra features' list is not comprehensive. If you have a particular feature
you would like to implement (e.g. acceleration structures), please contact us
first!

For each 'extra feature' you must provide the following analysis:
* an overview write-up of the feature
* the performance impact of the feature
* if you did something to accelerate the feature, an explanation of why you
took that approach
* a comparison of your GPU version to a CPU version of the feature (you do NOT
need to implement a CPU version)
* how this feature could be further optimized (again, not necessary to
implement; just give a roadmap of further optimization and why you believe
that is the right next step)

## BASE CODE TOUR
You will be working in three files: raytraceKernel.cu, intersections.h, and
interactions.h. Within these files, areas that you need to complete are marked
with a TODO comment. Areas that are useful for optional features and serve as
hints are marked with TODO (Optional). Functions that are useful for reference
are marked with the comment LOOK.
## ANALYSIS: DEPTH OF FIELD
![DOF5000.bmp](https://raw.githubusercontent.com/RTCassidy1/Project3-Pathtracer/master/DOF5000.bmp)
I incorporated depth of field by adding two parameters to the camera:
* the DOF parameter is the distance from the eye to the focal point (it would
be more aptly named focal length; note to self for future refactoring)
* the APERATURE parameter is the size of the aperture

The feature works as follows: when casting the first ray, after selecting a
jittered position within the pixel, I place the focal point along this ray at
distance DOF. I then randomly jitter the ray origin within the range of the
aperture and re-aim the ray at the focal point (so a larger aperture = a
shallower depth of field = more blur); see the sketch below.
* Adding this feature introduced a few more calculations into the initial
raycast, which slowed performance down slightly. I was overly cautious in
normalizing my direction vectors, so it's possible I could remove a few of
those normalizations for additional speedup. I also probably don't need to
jitter within the pixel anymore, but I kept that step so I still get
anti-aliasing when the aperture is 0.
* I didn't do anything in particular to speed this feature up. In raytracing,
people often have to supersample the pixel to achieve this effect, but since
we're already taking thousands of samples per pixel, I merely had to jitter
within the size of the aperture.
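
Below is a minimal sketch of the jitter described above, under stated assumptions: the `Ray` struct, the helper name, and the aperture-to-world scale factor are illustrative stand-ins rather than the basecode's identifiers, and the jitter uses a square in world x/y (rather than a disk in the camera plane) for brevity.

```cuda
#include <glm/glm.hpp>
#include <thrust/random.h>

// stand-in for the basecode's ray struct
struct Ray { glm::vec3 origin; glm::vec3 direction; };

// Given a camera ray that already passes through a jittered pixel, bend it so
// that geometry at distance `dof` stays sharp while everything else blurs.
__host__ __device__ Ray applyDepthOfField(Ray r, float dof, float aperture,
                                          thrust::default_random_engine& rng) {
  thrust::uniform_real_distribution<float> u(-0.5f, 0.5f);
  // the focal point lies along the original ray at distance DOF
  glm::vec3 focalPoint = r.origin + dof * glm::normalize(r.direction);
  // jitter the ray origin within the aperture; the factor converting
  // scene-file units to world units is an assumption
  float scale = aperture * 0.001f;
  r.origin += glm::vec3(u(rng) * scale, u(rng) * scale, 0.0f);
  // re-aim at the focal point: a larger aperture spreads the origins more,
  // so off-focal-plane geometry blurs more
  r.direction = glm::normalize(focalPoint - r.origin);
  return r;
}
```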

* raytraceKernel.cu contains the core raytracing CUDA kernel. You will need to
complete:
  * cudaRaytraceCore() handles kernel launches and memory management; this
    function already contains example code for launching kernels, transferring
    geometry and cameras from the host to the device, and transferring image
    buffers from the host to the device and back. You will have to complete
    this function to support passing materials and lights to CUDA.
  * raycastFromCameraKernel() is a function that you need to implement. Once
    correctly implemented, it should handle camera raycasting; see the sketch
    after this list.
  * raytraceRay() is the core raytracing CUDA kernel; all of your pathtracing
    logic should be implemented in this CUDA kernel. raytraceRay() should take
    in a camera, image buffer, geometry, materials, and lights, and should
    trace a ray through the scene and write the resultant color to a pixel in
    the image buffer.
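
As a reference for the kind of logic raycastFromCameraKernel() needs, here is a minimal pinhole-camera sketch; the parameterization (fov given as half-angles in degrees, pixel y increasing downward) is an assumption rather than the basecode's contract.

```cuda
#include <glm/glm.hpp>

// stand-in for the basecode's ray struct
struct Ray { glm::vec3 origin; glm::vec3 direction; };

// Map pixel (x, y) on a resolution.x by resolution.y image plane to a
// world-space ray from the eye through that pixel.
__device__ Ray raycastFromCamera(glm::vec3 eye, glm::vec3 view, glm::vec3 up,
                                 glm::vec2 fov, float x, float y,
                                 glm::vec2 resolution) {
  glm::vec3 right = glm::normalize(glm::cross(view, up));
  glm::vec3 trueUp = glm::normalize(glm::cross(right, view));
  glm::vec3 center = eye + glm::normalize(view);      // image plane center
  glm::vec3 h = right * tanf(glm::radians(fov.x));    // half-width vector
  glm::vec3 v = trueUp * tanf(glm::radians(fov.y));   // half-height vector
  float sx = x / (resolution.x - 1.0f);               // 0..1 across the image
  float sy = y / (resolution.y - 1.0f);               // 0..1 down the image
  glm::vec3 point = center + (2.0f * sx - 1.0f) * h + (1.0f - 2.0f * sy) * v;
  Ray r;
  r.origin = eye;
  r.direction = glm::normalize(point - eye);
  return r;
}
```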
## ANALYSIS: REFRACTION
![FirstRefraction.bmp](https://raw.githubusercontent.com/RTCassidy1/Project3-Pathtracer/master/FirstRefraction.bmp)
My BSDF is pretty simple; I would like to go through and make it more modular
when I have a chance to refactor. Instead of using the reflective and
refractive markers as boolean flags, I use them as floats, each representing
the fraction of photons hitting the surface that will reflect or refract
(transmit). If a photon neither reflects nor refracts, it is treated as
diffuse.
* My BSDF accepts two random numbers in [0, 1] as parameters. I use one to
determine whether I check the reflectance or the refractance threshold first,
then use the other to decide how to treat the material: if it is less than the
threshold for reflectance (or refractance), I treat the hit as a reflection
(refraction); if it is above the threshold, it falls through to the next test,
and if it fails every test it is treated as diffuse. A sketch follows below.
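
A minimal sketch of that threshold logic, assuming `reflProb` and `refrProb` carry the scene file's REFL and REFR floats; the enum and helper name are hypothetical.

```cuda
enum Interaction { REFLECT, REFRACT, DIFFUSE };

// xi1 is tested against the cumulative thresholds; xi2 only decides which
// threshold is checked first, as described above.
__host__ __device__ Interaction chooseInteraction(float reflProb, float refrProb,
                                                  float xi1, float xi2) {
  if (xi2 < 0.5f) {                                 // check reflectance first
    if (xi1 < reflProb)            return REFLECT;
    if (xi1 < reflProb + refrProb) return REFRACT;
  } else {                                          // check refractance first
    if (xi1 < refrProb)            return REFRACT;
    if (xi1 < refrProb + reflProb) return REFLECT;
  }
  return DIFFUSE;                                   // fell through every test
}
```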

* intersections.h contains functions for geometry intersection testing and
point generation. You will need to complete the following (sketches of both
appear below):
  * boxIntersectionTest(), which takes in a box and a ray and performs an
    intersection test. This function should work in the same way as
    sphereIntersectionTest().
  * getRandomPointOnSphere(), which takes in a sphere and returns a random
    point on the surface of the sphere with an even probability distribution.
    This function should work in the same way as getRandomPointOnCube(). You
    can (although do not necessarily have to) use this to generate points on a
    sphere to use as point lights, or you can use it for area lighting.
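
Two minimal sketches of the standard constructions these functions call for; the signatures and the unit-sphere/unit-cube conventions are assumptions, not the basecode's exact contracts.

```cuda
#include <glm/glm.hpp>
#include <math.h>

#define TWO_PI 6.2831853f

// Uniform point on the surface of a unit sphere: a uniform z in [-1,1] plus a
// uniform azimuth yields an even distribution over the surface.
__host__ __device__ glm::vec3 randomPointOnUnitSphere(float xi1, float xi2) {
  float z = 1.0f - 2.0f * xi1;                 // cos(theta), uniform in [-1,1]
  float r = sqrtf(fmaxf(0.0f, 1.0f - z * z));  // sin(theta)
  float phi = TWO_PI * xi2;
  return glm::vec3(r * cosf(phi), r * sinf(phi), z);
}

// Ray-box slab test against a unit cube centered at the origin; transform the
// ray into the box's local space first, analogous to the sphere test.
__host__ __device__ float unitBoxIntersection(glm::vec3 ro, glm::vec3 rd) {
  float tmin = -1e38f, tmax = 1e38f;
  for (int i = 0; i < 3; ++i) {
    float inv = 1.0f / rd[i];          // IEEE infinities handle parallel rays
    float t0 = (-0.5f - ro[i]) * inv;
    float t1 = ( 0.5f - ro[i]) * inv;
    if (t0 > t1) { float tmp = t0; t0 = t1; t1 = tmp; }
    tmin = fmaxf(tmin, t0);
    tmax = fminf(tmax, t1);
  }
  if (tmax < tmin || tmax < 0.0f) return -1.0f;  // miss, or box behind the ray
  return (tmin >= 0.0f) ? tmin : tmax;           // nearest hit distance
}
```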

* interactions.h contains functions for ray-object interactions that define how
rays behave upon hitting materials and objects. You will need to complete the
following (a sketch of the cosine-weighted construction appears below):
  * getRandomDirectionInSphere(), which generates a random direction in a
    sphere with uniform probability. This function works in a fashion similar
    to that of calculateRandomDirectionInHemisphere(), which generates a
    random cosine-weighted direction in a hemisphere.
  * calculateBSDF(), which takes in an incoming ray, normal, material, and
    other information, and returns an outgoing ray. You can either implement
    this function for ray-surface interactions, or you can replace it with your
    own function(s).
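
For reference, the standard cosine-weighted hemisphere construction that calculateRandomDirectionInHemisphere is described as using looks roughly like this; it is a sketch of the common technique, not the basecode's implementation.

```cuda
#include <glm/glm.hpp>
#include <math.h>

// Cosine-weighted direction about `normal`: taking sqrt(xi1) as cos(theta)
// biases samples toward the normal in proportion to the cosine factor in the
// rendering equation.
__host__ __device__ glm::vec3 cosineWeightedDirection(glm::vec3 normal,
                                                      float xi1, float xi2) {
  float up = sqrtf(xi1);              // cos(theta)
  float over = sqrtf(1.0f - xi1);     // sin(theta)
  float around = 6.2831853f * xi2;    // uniform azimuth
  // build an orthonormal basis around the normal
  glm::vec3 axis = (fabsf(normal.x) < 0.5f) ? glm::vec3(1, 0, 0)
                                            : glm::vec3(0, 1, 0);
  glm::vec3 t = glm::normalize(glm::cross(normal, axis));
  glm::vec3 b = glm::cross(normal, t);
  return up * normal + over * (cosf(around) * t + sinf(around) * b);
}
```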

You will also want to familiarize yourself with:

* sceneStructs.h, which contains definitions for how geometry, materials,
lights, cameras, and animation frames are stored in the renderer.
* utilities.h, which serves as a kitchen-sink of useful functions

## NOTES ON GLM
This project uses GLM, the GL Math library, for linear algebra. You need to
know two important points about how GLM is used in this project:

* In this project, indices in GLM vectors (such as vec3 and vec4) are accessed
via swizzling: instead of v[0], v.x is used; instead of v[1], v.y; and so
forth.
* GLM Matrix operations work fine on NVIDIA Fermi cards and later, but
pre-Fermi cards do not play nice with GLM matrices. As such, in this project,
GLM matrices are replaced with a custom matrix struct, called a cudaMat4, found
in cudaMat4.h. A custom function for multiplying glm::vec4s and cudaMat4s is
provided as multiplyMV() in intersections.h.
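
For illustration, a multiply of that shape plausibly looks like the sketch below, assuming cudaMat4 stores four row vectors named x, y, z, w; check cudaMat4.h for the actual layout, as this is not the basecode's exact code.

```cuda
#include <glm/glm.hpp>

// assumed layout: four row vectors (see cudaMat4.h for the real definition)
struct cudaMat4 { glm::vec4 x, y, z, w; };

// Row-by-vector dot products; the w row is dropped so callers get the
// 3-component point or direction they want.
__host__ __device__ glm::vec3 multiplyMVSketch(const cudaMat4& m, glm::vec4 v) {
  return glm::vec3(glm::dot(m.x, v), glm::dot(m.y, v), glm::dot(m.z, v));
}
```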
## ANALYSIS: STREAM COMPACTION
I also implemented stream compaction in an effort to speed up my renders.
Unfortunately, it has so far offered little to no performance increase at
depth 5 or 10. I think this is because my implementation is not efficient
enough and has too much memory-access overhead. I used thrust to scan an array
for retired threads, but then implemented my own function to compact the
rayStates. I think with a little more research into Thrust I could store my
rayState array in a thrust vector and use built-in functions to prune it on
each depth iteration; a sketch of that approach follows below.
* I also planned to look at shared memory, but haven't yet had the chance.
There are a lot of parallel streams doing ray-intersection tests against the
same geometry, so I speculate there could be an efficiency gain from moving
the geometry (and possibly the materials) into shared memory. This has an
overhead to actually move them, and the sheer number of threads may already be
hiding any latency in the accesses, but I haven't had a chance to investigate.
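
A minimal sketch of that pruning idea using Thrust's built-in compaction directly on the device array; RayState and its `alive` flag are stand-ins for my actual structs, not the code as written.

```cuda
#include <glm/glm.hpp>
#include <thrust/device_ptr.h>
#include <thrust/remove.h>

struct RayState {
  glm::vec3 origin, direction, color;
  int pixelIndex;   // where to accumulate this ray's contribution
  bool alive;       // cleared when the ray terminates
};

// predicate marking retired rays for removal
struct IsDead {
  __host__ __device__ bool operator()(const RayState& r) const {
    return !r.alive;
  }
};

// After each bounce, drop retired rays so subsequent kernels launch fewer
// threads; returns the new live-ray count.
int compactRays(RayState* devRayStates, int numRays) {
  thrust::device_ptr<RayState> first(devRayStates);
  thrust::device_ptr<RayState> last =
      thrust::remove_if(first, first + numRays, IsDead());
  return static_cast<int>(last - first);
}
```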

## SCENE FORMAT
This project uses a custom scene description format.
Materials are defined in the following fashion:
* RGB (float r) (float g) (float b) //diffuse color
* SPECEX (float specex) //specular exponent
* SPECRGB (float r) (float g) (float b) //specular color
* REFL (float refl) //how reflective is it? 0 means diffuse
* REFR (float refr) //how transparent is it? 0 means opaque
* REFRIOR (float ior) //index of refraction
for Fresnel effects
* SCATTER (float scatter) //scatter flag, 0 for
Cameras are defined in the following fashion:
* VIEW (float x) (float y) (float z) //camera's view
direction
* UP (float x) (float y) (float z) //camera's up vector
* DOF (float x) //distance from the eye to the focal point
* APERATURE (float x) //the size of the aperture (should be large)

Objects are defined in the following fashion:
* OBJECT (object ID) //object header
SCALE .01 10 10

Check the Google group for some sample .obj files of varying complexity.

## THIRD PARTY CODE POLICY
* Use of any third-party code must be approved by asking on our Google Group.
If it is approved, all students are welcome to use it. Generally, we approve
use of third-party code that is not a core part of the project. For example,
for the ray tracer, we would approve using a third-party library for loading
models, but would not approve copying and pasting a CUDA function for doing
refraction.
* Third-party code must be credited in README.md.
* Using third-party code without approval, including using another student's
code, is an academic integrity violation, and will result in you receiving an
F for the semester.

## SELF-GRADING
* On the submission date, email your grade, on a scale of 0 to 100, to Harmony,
[email protected], with a one-paragraph explanation. Be concise and
realistic. Recall that we reserve 30 points as a sanity check to adjust your
grade. Your actual grade will be (0.7 * your grade) + (0.3 * our grade). We
hope to only use this in extreme cases when your grade does not realistically
reflect your work - it is either too high or too low. In most cases, we plan
to give you the exact grade you suggest.
* Projects are not weighted evenly, e.g., Project 0 doesn't count as much as
the path tracer. We will determine the weighting at the end of the semester
based on the size of each project.

## SUBMISSION
Please change the README to reflect the answers to the questions we have posed
44 changes: 33 additions & 11 deletions data/scenes/sampleScene.txt
MATERIAL 1 //red diffuse
RGB .63 .06 .04
SPECEX 0
SPECRGB 1 1 1
REFL .2
REFR 0
REFRIOR 0
SCATTER 0
RSCTCOEFF 0
EMITTANCE 0

MATERIAL 3 //red glossy
RGB .63 .16 .04
SPECEX 0
SPECRGB 1 1 1
REFL .1
REFR .9
REFRIOR 2
SCATTER 0
ABSCOEFF 0 0 0
MATERIAL 4 //white glossy
RGB 1 1 1
SPECEX 0
SPECRGB 1 1 1
REFL .3
REFR 0
REFRIOR 2
SCATTER 0
MATERIAL 6 //green glossy
RGB .15 .48 .09
SPECEX 0
SPECRGB 1 1 1
REFL .1
REFR 0
REFRIOR 2.6
SCATTER 0
RSCTCOEFF 0
EMITTANCE 1

MATERIAL 8 //light
RGB 1 1 .5
SPECEX 0
SPECRGB 0 0 0
REFL 0
REFR 0
REFRIOR 0
SCATTER 0
ABSCOEFF 0 0 0
RSCTCOEFF 0
EMITTANCE 5

MATERIAL 9 //light
RGB .5 1 1
SPECEX 0
SPECRGB 0 0 0
REFL 0
frame 0
EYE 0 4.5 12
VIEW 0 0 -1
UP 0 1 0
DOF 12
APERATURE 1500

OBJECT 0
cube
OBJECT 5
sphere
material 4
frame 0
TRANS 0 5 0
ROTAT 0 180 0
SCALE 3 3 3

OBJECT 6
sphere
material 3
frame 0
TRANS 2 1.5 -3
ROTAT 0 180 0
SCALE 2.5 2.5 2.5

OBJECT 7
sphere
material 6
frame 0
TRANS -2 1.5 -3
ROTAT 0 180 0
SCALE 3 3 3

Expand All @@ -187,4 +201,12 @@ material 8
frame 0
TRANS 0 10 0
ROTAT 0 0 90
SCALE .3 6 6

OBJECT 9
cube
material 9
frame 0
TRANS 0 4.5 12
ROTAT 0 0 0
SCALE 4 4 .3