diff --git a/Instruction.md b/Instruction.md new file mode 100644 index 0000000..da4c7e1 --- /dev/null +++ b/Instruction.md @@ -0,0 +1,224 @@ +------------------------------------------------------------------------------ +CIS565: Project 6 -- Deferred Shader +------------------------------------------------------------------------------- +Fall 2014 +------------------------------------------------------------------------------- +Due Wed, 11/12/2014 at Noon +------------------------------------------------------------------------------- + +------------------------------------------------------------------------------- +NOTE: +------------------------------------------------------------------------------- +This project requires any graphics card with support for a modern OpenGL +pipeline. Any AMD, NVIDIA, or Intel card from the past few years should work +fine, and every machine in the SIG Lab and Moore 100 is capable of running +this project. + +This project also requires a WebGL-capable browser. The project is known to +have issues with Chrome on Windows, but Firefox seems to run it fine. + +------------------------------------------------------------------------------- +INTRODUCTION: +------------------------------------------------------------------------------- + +In this project, you will be introduced to the basics of deferred shading. You will write GLSL and OpenGL code to perform various tasks in a deferred lighting pipeline, such as creating and writing to a G-Buffer. + +------------------------------------------------------------------------------- +CONTENTS: +------------------------------------------------------------------------------- +The project root directory contains the following subdirectories: + +* js/ contains the necessary JavaScript files, including external libraries. +* assets/ contains the textures that will be used in the second half of the + assignment. +* resources/ contains the screenshots found in this readme file.
+* This Readme file, edited as described below in the README section. + +------------------------------------------------------------------------------- +OVERVIEW: +------------------------------------------------------------------------------- +The deferred shader you will write has the following stages: + +Stage 1 renders the scene geometry to the G-Buffer +* pass.vert +* pass.frag + +Stage 2 renders the lighting passes and accumulates to the P-Buffer +* quad.vert +* diffuse.frag +* diagnostic.frag + +Stage 3 renders the post-processing +* post.vert +* post.frag + +The keyboard controls are as follows: +WASDRF - Movement (along with the arrow keys) +* W - Zoom in +* S - Zoom out +* A - Left +* D - Right +* R - Up +* F - Down +* ^ - Up +* v - Down +* < - Left +* > - Right +* 1 - World Space Position +* 2 - Normals +* 3 - Color +* 4 - Depth +* 0 - Full deferred pipeline + +There are also mouse controls for camera rotation. + +------------------------------------------------------------------------------- +REQUIREMENTS: +------------------------------------------------------------------------------- + +In this project, you are given code for: +* Loading .obj files +* Deferred shading pipeline +* G-Buffer pass + +You are required to implement: +* Either of the following effects + * Bloom + * "Toon" Shading (with basic silhouetting) +* Screen Space Ambient Occlusion +* Diffuse and Blinn-Phong shading + +**NOTE**: Implementing separable convolution will require another link in your pipeline and will count as an extra feature if you compare its performance against a standard one-pass 2D convolution. The overhead of rendering to and reading from an extra texture _may_ offset the saved computation for smaller 2D kernels.
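To see why the separable version can win, note that a separable 2D kernel is the outer product of two 1D kernels, so a horizontal pass followed by a vertical pass produces the same image as a single 2D pass while sampling 2k instead of k² texels per pixel. A minimal JavaScript sketch of this equivalence (plain arrays stand in for textures; the 3-tap kernel and tiny image are illustrative, not part of the base code):

```javascript
// General 2D convolution over a grid, truncating taps that fall off the edge.
function conv2d(img, kernel) {
  const h = img.length, w = img[0].length;
  const kh = kernel.length, kw = kernel[0].length;
  const ry = (kh - 1) / 2, rx = (kw - 1) / 2;
  const out = Array.from({ length: h }, () => new Array(w).fill(0));
  for (let y = 0; y < h; y++)
    for (let x = 0; x < w; x++)
      for (let i = 0; i < kh; i++)
        for (let j = 0; j < kw; j++) {
          const yy = y + i - ry, xx = x + j - rx;
          if (yy >= 0 && yy < h && xx >= 0 && xx < w)
            out[y][x] += img[yy][xx] * kernel[i][j];
        }
  return out;
}

// Two 1D passes: horizontal (1 x k), then vertical (k x 1).
function convSeparable(img, k1d) {
  return conv2d(conv2d(img, [k1d]), k1d.map(v => [v]));
}

const k1d = [0.25, 0.5, 0.25];                  // 1D binomial kernel
const k2d = k1d.map(a => k1d.map(b => a * b));  // its outer product: the full 2D kernel

const img = [
  [0, 0, 0, 0],
  [0, 4, 0, 0],
  [0, 0, 0, 0],
];
const onePass = conv2d(img, k2d);        // 9 taps per pixel
const twoPass = convSeparable(img, k1d); // 3 + 3 taps per pixel
```

Both results are identical; only the per-pixel tap count differs (k² vs 2k), which is exactly the saving described above, minus the cost of the extra render target.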
+ +You must implement two of the following extras: +* The effect you did not choose above +* Compare performance to a normal forward renderer with + * No optimizations + * Coarse sort geometry front-to-back for early-z + * Z-prepass for early-z +* Optimize g-buffer format, e.g., pack things together, quantize, reconstruct z from normal x and y (because it is normalized), etc. + * Must be accompanied by a performance analysis to count +* Additional lighting and pre/post processing effects! (Please email first; if they are good, you may add multiple.) + +------------------------------------------------------------------------------- +RUNNING THE CODE: +------------------------------------------------------------------------------- + +Since the code attempts to access files that are local to your computer, you +will either need to: + +* Run your browser under modified security settings, or +* Create a simple local server that serves the files + + +FIREFOX: change ``strict_origin_policy`` to false in about:config + +CHROME: run with the following argument: `--allow-file-access-from-files` + +(You can do this on OS X by running Chrome from /Applications/Google +Chrome/Contents/MacOS with `open -a "Google Chrome" --args +--allow-file-access-from-files`) + +* To check if you have set the flag properly, you can open chrome://version and + check under the flags + +RUNNING A SIMPLE SERVER: + +If you have Python installed, you can run a simple HTTP server off your +machine from the root directory of this repository with the following command: + +`python -m SimpleHTTPServer` + +------------------------------------------------------------------------------- +RESOURCES: +------------------------------------------------------------------------------- + +The following articles and resources have been chosen to help give you +a sense of each of the effects: + +* Bloom : [GPU Gems](http://http.developer.nvidia.com/GPUGems/gpugems_ch21.html) +* Screen Space Ambient 
Occlusion : [Floored + Article](http://floored.com/blog/2013/ssao-screen-space-ambient-occlusion.html) + +------------------------------------------------------------------------------- +README +------------------------------------------------------------------------------- +All students must replace or augment the contents of this Readme.md in a clear +manner with the following: + +* A brief description of the project and the specific features you implemented. +* At least one screenshot of your project running. +* A 30-second or longer video of your project running. To create the video you + can use [Open Broadcaster Software](http://obsproject.com). +* A performance evaluation (described in detail below). + +------------------------------------------------------------------------------- +PERFORMANCE EVALUATION +------------------------------------------------------------------------------- +The performance evaluation is where you will investigate how to make your +program more efficient using the skills you've learned in class. You must have +performed at least one experiment on your code to investigate the positive or +negative effects on performance. + +We encourage you to get creative with your tweaks. Consider places in your code +that could be bottlenecks and try to improve them. + +Each student should provide no more than a one-page summary of their +optimizations along with tables and/or graphs to visually explain any +performance differences. + +------------------------------------------------------------------------------- +THIRD PARTY CODE POLICY +------------------------------------------------------------------------------- +* Use of any third-party code must be approved by asking on the Google groups. + If it is approved, all students are welcome to use it. Generally, we approve + use of third-party code that is not a core part of the project.
For example, + for the ray tracer, we would approve using a third-party library for loading + models, but would not approve copying and pasting a CUDA function for doing + refraction. +* Third-party code must be credited in README.md. +* Using third-party code without its approval, including using another + student's code, is an academic integrity violation, and will result in you + receiving an F for the semester. + +------------------------------------------------------------------------------- +SELF-GRADING +------------------------------------------------------------------------------- +* On the submission date, email your grade, on a scale of 0 to 100, to Harmony, + harmoli+cis565@seas.upenn.edu, with a one paragraph explanation. Be concise and + realistic. Recall that we reserve 30 points as a sanity check to adjust your + grade. Your actual grade will be (0.7 * your grade) + (0.3 * our grade). We + hope to only use this in extreme cases when your grade does not realistically + reflect your work - it is either too high or too low. In most cases, we plan + to give you the exact grade you suggest. +* Projects are not weighted evenly, e.g., Project 0 doesn't count as much as + the path tracer. We will determine the weighting at the end of the semester + based on the size of each project. + + +--- +SUBMISSION +--- +As with the previous projects, you should fork this project and work inside of +your fork. Upon completion, commit your finished project back to your fork, and +make a pull request to the master repository. You should include a README.md +file in the root directory detailing the following + +* A brief description of the project and specific features you implemented +* At least one screenshot of your project running. +* A link to a video of your project running. +* Instructions for building and running your project if they differ from the + base code. +* A performance writeup as detailed above. +* A list of all third-party code used. 
+* This Readme file edited as described above in the README section. + +--- +ACKNOWLEDGEMENTS +--- + +Many thanks to Cheng-Tso Lin, whose framework for CIS700 we used for this +assignment. + +This project makes use of [three.js](http://www.threejs.org). diff --git a/README.md b/README.md index da4c7e1..037422d 100644 --- a/README.md +++ b/README.md @@ -1,224 +1,56 @@ ------------------------------------------------------------------------------- -CIS565: Project 6 -- Deferred Shader -------------------------------------------------------------------------------- -Fall 2014 -------------------------------------------------------------------------------- -Due Wed, 11/12/2014 at Noon -------------------------------------------------------------------------------- -------------------------------------------------------------------------------- -NOTE: -------------------------------------------------------------------------------- -This project requires any graphics card with support for a modern OpenGL -pipeline. Any AMD, NVIDIA, or Intel card from the past few years should work -fine, and every machine in the SIG Lab and Moore 100 is capable of running -this project. - -This project also requires a WebGL capable browser. The project is known to -have issues with Chrome on windows, but Firefox seems to run it fine. - -------------------------------------------------------------------------------- -INTRODUCTION: -------------------------------------------------------------------------------- - -In this project, you will get introduced to the basics of deferred shading. You will write GLSL and OpenGL code to perform various tasks in a deferred lighting pipeline such as creating and writing to a G-Buffer. 
- -------------------------------------------------------------------------------- -CONTENTS: -------------------------------------------------------------------------------- -The Project5 root directory contains the following subdirectories: - -* js/ contains the javascript files, including external libraries, necessary. -* assets/ contains the textures that will be used in the second half of the - assignment. -* resources/ contains the screenshots found in this readme file. - - This Readme file edited as described above in the README section. - -------------------------------------------------------------------------------- -OVERVIEW: -------------------------------------------------------------------------------- -The deferred shader you will write will have the following stages: - -Stage 1 renders the scene geometry to the G-Buffer -* pass.vert -* pass.frag - -Stage 2 renders the lighting passes and accumulates to the P-Buffer -* quad.vert -* diffuse.frag -* diagnostic.frag - -Stage 3 renders the post processing -* post.vert -* post.frag - -The keyboard controls are as follows: -WASDRF - Movement (along w the arrow keys) -* W - Zoom in -* S - Zoom out -* A - Left -* D - Right -* R - Up -* F - Down -* ^ - Up -* v - Down -* < - Left -* > - Right -* 1 - World Space Position -* 2 - Normals -* 3 - Color -* 4 - Depth -* 0 - Full deferred pipeline - -There are also mouse controls for camera rotation. 
- -------------------------------------------------------------------------------- -REQUIREMENTS: -------------------------------------------------------------------------------- - -In this project, you are given code for: -* Loading .obj file -* Deferred shading pipeline -* GBuffer pass - -You are required to implement: -* Either of the following effects - * Bloom - * "Toon" Shading (with basic silhouetting) -* Screen Space Ambient Occlusion -* Diffuse and Blinn-Phong shading - -**NOTE**: Implementing separable convolution will require another link in your pipeline and will count as an extra feature if you do performance analysis with a standard one-pass 2D convolution. The overhead of rendering and reading from a texture _may_ offset the extra computations for smaller 2D kernels. - -You must implement two of the following extras: -* The effect you did not choose above -* Compare performance to a normal forward renderer with - * No optimizations - * Coarse sort geometry front-to-back for early-z - * Z-prepass for early-z -* Optimize g-buffer format, e.g., pack things together, quantize, reconstruct z from normal x and y (because it is normalized), etc. - * Must be accompanied with a performance analysis to count -* Additional lighting and pre/post processing effects! (email first please, if they are good you may add multiple). 
- -------------------------------------------------------------------------------- -RUNNING THE CODE: -------------------------------------------------------------------------------- - -Since the code attempts to access files that are local to your computer, you -will either need to: - -* Run your browser under modified security settings, or -* Create a simple local server that serves the files - - -FIREFOX: change ``strict_origin_policy`` to false in about:config - -CHROME: run with the following argument : `--allow-file-access-from-files` - -(You can do this on OSX by running Chrome from /Applications/Google -Chrome/Contents/MacOS with `open -a "Google Chrome" --args ---allow-file-access-from-files`) - -* To check if you have set the flag properly, you can open chrome://version and - check under the flags - -RUNNING A SIMPLE SERVER: - -If you have Python installed, you can simply run a simple HTTP server off your -machine from the root directory of this repository with the following command: - -`python -m SimpleHTTPServer` - -------------------------------------------------------------------------------- -RESOURCES: -------------------------------------------------------------------------------- - -The following are articles and resources that have been chosen to help give you -a sense of each of the effects: - -* Bloom : [GPU Gems](http://http.developer.nvidia.com/GPUGems/gpugems_ch21.html) -* Screen Space Ambient Occlusion : [Floored - Article](http://floored.com/blog/2013/ssao-screen-space-ambient-occlusion.html) - -------------------------------------------------------------------------------- -README -------------------------------------------------------------------------------- -All students must replace or augment the contents of this Readme.md in a clear -manner with the following: - -* A brief description of the project and the specific features you implemented. -* At least one screenshot of your project running. 
-* A 30 second or longer video of your project running. To create the video you - can use [Open Broadcaster Software](http://obsproject.com) -* A performance evaluation (described in detail below). - -------------------------------------------------------------------------------- -PERFORMANCE EVALUATION -------------------------------------------------------------------------------- -The performance evaluation is where you will investigate how to make your -program more efficient using the skills you've learned in class. You must have -performed at least one experiment on your code to investigate the positive or -negative effects on performance. - -We encourage you to get creative with your tweaks. Consider places in your code -that could be considered bottlenecks and try to improve them. - -Each student should provide no more than a one page summary of their -optimizations along with tables and or graphs to visually explain any -performance differences. - -------------------------------------------------------------------------------- -THIRD PARTY CODE POLICY -------------------------------------------------------------------------------- -* Use of any third-party code must be approved by asking on the Google groups. - If it is approved, all students are welcome to use it. Generally, we approve - use of third-party code that is not a core part of the project. For example, - for the ray tracer, we would approve using a third-party library for loading - models, but would not approve copying and pasting a CUDA function for doing - refraction. -* Third-party code must be credited in README.md. -* Using third-party code without its approval, including using another - student's code, is an academic integrity violation, and will result in you - receiving an F for the semester. 
- -------------------------------------------------------------------------------- -SELF-GRADING -------------------------------------------------------------------------------- -* On the submission date, email your grade, on a scale of 0 to 100, to Harmony, - harmoli+cis565@seas.upenn.edu, with a one paragraph explanation. Be concise and - realistic. Recall that we reserve 30 points as a sanity check to adjust your - grade. Your actual grade will be (0.7 * your grade) + (0.3 * our grade). We - hope to only use this in extreme cases when your grade does not realistically - reflect your work - it is either too high or too low. In most cases, we plan - to give you the exact grade you suggest. -* Projects are not weighted evenly, e.g., Project 0 doesn't count as much as - the path tracer. We will determine the weighting at the end of the semester - based on the size of each project. - - ---- -SUBMISSION ---- -As with the previous projects, you should fork this project and work inside of -your fork. Upon completion, commit your finished project back to your fork, and -make a pull request to the master repository. You should include a README.md -file in the root directory detailing the following - -* A brief description of the project and specific features you implemented -* At least one screenshot of your project running. -* A link to a video of your project running. -* Instructions for building and running your project if they differ from the - base code. -* A performance writeup as detailed above. -* A list of all third-party code used. -* This Readme file edited as described above in the README section. - ---- -ACKNOWLEDGEMENTS ---- - -Many thanks to Cheng-Tso Lin, whose framework for CIS700 we used for this -assignment. - -This project makes use of [three.js](http://www.threejs.org). +#Deferred Shader +This project implements a deferred shader using WebGL.
+The features I implemented include: +* Diffuse and Blinn-Phong shading +* Bloom with separable convolution +* Toon shading +* Screen-space ambient occlusion + +Use the number keys to switch between the shading effects: +* '0': Diffuse and Blinn-Phong shading +* '9': Toon shading +* '8': Ambient occlusion +* '7': Bloom with separable convolution +* '6': Bloom without separable convolution +* '5': Silhouette image + +#Diffuse and Blinn-Phong shading +![Title Image](result/diffuse_bling.jpg) + +#Toon shading +For toon shading, I draw boundaries (silhouettes) where the normal changes sharply between neighboring pixels. +In addition, instead of shading with continuous RGB values, I quantize each channel into 5 bands to produce the toon look. +![toon shading result](result/toon.jpg) + +#Bloom +Bloom needs an alpha value to mark the regions that should glow, but our model lacks alpha information. +Therefore, I reuse the silhouettes created for toon shading as the alpha mask. The first image below shows the silhouette used to add glow to the diffuse/Blinn-Phong shading. +The difference between the white-glow and green-glow images is that the white-glow one uses separable convolution for the bloom. +![silhouette used as the bloom mask](result/silhouette.jpg) +![bloom result with separable convolution](result/bloom.jpg) +![bloom result without separable convolution](result/bloom2.jpg) + +#Screen space ambient occlusion +I followed the tutorial at http://john-chapman-graphics.blogspot.co.uk/2013/01/ssao-tutorial.html to implement my SSAO. +The difficult part is finding an appropriate radius for casting the samples. +![ssao result](result/ssao.jpg) + +#Depth value +![debug depth](result/depth.jpg) + +#Performance Analysis + +While doing the experiment, I found something interesting.
Initially, I wrote the bloom shaders with and without separable convolution in the same shader file and used the number keys to switch +between the two code paths via a uniform integer, and the FPS of the two methods was the same. This was strange, because the code path without separable convolution +should have a much heavier computational load than the other one. When I asked Cheng-Tso Lin, he told me this is possible because of a WebGL shader-compilation issue. In order to +finish the experiment, I eventually wrote the two bloom variants in two different shader files. +The charts below clearly show that using separable convolution for bloom gives a large performance benefit, and the benefit grows as the number of samples increases. +![performance result](result/performance.jpg) +![performance result](result/performance2.jpg) +#Video +http://youtu.be/U8ZvzvczlKc + +#Reference +Bloom: http://http.developer.nvidia.com/GPUGems/gpugems_ch21.html +SSAO: http://john-chapman-graphics.blogspot.co.uk/2013/01/ssao-tutorial.html \ No newline at end of file diff --git a/assets/deferred/colorPass.frag b/assets/deferred/colorPass.frag deleted file mode 100644 index c151235..0000000 --- a/assets/deferred/colorPass.frag +++ /dev/null @@ -1,7 +0,0 @@ -precision highp float; - -uniform sampler2D u_sampler; - -void main(void){ - gl_FragColor = vec4(1.0, 0.0, 0.0, 1.0); -} diff --git a/assets/deferred/colorPass.vert b/assets/deferred/colorPass.vert deleted file mode 100644 index 9c3901c..0000000 --- a/assets/deferred/colorPass.vert +++ /dev/null @@ -1,9 +0,0 @@ -precision highp float; - -attribute vec3 a_pos; - -uniform mat4 u_mvp; - -void main(void){ - gl_Position = u_mvp * vec4( a_pos, 1.0 ); -} diff --git a/assets/deferred/diagnostic.frag b/assets/deferred/diagnostic.frag deleted file mode 100644 index d47a5e9..0000000 --- a/assets/deferred/diagnostic.frag +++ /dev/null
@@ -1,40 +0,0 @@ -precision highp float; - -#define DISPLAY_POS 1 -#define DISPLAY_NORMAL 2 -#define DISPLAY_COLOR 3 -#define DISPLAY_DEPTH 4 - -uniform sampler2D u_positionTex; -uniform sampler2D u_normalTex; -uniform sampler2D u_colorTex; -uniform sampler2D u_depthTex; - -uniform float u_zFar; -uniform float u_zNear; -uniform int u_displayType; - -varying vec2 v_texcoord; - -float linearizeDepth( float exp_depth, float near, float far ){ - return ( 2.0 * near ) / ( far + near - exp_depth * ( far - near ) ); -} - -void main() -{ - vec3 normal = texture2D( u_normalTex, v_texcoord ).xyz; - vec3 position = texture2D( u_positionTex, v_texcoord ).xyz; - vec4 color = texture2D( u_colorTex, v_texcoord ); - float depth = texture2D( u_depthTex, v_texcoord ).x; - - depth = linearizeDepth( depth, u_zNear, u_zFar ); - - if( u_displayType == DISPLAY_DEPTH ) - gl_FragColor = vec4( depth, depth, depth, 1 ); - else if( u_displayType == DISPLAY_COLOR ) - gl_FragColor = color; - else if( u_displayType == DISPLAY_NORMAL ) - gl_FragColor = vec4( normal, 1 ); - else - gl_FragColor = vec4( position, 1 ); -} diff --git a/assets/deferred/diffuse.frag b/assets/deferred/diffuse.frag deleted file mode 100644 index ef0c5fc..0000000 --- a/assets/deferred/diffuse.frag +++ /dev/null @@ -1,23 +0,0 @@ -precision highp float; - -uniform sampler2D u_positionTex; -uniform sampler2D u_normalTex; -uniform sampler2D u_colorTex; -uniform sampler2D u_depthTex; - -uniform float u_zFar; -uniform float u_zNear; -uniform int u_displayType; - -varying vec2 v_texcoord; - -float linearizeDepth( float exp_depth, float near, float far ){ - return ( 2.0 * near ) / ( far + near - exp_depth * ( far - near ) ); -} - -void main() -{ - // Write a diffuse shader and a Blinn-Phong shader - // NOTE : You may need to add your own normals to fulfill the second's requirements - gl_FragColor = vec4(texture2D(u_colorTex, v_texcoord).rgb, 1.0); -} diff --git a/assets/deferred/normPass.frag b/assets/deferred/normPass.frag 
deleted file mode 100644 index b41d6ed..0000000 --- a/assets/deferred/normPass.frag +++ /dev/null @@ -1,7 +0,0 @@ -precision highp float; - -varying vec3 v_normal; - -void main(void){ - gl_FragColor = vec4(v_normal, 1.0); -} diff --git a/assets/deferred/normPass.vert b/assets/deferred/normPass.vert deleted file mode 100644 index 9a0b4b4..0000000 --- a/assets/deferred/normPass.vert +++ /dev/null @@ -1,15 +0,0 @@ -precision highp float; - -attribute vec3 a_pos; -attribute vec3 a_normal; - -uniform mat4 u_mvp; -uniform mat4 u_normalMat; - -varying vec3 v_normal; - -void main(void){ - gl_Position = u_mvp * vec4( a_pos, 1.0 ); - - v_normal = vec3( u_normalMat * vec4(a_normal, 0.0) ); -} diff --git a/assets/deferred/pass.frag b/assets/deferred/pass.frag deleted file mode 100644 index 2416199..0000000 --- a/assets/deferred/pass.frag +++ /dev/null @@ -1,16 +0,0 @@ -#extension GL_EXT_draw_buffers: require -precision highp float; - -uniform sampler2D u_sampler; - -varying vec4 v_pos; -varying vec3 v_normal; -varying vec2 v_texcoord; -varying float v_depth; - -void main(void){ - gl_FragData[0] = v_pos; - gl_FragData[1] = vec4( normalize(v_normal), 1.0 ); - gl_FragData[2] = vec4( 1.0, 0.0, 0.0, 1.0 ); - gl_FragData[3] = vec4( v_depth, 0, 0, 0 ); -} diff --git a/assets/deferred/pass.vert b/assets/deferred/pass.vert deleted file mode 100644 index 861cb1a..0000000 --- a/assets/deferred/pass.vert +++ /dev/null @@ -1,26 +0,0 @@ -precision highp float; - -attribute vec3 a_pos; -attribute vec3 a_normal; -attribute vec2 a_texcoord; - -uniform mat4 u_projection; -uniform mat4 u_modelview; -uniform mat4 u_mvp; -uniform mat4 u_normalMat; - -varying vec4 v_pos; -varying vec3 v_normal; -varying vec2 v_texcoord; -varying float v_depth; - -void main(void){ - gl_Position = u_mvp * vec4( a_pos, 1.0 ); - - v_pos = u_modelview * vec4( a_pos, 1.0 ); - v_normal = vec3( u_normalMat * vec4(a_normal,0.0) ); - - v_texcoord = a_texcoord; - - v_depth = ( gl_Position.z / gl_Position.w + 1.0 ) / 2.0; -} 
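A side note on the `linearizeDepth` helper that recurs in these fragment shaders (diagnostic.frag, diffuse.frag, post.frag): it remaps a nonlinear depth-buffer sample back into a roughly linear [0, 1] range so the depth diagnostic view is legible. A direct JavaScript transcription of the same formula (the near/far values below are illustrative, not the scene's actual clip planes):

```javascript
// Same formula as the GLSL helper: (2 * near) / (far + near - d * (far - near)).
function linearizeDepth(expDepth, near, far) {
  return (2.0 * near) / (far + near - expDepth * (far - near));
}

// With near = 0.1 and far = 100, a sample at the far plane maps back to 1.0,
// while a mid-range stored depth maps to a small linear value.
const atFar = linearizeDepth(1.0, 0.1, 100.0);
const mid = linearizeDepth(0.5, 0.1, 100.0);
```

This is also why the raw (un-linearized) depth view tends to look almost uniformly white: the depth buffer spends most of its [0, 1] range on geometry near the near plane, so everything farther away stores values close to 1.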
diff --git a/assets/deferred/posPass.frag b/assets/deferred/posPass.frag deleted file mode 100644 index 645a521..0000000 --- a/assets/deferred/posPass.frag +++ /dev/null @@ -1,8 +0,0 @@ -precision highp float; - -varying vec4 v_pos; -varying float v_depth; - -void main(void){ - gl_FragColor = v_pos; -} diff --git a/assets/deferred/posPass.vert b/assets/deferred/posPass.vert deleted file mode 100644 index ece8cc4..0000000 --- a/assets/deferred/posPass.vert +++ /dev/null @@ -1,15 +0,0 @@ -precision highp float; - -attribute vec3 a_pos; - -uniform mat4 u_modelview; -uniform mat4 u_mvp; - -varying vec4 v_pos; -varying float v_depth; - -void main(void){ - gl_Position = u_mvp * vec4( a_pos, 1.0 ); - v_pos = u_modelview * vec4( a_pos, 1.0 ); - v_depth = ( gl_Position.z / gl_Position.w + 1.0 ) / 2.0; -} diff --git a/assets/deferred/post.frag b/assets/deferred/post.frag deleted file mode 100644 index 52edda2..0000000 --- a/assets/deferred/post.frag +++ /dev/null @@ -1,17 +0,0 @@ -precision highp float; - -uniform sampler2D u_shadeTex; - -varying vec2 v_texcoord; - -float linearizeDepth( float exp_depth, float near, float far ){ - return ( 2.0 * near ) / ( far + near - exp_depth * ( far - near ) ); -} - -void main() -{ - // Currently acts as a pass filter that immmediately renders the shaded texture - // Fill in post-processing as necessary HERE - // NOTE : You may choose to use a key-controlled switch system to display one feature at a time - gl_FragColor = vec4(texture2D( u_shadeTex, v_texcoord).rgb, 1.0); -} diff --git a/assets/deferred/quad.vert b/assets/deferred/quad.vert deleted file mode 100644 index 8e4662e..0000000 --- a/assets/deferred/quad.vert +++ /dev/null @@ -1,11 +0,0 @@ -precision highp float; - -attribute vec3 a_pos; -attribute vec2 a_texcoord; - -varying vec2 v_texcoord; - -void main(void){ - v_texcoord = a_texcoord; - gl_Position = vec4( a_pos, 1.0 ); -} diff --git a/assets/shader/deferred/bloomConvolutionStep1.frag 
b/assets/shader/deferred/bloomConvolutionStep1.frag
new file mode 100644
index 0000000..b121eb1
--- /dev/null
+++ b/assets/shader/deferred/bloomConvolutionStep1.frag
@@ -0,0 +1,33 @@
+precision highp float;
+
+uniform sampler2D u_postTex;
+uniform sampler2D u_shadeTex;
+varying vec2 v_texcoord;
+uniform int u_displayType;
+
+void main()
+{
+    vec3 post = texture2D( u_postTex, v_texcoord).rgb;
+
+    float value = 0.0;
+
+    /*for(int i = 0; i < 31; ++i){
+        vec3 bloomExam = texture2D( u_postTex, v_texcoord + vec2(float(i - 15)/960.0, 0.0)).rgb;
+        if(bloomExam.x != 0.0){
+            value += bloomExam.x * (16.0 - abs(15.0 - float(i)));
+        }
+    }
+    gl_FragColor = vec4(vec3(float(value) / 256.0, float(value) / 256.0, float(value) / 256.0), 1.0);*/
+
+    for(int i = 0; i < 11; ++i){
+        vec3 bloomExam = texture2D( u_postTex, v_texcoord + vec2(float(i - 5)/960.0, 0.0)).rgb;
+        if(bloomExam.x != 0.0){
+            value += bloomExam.x * (6.0 - abs(5.0 - float(i)));
+        }
+    }
+    gl_FragColor = vec4(vec3(float(value) / 36.0, float(value) / 36.0, float(value) / 36.0), 1.0);
+}
diff --git a/assets/shader/deferred/bloomConvolutionStep2.frag b/assets/shader/deferred/bloomConvolutionStep2.frag
new file mode 100644
index 0000000..1cd46bf
--- /dev/null
+++ b/assets/shader/deferred/bloomConvolutionStep2.frag
@@ -0,0 +1,32 @@
+precision highp float;
+
+uniform sampler2D u_colorTex;
+uniform sampler2D u_shadeTex;
+varying vec2 v_texcoord;
+uniform int u_displayType;
+
+void main()
+{
+    vec3 color = texture2D( u_colorTex, v_texcoord).rgb;
+    vec3 shade = texture2D( u_shadeTex, v_texcoord).rgb;
+
+    float value = 0.0;
+
+    /*for(int i = 0; i < 31; ++i){
+        vec3 bloomExam = texture2D( u_colorTex, v_texcoord + vec2(0.0, float(i - 15)/540.0)).rgb;
+        if(bloomExam.x != 0.0){
+            value += bloomExam.x * (16.0 - abs(15.0 - float(i)));
+        }
+    }
+    gl_FragColor = vec4(shade + vec3(float(value) / 256.0, float(value) / 256.0, float(value) / 256.0), 1.0);*/
+
+    for(int i = 0; i < 11; ++i){
+        vec3 bloomExam = texture2D( u_colorTex, v_texcoord + vec2(0.0, float(i - 5)/540.0)).rgb;
+        if(bloomExam.x != 0.0){
+            value += bloomExam.x * (6.0 - abs(5.0 - float(i)));
+        }
+    }
+    gl_FragColor = vec4(shade + vec3(float(value) / 36.0, float(value) / 36.0, float(value) / 36.0), 1.0);
+}
diff --git a/assets/shader/deferred/bloomOnePass.frag b/assets/shader/deferred/bloomOnePass.frag
new file mode 100644
index 0000000..b415a27
--- /dev/null
+++ b/assets/shader/deferred/bloomOnePass.frag
@@ -0,0 +1,33 @@
+precision highp float;
+
+uniform sampler2D u_postTex;
+uniform sampler2D u_shadeTex;
+varying vec2 v_texcoord;
+uniform int u_displayType;
+
+void main()
+{
+    float value = 0.0;
+
+    /*for(int i = 0; i < 31; ++i){
+        for(int j = 0; j < 31; ++j){
+            vec3 bloomExam = texture2D( u_postTex, v_texcoord + vec2(float(i - 15)/960.0, float(j - 15)/540.0)).rgb;
+            if(bloomExam.x != 0.0){
+                value += bloomExam.x * (16.0 - abs(15.0 - float(i))) * (16.0 - abs(15.0 - float(j)));
+            }
+        }
+    }
+    vec3 shade = texture2D( u_shadeTex, v_texcoord).rgb;
+    gl_FragColor = vec4(shade + vec3(0.0, float(value) / 65536.0, 0.0), 1.0);*/
+
+    for(int i = 0; i < 11; ++i){
+        for(int j = 0; j < 11; ++j){
+            vec3 bloomExam = texture2D( u_postTex, v_texcoord + vec2(float(i - 5)/960.0, float(j - 5)/540.0)).rgb;
+            if(bloomExam.x != 0.0){
+                value += bloomExam.x * (6.0 - abs(5.0 - float(i))) * (6.0 - abs(5.0 - float(j)));
+            }
+        }
+    }
+    vec3 shade = texture2D( u_shadeTex, v_texcoord).rgb;
+    gl_FragColor = vec4(shade + vec3(0.0, float(value) / 1296.0, 0.0), 1.0);
+}
diff --git a/assets/shader/deferred/colorPass.frag b/assets/shader/deferred/colorPass.frag
index c151235..22dcb32 100644
--- a/assets/shader/deferred/colorPass.frag
+++ b/assets/shader/deferred/colorPass.frag
@@ -3,5 +3,5 @@ precision highp float;
 uniform sampler2D u_sampler;
 
 void main(void){
-    gl_FragColor = vec4(1.0, 0.0, 0.0, 1.0);
+    gl_FragColor = vec4(1.0, 1.0, 0.0, 1.0);
 }
diff --git a/assets/shader/deferred/diagnostic.frag b/assets/shader/deferred/diagnostic.frag
index d47a5e9..bd3d399 100644
--- a/assets/shader/deferred/diagnostic.frag
+++ b/assets/shader/deferred/diagnostic.frag
@@ -26,15 +26,15 @@ void main()
     vec3 position = texture2D( u_positionTex, v_texcoord ).xyz;
     vec4 color = texture2D( u_colorTex, v_texcoord );
     float depth = texture2D( u_depthTex, v_texcoord ).x;
-    depth = linearizeDepth( depth, u_zNear, u_zFar );
-    if( u_displayType == DISPLAY_DEPTH )
+    if( u_displayType == DISPLAY_DEPTH )
         gl_FragColor = vec4( depth, depth, depth, 1 );
     else if( u_displayType == DISPLAY_COLOR )
         gl_FragColor = color;
     else if( u_displayType == DISPLAY_NORMAL )
         gl_FragColor = vec4( normal, 1 );
     else
-        gl_FragColor = vec4( position, 1 );
+        //gl_FragColor = vec4(abs(position.x), abs(position.y), position.z, 1 );
+        gl_FragColor = vec4(position.x, position.y, position.z, 1 );
 }
diff --git a/assets/shader/deferred/diffuse.frag b/assets/shader/deferred/diffuse.frag
index ef0c5fc..7c2b4ac 100644
--- a/assets/shader/deferred/diffuse.frag
+++ b/assets/shader/deferred/diffuse.frag
@@ -4,6 +4,7 @@ uniform sampler2D u_positionTex;
 uniform sampler2D u_normalTex;
 uniform sampler2D u_colorTex;
 uniform sampler2D u_depthTex;
+uniform mat4 u_modelview;
 
 uniform float u_zFar;
 uniform float u_zNear;
@@ -17,7 +18,36 @@ float linearizeDepth( float exp_depth, float near, float far ){
 
 void main()
 {
-    // Write a diffuse shader and a Blinn-Phong shader
-    // NOTE : You may need to add your own normals to fulfill the second's requirements
-    gl_FragColor = vec4(texture2D(u_colorTex, v_texcoord).rgb, 1.0);
+    // Write a diffuse shader and a Blinn-Phong shader
+    // NOTE : You may need to add your own normals to fulfill the second's requirements
+
+    vec3 color = texture2D( u_colorTex, v_texcoord ).xyz;
+    vec3 normal = texture2D( u_normalTex, v_texcoord ).xyz;
+    vec3 position = texture2D(u_positionTex, v_texcoord).rgb;
+    float depth = texture2D(u_depthTex, v_texcoord).r;
+    depth = linearizeDepth( depth, u_zNear, u_zFar );
+
+    vec3 lightColor = vec3(1.0, 1.0, 1.0);
+    vec3 lightPos = vec3(0.0, 0.0, 10.0);
+    vec3 lightDir = normalize((u_modelview * vec4(lightPos, 1.0)).xyz - position);
+
+    vec3 toReflectedLight = reflect(-lightDir, normal);
+    vec3 eyeToPosition = normalize(position);
+
+    float diffuse = clamp(dot(lightDir, normal), 0.0, 1.0);
+    float specular = max(dot(toReflectedLight, -eyeToPosition), 0.0);
+    specular = pow(specular, 5.0);
+
+    vec3 finalColor = 0.5 * diffuse * texture2D(u_colorTex, v_texcoord).rgb + 0.5 * specular * lightColor;
+
+    //////////////////////////////////////////////////////////////////////
+    if(color.x == 1.0)
+        gl_FragColor = vec4(finalColor, 1.0);
+    else
+        gl_FragColor = vec4(1.0, 1.0, 1.0, 1.0);
 }
diff --git a/assets/shader/deferred/post.frag b/assets/shader/deferred/post.frag
index 52edda2..457a3fe 100644
--- a/assets/shader/deferred/post.frag
+++ b/assets/shader/deferred/post.frag
@@ -1,6 +1,16 @@
 precision highp float;
 
+uniform sampler2D u_positionTex;
+uniform sampler2D u_normalTex;
+uniform sampler2D u_colorTex;
+uniform sampler2D u_depthTex;
+uniform mat4 u_modelview;
 uniform sampler2D u_shadeTex;
+uniform int u_displayType;
+
+uniform float u_zFar;
+uniform float u_zNear;
+uniform float u_time;
 
 varying vec2 v_texcoord;
 
@@ -8,10 +18,145 @@ float linearizeDepth( float exp_depth, float near, float far ){
     return ( 2.0 * near ) / ( far + near - exp_depth * ( far - near ) );
 }
 
+// Generates a pseudo-random value in [0, 1)
+float hash( float n ){ // Borrowed from voltage
+    return fract(sin(n)*43758.5453);
+}
+
+bool isSilhouette(vec3 normal, float threshold){
+    vec2 v_TexcoordOffsetRight = v_texcoord + vec2(2.0/960.0, 0.0);
+    vec3 normalOffsetRight = texture2D( u_normalTex, v_TexcoordOffsetRight).rgb;
+    float angleWithRight = dot(normal, normalOffsetRight);
+
+    vec2 v_TexcoordOffsetUp = v_texcoord + vec2(0.0, 2.0/540.0);
+    vec3 normalOffsetUp = texture2D( u_normalTex, v_TexcoordOffsetUp).rgb;
+    float angleWithUp = dot(normal, normalOffsetUp);
+
+    vec2 v_TexcoordOffsetLeft = v_texcoord - vec2(2.0/960.0, 0.0);
+    vec3 normalOffsetLeft = texture2D( u_normalTex, v_TexcoordOffsetLeft).rgb;
+    float angleWithLeft = dot(normal, normalOffsetLeft);
+
+    vec2 v_TexcoordOffsetDown = v_texcoord - vec2(0.0, 2.0/540.0);
+    vec3 normalOffsetDown = texture2D( u_normalTex, v_TexcoordOffsetDown).rgb;
+    float angleWithDown = dot(normal, normalOffsetDown);
+
+    if(angleWithRight < threshold || angleWithUp < threshold || angleWithLeft < threshold || angleWithDown < threshold)
+        return true;
+    else
+        return false;
+}
+
 void main()
 {
-    // Currently acts as a pass filter that immmediately renders the shaded texture
-    // Fill in post-processing as necessary HERE
-    // NOTE : You may choose to use a key-controlled switch system to display one feature at a time
-    gl_FragColor = vec4(texture2D( u_shadeTex, v_texcoord).rgb, 1.0);
+    vec3 shade = texture2D( u_shadeTex, v_texcoord).rgb;
+    vec3 normal = texture2D( u_normalTex, v_texcoord).rgb;
+    vec3 color = texture2D( u_colorTex, v_texcoord).rgb;
+    vec3 position = texture2D( u_positionTex, v_texcoord).rgb;
+    float depth = texture2D(u_depthTex, v_texcoord).r;
+    depth = linearizeDepth( depth, u_zNear, u_zFar );
+
+    float threshold = 0.5;
+
+    if (u_displayType == 0){
+        gl_FragColor = vec4(shade, 1.0);
+    }
+    else if(u_displayType == 9){ // Toon shading
+        if(color.x == 1.0){
+            if(isSilhouette(normal, threshold))
+                gl_FragColor = vec4(0.0, 0.0, 0.0, 1.0);
+            else{
+                float seg = 0.2;
+                float toonShadingR = seg * float(int(shade.r / seg));
+                float toonShadingG = seg * float(int(shade.g / seg));
+                float toonShadingB = seg * float(int(shade.b / seg));
+                vec3 toonShading = vec3(toonShadingR, toonShadingG, toonShadingB);
+                gl_FragColor = vec4(toonShading, 1.0);
+            }
+        }
+        else{
+            gl_FragColor = vec4(shade, 1.0);
+        }
+    }
+    else if(u_displayType == 8){ // Screen space ambient occlusion
+        if(color.x == 1.0){
+            float radius = 0.01;
+            float kernelSize = 100.0;
+            float occlusion = 0.0;
+
+            vec3 origin = vec3(position.x, position.y, depth);
+
+            for(int i = 0; i < 100; ++i){
+                vec3 randVector = vec3(hash(position.x * 0.01 + float(i)*0.1357),
+                                       hash(position.y * 0.01 + float(i)*0.2468),
+                                       (hash(position.z * 0.01 + float(i)*0.1479)+1.0) / 2.0);
+                //vec3 randVector = vec3(0.0, 0.0, 1.0);
+
+                randVector = normalize(randVector);
+
+                float scale = float(i) / kernelSize;
+                scale = mix(0.1, 1.0, scale * scale);
+                randVector = randVector * scale;
+
+                vec3 directionNotNormal;
+                if (abs(normal.x) < 0.57735) {
+                    directionNotNormal = vec3(1, 0, 0);
+                } else if (abs(normal.y) < 0.57735) {
+                    directionNotNormal = vec3(0, 1, 0);
+                } else {
+                    directionNotNormal = vec3(0, 0, 1);
+                }
+
+                /*vec3 perpendicularDirection1 = normalize(cross(normal, directionNotNormal));
+                vec3 perpendicularDirection2 = normalize(cross(normal, perpendicularDirection1));
+                vec3 temp = ( randVector.z * normal ) + ( randVector.x * perpendicularDirection1 ) + ( randVector.y * perpendicularDirection2 );
+                vec3 sampleVector = normalize(temp);*/
+
+                vec3 rvec = normalize(vec3(0.0, //hash(position.x * 0.01 * u_time + float(i)*0.1234)
+                                           hash(position.y * 0.01 + float(i)*0.5678),
+                                           hash(position.z * 0.01 + float(i)*0.1357)));
+                //vec3 rvec = directionNotNormal;
+
+                vec3 tangent = normalize(rvec - normal * dot(rvec, normal));
+                vec3 bitangent = cross(normal, tangent);
+                mat3 tbn = mat3(tangent, bitangent, normal);
+
+                vec3 sampleVector = tbn * randVector;
+
+                // "sample" is a reserved word in GLSL ES, so the sample point is named samplePos
+                vec3 samplePos = origin + vec3((sampleVector * radius).x, (sampleVector * radius).y, -(sampleVector * radius).z / 2.0);
+
+                float sampleDepth = texture2D(u_depthTex, v_texcoord + (sampleVector * radius).xy ).r;
+                sampleDepth = linearizeDepth( sampleDepth, u_zNear, u_zFar );
+
+                if(sampleDepth <= samplePos.z)
+                    occlusion += 1.0;
+            }
+
+            gl_FragColor = vec4(1.0 - occlusion/kernelSize, 1.0 - occlusion/kernelSize, 1.0 - occlusion/kernelSize, 1.0);
+        }
+    }
+    else if(u_displayType == 7 || u_displayType == 6 || u_displayType == 5){ // Bloom: silhouette mask for the blur passes
+        if(color.x == 1.0){
+            if(isSilhouette(normal, threshold))
+                gl_FragColor = vec4(1.0, 1.0, 1.0, 1.0);
+            else
+                gl_FragColor = vec4(0.0, 0.0, 0.0, 1.0);
+        }
+        else
+            gl_FragColor = vec4(0.0, 0.0, 0.0, 1.0);
+    }
 }
diff --git a/index.html b/index.html
index dd0ffef..8b710ca 100644
--- a/index.html
+++ b/index.html
@@ -25,7 +25,7 @@
-
+
diff --git a/js/core/cameraInteractor.js b/js/core/cameraInteractor.js
index e20292d..9217c84 100644
--- a/js/core/cameraInteractor.js
+++ b/js/core/cameraInteractor.js
@@ -67,34 +67,34 @@ CIS565WEBGLCORE.CameraInteractor = function(camera,canvas){
     ctrl = ev.ctrlKey;
 
     if (!ctrl){
-      if (key == 38){
+      if (key == 38){ // Up
        camera.changeElevation(10);
       }
-      else if (key == 40){
+      else if (key == 40){ // Down
        camera.changeElevation(-10);
       }
-      else if (key == 37){
+      else if (key == 37){ // Left
        camera.changeAzimuth(-10);
       }
-      else if (key == 39){
+      else if (key == 39){ // Right
        camera.changeAzimuth(10);
       }
-      else if( key == 87 ){
+      else if( key == 87 ){ // Key W
        camera.moveForward();
       }
-      else if( key == 65){
+      else if( key == 65){ // Key A
        camera.moveLeft();
       }
-      else if( key == 83 ){
+      else if( key == 83 ){ // Key S
        camera.moveBackward();
       }
-      else if( key == 68 ){
+      else if( key == 68 ){ // Key D
        camera.moveRight();
       }
-      else if( key == 82 ){
+      else if( key == 82 ){ // Key R
        camera.moveUp();
       }
-      else if( key == 70 ){
+      else if( key == 70 ){ // Key F
        camera.moveDown();
       }
diff --git a/js/core/fbo-util.js b/js/core/fbo-util.js
index 42abe4c..fa260ca 100644
--- a/js/core/fbo-util.js
+++ b/js/core/fbo-util.js
@@ -10,6 +10,7 @@ var FBO_GBUFFER_NORMAL = 1;
 var FBO_GBUFFER_COLOR = 2;
 var FBO_GBUFFER_DEPTH = 3;
 var FBO_GBUFFER_TEXCOORD = 4;
+var FBO_GBUFFER_POST = 5;
 
 CIS565WEBGLCORE.createFBO = function(){
   "use strict"
@@ -46,7 +47,7 @@ CIS565WEBGLCORE.createFBO = function(){
   gl.texImage2D(gl.TEXTURE_2D, 0, gl.DEPTH_COMPONENT, width, height, 0, gl.DEPTH_COMPONENT, gl.UNSIGNED_SHORT, null);
 
   // Create textures for FBO attachment
-  for( var i = 0; i < 5; ++i ){
+  for( var i = 0; i < 6; ++i ){
     textures[i] = gl.createTexture()
     gl.bindTexture( gl.TEXTURE_2D, textures[i] );
     gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.LINEAR);
@@ -67,6 +68,7 @@ CIS565WEBGLCORE.createFBO = function(){
     drawbuffers[1] = extDrawBuffers.COLOR_ATTACHMENT1_WEBGL;
     drawbuffers[2] = extDrawBuffers.COLOR_ATTACHMENT2_WEBGL;
     drawbuffers[3] = extDrawBuffers.COLOR_ATTACHMENT3_WEBGL;
+    //drawbuffers[4] = extDrawBuffers.COLOR_ATTACHMENT4_WEBGL;
 
     extDrawBuffers.drawBuffersWEBGL( drawbuffers );
 
     //Attach textures to FBO
@@ -75,6 +77,7 @@ CIS565WEBGLCORE.createFBO = function(){
     gl.framebufferTexture2D( gl.FRAMEBUFFER, drawbuffers[1], gl.TEXTURE_2D, textures[1], 0 );
     gl.framebufferTexture2D( gl.FRAMEBUFFER, drawbuffers[2], gl.TEXTURE_2D, textures[2], 0 );
     gl.framebufferTexture2D( gl.FRAMEBUFFER, drawbuffers[3], gl.TEXTURE_2D, textures[3], 0 );
+    //gl.framebufferTexture2D( gl.FRAMEBUFFER, drawbuffers[4], gl.TEXTURE_2D, textures[4], 0 );
 
     var FBOstatus = gl.checkFramebufferStatus(gl.FRAMEBUFFER);
     if( FBOstatus !== gl.FRAMEBUFFER_COMPLETE ){
@@ -88,19 +91,20 @@ CIS565WEBGLCORE.createFBO = function(){
 
     // Attach textures
     gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0, gl.TEXTURE_2D, textures[4], 0);
-
+    //gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0, gl.TEXTURE_2D, textures[4], 0);
     FBOstatus = gl.checkFramebufferStatus(gl.FRAMEBUFFER);
     if(FBOstatus !== gl.FRAMEBUFFER_COMPLETE) {
       console.log("PBuffer FBO incomplete! Initialization failed!");
       return false;
     }
-  } else {
+  }
+  else { //***** Laptop without draw buffers *****
     fbo[FBO_GBUFFER_POSITION] = gl.createFramebuffer();
 
     // Set up GBuffer Position
     gl.bindFramebuffer(gl.FRAMEBUFFER, fbo[FBO_GBUFFER_POSITION]);
     gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.DEPTH_ATTACHMENT, gl.TEXTURE_2D, depthTex, 0);
-    gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0, gl.TEXTURE_2D, textures[0], 0);
+    gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0, gl.TEXTURE_2D, textures[0], 0); // is textures[0] assigned to position?
 
     var FBOstatus = gl.checkFramebufferStatus(gl.FRAMEBUFFER);
     if (FBOstatus !== gl.FRAMEBUFFER_COMPLETE) {
@@ -121,10 +125,26 @@ CIS565WEBGLCORE.createFBO = function(){
     }
 
     gl.bindFramebuffer(gl.FRAMEBUFFER, null);
+
+    ///////////////////////////////////////////////////////
+    fbo[FBO_GBUFFER_POST] = gl.createFramebuffer();
+    gl.bindFramebuffer(gl.FRAMEBUFFER, fbo[FBO_GBUFFER_POST]);
+    gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0, gl.TEXTURE_2D, textures[5], 0);
+
+    FBOstatus = gl.checkFramebufferStatus(gl.FRAMEBUFFER);
+    if (FBOstatus !== gl.FRAMEBUFFER_COMPLETE) {
+      console.log("PBuffer FBO incomplete! Init failed!");
+      return false;
+    }
+
+    gl.bindFramebuffer(gl.FRAMEBUFFER, null);
+    ///////////////////////////////////////////////////////
+
     // Set up GBuffer Normal
     fbo[FBO_GBUFFER_NORMAL] = gl.createFramebuffer();
     gl.bindFramebuffer(gl.FRAMEBUFFER, fbo[FBO_GBUFFER_NORMAL]);
+    gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.DEPTH_ATTACHMENT, gl.TEXTURE_2D, depthTex, 0);
     gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0, gl.TEXTURE_2D, textures[1], 0);
 
     FBOstatus = gl.checkFramebufferStatus(gl.FRAMEBUFFER);
@@ -138,6 +158,7 @@ CIS565WEBGLCORE.createFBO = function(){
     // Set up GBuffer Color
     fbo[FBO_GBUFFER_COLOR] = gl.createFramebuffer();
     gl.bindFramebuffer(gl.FRAMEBUFFER, fbo[FBO_GBUFFER_COLOR]);
+    gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.DEPTH_ATTACHMENT, gl.TEXTURE_2D, depthTex, 0);
     gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0, gl.TEXTURE_2D, textures[2], 0);
 
     FBOstatus = gl.checkFramebufferStatus(gl.FRAMEBUFFER);
diff --git a/js/main.js b/js/main.js
index 4140ae1..3f7488c 100644
--- a/js/main.js
+++ b/js/main.js
@@ -15,12 +15,16 @@ var quad = {}; // Empty object for full-screen quad
 
 // Framebuffer
 var fbo = null;
+var fbo2 = null;
 
 // Shader programs
 var passProg;   // Shader program for G-Buffer
 var shadeProg;  // Shader program for P-Buffer
 var diagProg;   // Shader program from diagnostic
 var postProg;   // Shader for post-process effects
+var bloomConvolutionStep1Prog;
+var bloomConvolutionStep2Prog;
+var bloomOnePassProg;
 
 // Multi-Pass programs
 var posProg;
@@ -31,10 +35,12 @@ var isDiagnostic = true;
 var zNear = 20;
 var zFar = 2000;
 var texToDisplay = 1;
+var stats;
+var time = 1;
 
 var main = function (canvasId, messageId) {
   var canvas;
-
+
   // Initialize WebGL
   initGL(canvasId, messageId);
 
@@ -49,7 +55,8 @@ var main = function (canvasId, messageId) {
 
   // Set up shaders
   initShaders();
-
+
+  stats = initStats();
   // Register our render callbacks
   CIS565WEBGLCORE.render = render;
   CIS565WEBGLCORE.renderLoop = renderLoop;
@@ -64,6 +71,9 @@ var renderLoop = function () {
 };
 
 var render = function () {
+  if(stats != undefined)
+    stats.update();
+  time += 1.0;
   if (fbo.isMultipleTargets()) {
     renderPass();
   } else {
@@ -73,6 +83,13 @@ var render = function () {
   if (!isDiagnostic) {
     renderShade();
    renderPost();
+    if(texToDisplay == 7){ // Bloom with separable convolution
+      renderBloomOnePassConvolutionStep1();
+      renderBloomOnePassConvolutionStep2();
+    }
+    else if(texToDisplay == 6){ // Bloom with one-pass 2D convolution
+      renderBloomOnePass();
+    }
   } else {
     renderDiagnostic();
   }
@@ -154,6 +171,7 @@ var renderPass = function () {
 };
 
 var renderMulti = function () {
+
   fbo.bind(gl, FBO_GBUFFER_POSITION);
   gl.clear(gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT);
 
@@ -170,12 +188,12 @@ var renderMulti = function () {
   drawModel(posProg, 1);
 
-  gl.disable(gl.DEPTH_TEST);
+  //gl.disable(gl.DEPTH_TEST);
   fbo.unbind(gl);
   gl.useProgram(null);
 
   fbo.bind(gl, FBO_GBUFFER_NORMAL);
-  gl.clear(gl.COLOR_BUFFER_BIT);
+  gl.clear(gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT);
 
   gl.useProgram(normProg.ref());
 
@@ -193,11 +211,12 @@ var renderMulti = function () {
   fbo.unbind(gl);
 
   fbo.bind(gl, FBO_GBUFFER_COLOR);
-  gl.clear(gl.COLOR_BUFFER_BIT);
+  gl.clear(gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT);
 
   gl.useProgram(colorProg.ref());
 
   gl.uniformMatrix4fv(colorProg.uMVPLoc, false, mvpMat);
+
   drawModel(colorProg, 1);
 
@@ -215,75 +234,178 @@ var renderShade = function () {
   gl.clear(gl.COLOR_BUFFER_BIT);
 
   // Bind necessary textures
-  //gl.activeTexture( gl.TEXTURE0 );  //position
-  //gl.bindTexture( gl.TEXTURE_2D, fbo.texture(0) );
-  //gl.uniform1i( shadeProg.uPosSamplerLoc, 0 );
+  gl.activeTexture( gl.TEXTURE0 );  //position
+  gl.bindTexture( gl.TEXTURE_2D, fbo.texture(0) );
+  gl.uniform1i( shadeProg.uPosSamplerLoc, 0 );
 
-  //gl.activeTexture( gl.TEXTURE1 );  //normal
-  //gl.bindTexture( gl.TEXTURE_2D, fbo.texture(1) );
-  //gl.uniform1i( shadeProg.uNormalSamplerLoc, 1 );
+  gl.activeTexture( gl.TEXTURE1 );  //normal
+  gl.bindTexture( gl.TEXTURE_2D, fbo.texture(1) );
+  gl.uniform1i( shadeProg.uNormalSamplerLoc, 1 );
 
   gl.activeTexture( gl.TEXTURE2 );  //color
   gl.bindTexture( gl.TEXTURE_2D, fbo.texture(2) );
   gl.uniform1i( shadeProg.uColorSamplerLoc, 2 );
 
-  //gl.activeTexture( gl.TEXTURE3 );  //depth
-  //gl.bindTexture( gl.TEXTURE_2D, fbo.depthTexture() );
-  //gl.uniform1i( shadeProg.uDepthSamplerLoc, 3 );
+  gl.activeTexture( gl.TEXTURE3 );  //depth
+  gl.bindTexture( gl.TEXTURE_2D, fbo.depthTexture() );
+  gl.uniform1i( shadeProg.uDepthSamplerLoc, 3 );
 
-  // Bind necessary uniforms
-  //gl.uniform1f( shadeProg.uZNearLoc, zNear );
-  //gl.uniform1f( shadeProg.uZFarLoc, zFar );
+  //modelview
+  gl.uniformMatrix4fv(shadeProg.uModelViewLoc, false, camera.getViewTransform());
 
+  // Bind necessary uniforms
+  gl.uniform1f( shadeProg.uZNearLoc, zNear );
+  gl.uniform1f( shadeProg.uZFarLoc, zFar );
+
   drawQuad(shadeProg);
 
   // Unbind FBO
   fbo.unbind(gl);
 };
 
-var renderDiagnostic = function () {
-  gl.useProgram(diagProg.ref());
+var renderPost = function () {
+  gl.useProgram(postProg.ref());
+  gl.disable(gl.DEPTH_TEST);
+
+  // Bind FBO
+  if(texToDisplay == 7 || texToDisplay == 6)
+    fbo2.bind(gl, FBO_PBUFFER);
+
+  gl.clear(gl.COLOR_BUFFER_BIT);
+
+  gl.activeTexture( gl.TEXTURE0 );  //position
+  gl.bindTexture( gl.TEXTURE_2D, fbo.texture(0) );
+  gl.uniform1i( postProg.uPosSamplerLoc, 0 );
+
+  gl.activeTexture( gl.TEXTURE1 );  //normal
+  gl.bindTexture( gl.TEXTURE_2D, fbo.texture(1) );
+  gl.uniform1i( postProg.uNormalSamplerLoc, 1 );
+
+  // Bind necessary textures
+  gl.activeTexture( gl.TEXTURE2 );  //color
+  gl.bindTexture( gl.TEXTURE_2D, fbo.texture(2) );
+  gl.uniform1i( postProg.uColorSamplerLoc, 2 );
+
+  gl.activeTexture( gl.TEXTURE3 );  //depth
+  gl.bindTexture( gl.TEXTURE_2D, fbo.depthTexture() );
+  gl.uniform1i( postProg.uDepthSamplerLoc, 3 );
+
+  gl.activeTexture( gl.TEXTURE4 );
+  gl.bindTexture( gl.TEXTURE_2D, fbo.texture(4) );
+  gl.uniform1i(postProg.uShadeSamplerLoc, 4 );
+  gl.uniform1f(postProg.uTimeLoc, time);
+
+  //modelview
+  gl.uniformMatrix4fv(postProg.uModelViewLoc, false, camera.getViewTransform());
+
+  gl.uniform1i( postProg.uDisplayTypeLoc, texToDisplay );
+  gl.uniform1f( postProg.uZNearLoc, zNear );
+  gl.uniform1f( postProg.uZFarLoc, zFar );
+
+  drawQuad(postProg);
+
+  // Unbind FBO
+  if(texToDisplay == 7 || texToDisplay == 6)
+    fbo2.unbind(gl);
+};
 
-  gl.disable(gl.DEPTH_TEST);
-  gl.clear(gl.COLOR_BUFFER_BIT);
+var renderBloomOnePassConvolutionStep1 = function () {
+  gl.useProgram(bloomConvolutionStep1Prog.ref());
+
+  gl.disable(gl.DEPTH_TEST);
+  if(texToDisplay == 7)
+    fbo2.bind(gl, FBO_GBUFFER_COLOR);
+
+  gl.clear(gl.COLOR_BUFFER_BIT);
+
+  gl.activeTexture( gl.TEXTURE1 );
+  gl.bindTexture( gl.TEXTURE_2D, fbo.texture(4) );
+  gl.uniform1i(bloomConvolutionStep1Prog.uShadeSamplerLoc, 1 );
+
+  gl.activeTexture( gl.TEXTURE2 );
+  gl.bindTexture( gl.TEXTURE_2D, fbo2.texture(4) );
+  gl.uniform1i(bloomConvolutionStep1Prog.uPostSamplerLoc, 2 );
+
+  gl.uniform1i(bloomConvolutionStep1Prog.uDisplayTypeLoc, texToDisplay );
+
+  drawQuad(bloomConvolutionStep1Prog);
+
+  // Unbind FBO
+  if(texToDisplay == 7)
+    fbo2.unbind(gl);
+};
 
-  // Bind necessary textures
-  gl.activeTexture( gl.TEXTURE0 );  //position
-  gl.bindTexture( gl.TEXTURE_2D, fbo.texture(0) );
-  gl.uniform1i( diagProg.uPosSamplerLoc, 0 );
+var renderBloomOnePassConvolutionStep2 = function () {
+  gl.useProgram(bloomConvolutionStep2Prog.ref());
+
+  gl.disable(gl.DEPTH_TEST);
+  gl.clear(gl.COLOR_BUFFER_BIT);
+
+  gl.activeTexture( gl.TEXTURE1 );
+  gl.bindTexture( gl.TEXTURE_2D, fbo2.texture(2) );
+  gl.uniform1i(bloomConvolutionStep2Prog.uColorSamplerLoc, 1 );
+
+  gl.activeTexture( gl.TEXTURE2 );
+  gl.bindTexture( gl.TEXTURE_2D, fbo.texture(4) );
+  gl.uniform1i(bloomConvolutionStep2Prog.uShadeSamplerLoc, 2 );
+
+  gl.uniform1i(bloomConvolutionStep2Prog.uDisplayTypeLoc, texToDisplay );
+
+  drawQuad(bloomConvolutionStep2Prog);
+}
+
+var renderBloomOnePass = function () {
+  gl.useProgram(bloomOnePassProg.ref());
+
+  gl.disable(gl.DEPTH_TEST);
+  gl.clear(gl.COLOR_BUFFER_BIT);
+
+  gl.activeTexture( gl.TEXTURE1 );
+  gl.bindTexture( gl.TEXTURE_2D, fbo.texture(4) );
+  gl.uniform1i(bloomOnePassProg.uShadeSamplerLoc, 1 );
+
+  gl.activeTexture( gl.TEXTURE2 );
+  gl.bindTexture( gl.TEXTURE_2D, fbo2.texture(4) );
+  gl.uniform1i(bloomOnePassProg.uPostSamplerLoc, 2 );
+
+  gl.uniform1i(bloomOnePassProg.uDisplayTypeLoc, texToDisplay );
+
+  drawQuad(bloomOnePassProg);
+}
 
-  gl.activeTexture( gl.TEXTURE1 );  //normal
-  gl.bindTexture( gl.TEXTURE_2D, fbo.texture(1) );
-  gl.uniform1i( diagProg.uNormalSamplerLoc, 1 );
 
-  gl.activeTexture( gl.TEXTURE2 );  //color
-  gl.bindTexture( gl.TEXTURE_2D, fbo.texture(2) );
-  gl.uniform1i( diagProg.uColorSamplerLoc, 2 );
+var renderDiagnostic = function () {
+  gl.useProgram(diagProg.ref());
 
-  gl.activeTexture( gl.TEXTURE3 );  //depth
-  gl.bindTexture( gl.TEXTURE_2D, fbo.depthTexture() );
-  gl.uniform1i( diagProg.uDepthSamplerLoc, 3 );
+  gl.disable(gl.DEPTH_TEST);
 
-  // Bind necessary uniforms
-  gl.uniform1f( diagProg.uZNearLoc, zNear );
-  gl.uniform1f( diagProg.uZFarLoc, zFar );
-  gl.uniform1i( diagProg.uDisplayTypeLoc, texToDisplay );
-
-  drawQuad(diagProg);
-};
+  gl.clear(gl.COLOR_BUFFER_BIT);
 
-var renderPost = function () {
-  gl.useProgram(postProg.ref());
+  // Bind necessary textures
+  gl.activeTexture( gl.TEXTURE0 );  //position
+  gl.bindTexture( gl.TEXTURE_2D, fbo.texture(0) );
+  gl.uniform1i( diagProg.uPosSamplerLoc, 0 );
 
-  gl.disable(gl.DEPTH_TEST);
-  gl.clear(gl.COLOR_BUFFER_BIT);
+  gl.activeTexture( gl.TEXTURE1 );  //normal
+  gl.bindTexture( gl.TEXTURE_2D, fbo.texture(1) );
+  gl.uniform1i( diagProg.uNormalSamplerLoc, 1 );
 
-  // Bind necessary textures
-  gl.activeTexture( gl.TEXTURE4 );
-  gl.bindTexture( gl.TEXTURE_2D, fbo.texture(4) );
-  gl.uniform1i(postProg.uShadeSamplerLoc, 4 );
+  gl.activeTexture( gl.TEXTURE2 );  //color
+  gl.bindTexture( gl.TEXTURE_2D, fbo.texture(2) );
+  gl.uniform1i( diagProg.uColorSamplerLoc, 2 );
+
+  gl.activeTexture( gl.TEXTURE3 );  //depth
+  gl.bindTexture( gl.TEXTURE_2D, fbo.depthTexture() );
+  gl.uniform1i( diagProg.uDepthSamplerLoc, 3 );
+
+  // Bind necessary uniforms
+  gl.uniform1f( diagProg.uZNearLoc, zNear );
+  gl.uniform1f( diagProg.uZFarLoc, zFar );
+  gl.uniform1i( diagProg.uDisplayTypeLoc, texToDisplay );
 
-  drawQuad(postProg);
+  drawQuad(diagProg);
 };
 
 var initGL = function (canvasId, messageId) {
@@ -301,14 +423,14 @@ var initGL = function (canvasId, messageId) {
   // Set up WebGL stuff
   gl.viewport(0, 0, canvas.width, canvas.height);
   gl.clearColor(0.3, 0.3, 0.3, 1.0);
-  gl.enable(gl.DEPTH_TEST);
+  gl.enable(gl.DEPTH_TEST); // Enable depth testing
   gl.depthFunc(gl.LESS);
 };
 
 var initCamera = function () {
   // Setup camera
   persp = mat4.create();
-  mat4.perspective(persp, todeg(60), canvas.width / canvas.height, 0.1, 2000);
+  mat4.perspective(persp, todeg(60), canvas.width / canvas.height, 20, zFar);
 
   camera = CIS565WEBGLCORE.createCamera(CAMERA_TRACKING_TYPE);
   camera.goHome([0, 0, 4]);
@@ -318,25 +440,49 @@ var initCamera = function () {
   window.onkeydown = function (e) {
     interactor.onKeyDown(e);
     switch(e.keyCode) {
-      case 48:
+      case 48: // 0
         isDiagnostic = false;
+        texToDisplay = 0;
         break;
-      case 49:
+      case 49: // 1
        isDiagnostic = true;
        texToDisplay = 1;
        break;
-      case 50:
+      case 50: // 2
        isDiagnostic = true;
        texToDisplay = 2;
        break;
-      case 51:
+      case 51: // 3
        isDiagnostic = true;
        texToDisplay = 3;
        break;
-      case 52:
+      case 52: // 4
        isDiagnostic = true;
        texToDisplay = 4;
        break;
+      case 53: // 5
+        isDiagnostic = false;
+        texToDisplay = 5;
+        break;
+      case 54: // 6
+        isDiagnostic = false;
+        texToDisplay = 6;
+        break;
+      case 55: // 7
+        isDiagnostic = false;
+        texToDisplay = 7;
+        break;
+      case 56: // 8
+        isDiagnostic = false;
+        texToDisplay = 8;
+        break;
+      case 57: // 9
+        isDiagnostic = false;
+        texToDisplay = 9;
+        break;
     }
   }
 };
@@ -346,8 +492,9 @@ var initObjs = function () {
   objloader = CIS565WEBGLCORE.createOBJLoader();
 
   // Load the OBJ from file
-  objloader.loadFromFile(gl, "assets/models/suzanne.obj", null);
-
+  //objloader.loadFromFile(gl, "assets/models/suzanne.obj", null);
+  objloader.loadFromFile(gl, "assets/models/crytek-sponza/sponza.obj", "assets/models/crytek-sponza/sponza.mtl");
+
   // Add callback to upload the vertices once loaded
   objloader.addCallback(function () {
     model = new Model(gl, objloader);
@@ -361,7 +508,7 @@ var initObjs = function () {
   quad.ibo = gl.createBuffer();
   quad.tbo = gl.createBuffer();
 
-  gl.bindBuffer(gl.ARRAY_BUFFER, quad.vbo);
+  gl.bindBuffer(gl.ARRAY_BUFFER, quad.vbo); // gl.ARRAY_BUFFER is the target, quad.vbo is the buffer
   gl.bufferData(gl.ARRAY_BUFFER, new Float32Array(screenQuad.vertices), gl.STATIC_DRAW);
 
   gl.bindBuffer(gl.ARRAY_BUFFER, quad.tbo);
@@ -379,8 +526,8 @@ var initShaders = function () {
   passProg = CIS565WEBGLCORE.createShaderProgram();
 
   // Load the shader source asynchronously
-  passProg.loadShader(gl, "assets/shader/deferred/pass.vert", "assets/shader/deferred/pass.frag");
-
+  //passProg.loadShader(gl, "assets/shader/deferred/pass.vert", "assets/shader/deferred/pass.frag");
+  passProg.loadShader(gl, "assets/shader/deferred/pass.vert", null);
   // Register the necessary callback functions
   passProg.addCallback( function() {
     gl.useProgram(passProg.ref());
@@ -399,86 +546,140 @@ var initShaders = function () {
     CIS565WEBGLCORE.registerAsyncObj(gl, passProg);
   } else {
-    posProg = CIS565WEBGLCORE.createShaderProgram();
-    posProg.loadShader(gl, "assets/shader/deferred/posPass.vert", "assets/shader/deferred/posPass.frag");
-    posProg.addCallback(function() {
-      posProg.aVertexPosLoc = gl.getAttribLocation(posProg.ref(), "a_pos");
-
-      posProg.uModelViewLoc = gl.getUniformLocation(posProg.ref(), "u_modelview");
-      posProg.uMVPLoc = gl.getUniformLocation(posProg.ref(), "u_mvp");
-    });
-
-    CIS565WEBGLCORE.registerAsyncObj(gl, posProg);
-
-    normProg = CIS565WEBGLCORE.createShaderProgram();
-    normProg.loadShader(gl, "assets/shader/deferred/normPass.vert", "assets/shader/deferred/normPass.frag");
-    normProg.addCallback(function() {
-      normProg.aVertexPosLoc = gl.getAttribLocation(normProg.ref(), "a_pos");
-      normProg.aVertexNormalLoc = gl.getAttribLocation(normProg.ref(), "a_normal");
+    posProg = CIS565WEBGLCORE.createShaderProgram();
+    posProg.loadShader(gl, "assets/shader/deferred/posPass.vert", "assets/shader/deferred/posPass.frag");
+    posProg.addCallback(function() {
+      posProg.aVertexPosLoc = gl.getAttribLocation(posProg.ref(), "a_pos");
+
+      posProg.uModelViewLoc = gl.getUniformLocation(posProg.ref(), "u_modelview");
+      posProg.uMVPLoc = gl.getUniformLocation(posProg.ref(), "u_mvp");
+    });
+
+    CIS565WEBGLCORE.registerAsyncObj(gl, posProg);
+
+    normProg = CIS565WEBGLCORE.createShaderProgram();
+    normProg.loadShader(gl, "assets/shader/deferred/normPass.vert", "assets/shader/deferred/normPass.frag");
+    normProg.addCallback(function() {
+      normProg.aVertexPosLoc = gl.getAttribLocation(normProg.ref(), "a_pos");
+      normProg.aVertexNormalLoc = gl.getAttribLocation(normProg.ref(), "a_normal");
+
+      normProg.uMVPLoc = gl.getUniformLocation(normProg.ref(), "u_mvp");
+      normProg.uNormalMatLoc = gl.getUniformLocation(normProg.ref(), "u_normalMat");
+    });
+
+    CIS565WEBGLCORE.registerAsyncObj(gl, normProg);
+
+    colorProg = CIS565WEBGLCORE.createShaderProgram();
+    colorProg.loadShader(gl, "assets/shader/deferred/colorPass.vert", "assets/shader/deferred/colorPass.frag");
+    colorProg.addCallback(function(){
+      colorProg.aVertexPosLoc = gl.getAttribLocation(colorProg.ref(), "a_pos");
+
+      colorProg.uMVPLoc = gl.getUniformLocation(colorProg.ref(), "u_mvp");
+    });
+
+    CIS565WEBGLCORE.registerAsyncObj(gl, colorProg);
+  }
+
+  // Create shader program for diagnostic
+  diagProg = CIS565WEBGLCORE.createShaderProgram();
+  diagProg.loadShader(gl, "assets/shader/deferred/quad.vert", "assets/shader/deferred/diagnostic.frag");
+  diagProg.addCallback( function() {
+    diagProg.aVertexPosLoc = gl.getAttribLocation( diagProg.ref(), "a_pos" );
+    diagProg.aVertexTexcoordLoc = gl.getAttribLocation( diagProg.ref(), "a_texcoord" );
+
+    diagProg.uPosSamplerLoc = gl.getUniformLocation( diagProg.ref(), "u_positionTex");
+    diagProg.uNormalSamplerLoc = gl.getUniformLocation( diagProg.ref(), "u_normalTex");
+    diagProg.uColorSamplerLoc = gl.getUniformLocation( diagProg.ref(), "u_colorTex");
+    diagProg.uDepthSamplerLoc = gl.getUniformLocation( diagProg.ref(), "u_depthTex");
+
+    diagProg.uZNearLoc = gl.getUniformLocation( diagProg.ref(), "u_zNear" );
+    diagProg.uZFarLoc = gl.getUniformLocation( diagProg.ref(), "u_zFar" );
+    diagProg.uDisplayTypeLoc = gl.getUniformLocation( diagProg.ref(), "u_displayType" );
+  });
+  CIS565WEBGLCORE.registerAsyncObj(gl, diagProg);
+
+  // Create shader program for shade
+  shadeProg = CIS565WEBGLCORE.createShaderProgram();
+  shadeProg.loadShader(gl, "assets/shader/deferred/quad.vert", "assets/shader/deferred/diffuse.frag");
+  shadeProg.addCallback( function() {
+    shadeProg.aVertexPosLoc = gl.getAttribLocation( shadeProg.ref(), "a_pos" );
+    shadeProg.aVertexTexcoordLoc = gl.getAttribLocation( shadeProg.ref(), "a_texcoord" );
+
+    shadeProg.uPosSamplerLoc = gl.getUniformLocation( shadeProg.ref(), "u_positionTex");
+    shadeProg.uNormalSamplerLoc = gl.getUniformLocation( shadeProg.ref(), "u_normalTex");
+    shadeProg.uColorSamplerLoc = gl.getUniformLocation( shadeProg.ref(), "u_colorTex");
+    shadeProg.uDepthSamplerLoc = gl.getUniformLocation( shadeProg.ref(), "u_depthTex");
+    shadeProg.uModelViewLoc = gl.getUniformLocation(shadeProg.ref(), "u_modelview");
+
+    shadeProg.uZNearLoc = gl.getUniformLocation( shadeProg.ref(), "u_zNear" );
+    shadeProg.uZFarLoc = gl.getUniformLocation( shadeProg.ref(), "u_zFar" );
+    shadeProg.uDisplayTypeLoc = gl.getUniformLocation( shadeProg.ref(), "u_displayType" );
+  });
+  CIS565WEBGLCORE.registerAsyncObj(gl, shadeProg);
+
+  // Create shader program for post-process
+  postProg = CIS565WEBGLCORE.createShaderProgram();
+  postProg.loadShader(gl, "assets/shader/deferred/quad.vert", "assets/shader/deferred/post.frag");
+  postProg.addCallback( function() {
+    postProg.aVertexPosLoc = gl.getAttribLocation( postProg.ref(), "a_pos" );
+    postProg.aVertexTexcoordLoc = gl.getAttribLocation( postProg.ref(), "a_texcoord" );
+
+    postProg.uPosSamplerLoc = gl.getUniformLocation( postProg.ref(), "u_positionTex");
+    postProg.uNormalSamplerLoc = gl.getUniformLocation( postProg.ref(), "u_normalTex");
+    postProg.uColorSamplerLoc = gl.getUniformLocation( postProg.ref(), "u_colorTex");
+    postProg.uDepthSamplerLoc = gl.getUniformLocation( postProg.ref(), "u_depthTex");
+    postProg.uModelViewLoc = gl.getUniformLocation(postProg.ref(), "u_modelview");
+    postProg.uZNearLoc = gl.getUniformLocation( postProg.ref(), "u_zNear" );
+    postProg.uZFarLoc = gl.getUniformLocation( postProg.ref(), "u_zFar" );
+    postProg.uTimeLoc = gl.getUniformLocation( postProg.ref(), "u_time" );
+
+    postProg.uShadeSamplerLoc = gl.getUniformLocation( postProg.ref(), "u_shadeTex");
+    postProg.uDisplayTypeLoc = gl.getUniformLocation( postProg.ref(), "u_displayType" );
+  });
+  CIS565WEBGLCORE.registerAsyncObj(gl, postProg);
+
+  // Create shader program for bloomConvolutionStep1-process
+  bloomConvolutionStep1Prog = CIS565WEBGLCORE.createShaderProgram();
+  bloomConvolutionStep1Prog.loadShader(gl, "assets/shader/deferred/quad.vert", "assets/shader/deferred/bloomConvolutionStep1.frag");
+  bloomConvolutionStep1Prog.addCallback( function() {
+    bloomConvolutionStep1Prog.aVertexPosLoc = gl.getAttribLocation( bloomConvolutionStep1Prog.ref(), "a_pos" );
+    bloomConvolutionStep1Prog.aVertexTexcoordLoc = gl.getAttribLocation( bloomConvolutionStep1Prog.ref(), "a_texcoord" );
+
+    bloomConvolutionStep1Prog.uPostSamplerLoc = gl.getUniformLocation(bloomConvolutionStep1Prog.ref(), "u_postTex");
+    bloomConvolutionStep1Prog.uShadeSamplerLoc = gl.getUniformLocation(bloomConvolutionStep1Prog.ref(), "u_shadeTex");
+    bloomConvolutionStep1Prog.uDisplayTypeLoc = gl.getUniformLocation(bloomConvolutionStep1Prog.ref(), "u_displayType" );
+  });
+  CIS565WEBGLCORE.registerAsyncObj(gl, bloomConvolutionStep1Prog);
+
+  // Create shader program for bloomConvolutionStep2-process
+  bloomConvolutionStep2Prog = CIS565WEBGLCORE.createShaderProgram();
+  bloomConvolutionStep2Prog.loadShader(gl, "assets/shader/deferred/quad.vert", "assets/shader/deferred/bloomConvolutionStep2.frag");
+  bloomConvolutionStep2Prog.addCallback( function() {
+    bloomConvolutionStep2Prog.aVertexPosLoc = gl.getAttribLocation( bloomConvolutionStep2Prog.ref(), "a_pos" );
+    bloomConvolutionStep2Prog.aVertexTexcoordLoc = gl.getAttribLocation( bloomConvolutionStep2Prog.ref(), "a_texcoord" );
+
+    bloomConvolutionStep2Prog.uColorSamplerLoc = gl.getUniformLocation(bloomConvolutionStep2Prog.ref(), "u_colorTex");
+    bloomConvolutionStep2Prog.uShadeSamplerLoc = gl.getUniformLocation(bloomConvolutionStep2Prog.ref(), "u_shadeTex");
+    bloomConvolutionStep2Prog.uDisplayTypeLoc = gl.getUniformLocation(bloomConvolutionStep2Prog.ref(), "u_displayType" );
+  });
+  CIS565WEBGLCORE.registerAsyncObj(gl, bloomConvolutionStep2Prog);
+
+  // Create shader program for bloomOnePass-process
+  bloomOnePassProg = CIS565WEBGLCORE.createShaderProgram();
+  bloomOnePassProg.loadShader(gl, "assets/shader/deferred/quad.vert", "assets/shader/deferred/bloomOnePass.frag");
+  bloomOnePassProg.addCallback( function() {
+    bloomOnePassProg.aVertexPosLoc = gl.getAttribLocation( bloomOnePassProg.ref(), "a_pos" );
+    bloomOnePassProg.aVertexTexcoordLoc = gl.getAttribLocation( bloomOnePassProg.ref(), "a_texcoord" );
+
+    bloomOnePassProg.uPostSamplerLoc = gl.getUniformLocation(bloomOnePassProg.ref(), "u_postTex");
+    bloomOnePassProg.uShadeSamplerLoc = gl.getUniformLocation(bloomOnePassProg.ref(), "u_shadeTex");
+    bloomOnePassProg.uDisplayTypeLoc = gl.getUniformLocation(bloomOnePassProg.ref(), "u_displayType" );
+  });
+  CIS565WEBGLCORE.registerAsyncObj(gl, bloomOnePassProg);
-
-      normProg.uMVPLoc = gl.getUniformLocation(normProg.ref(), "u_mvp");
-      normProg.uNormalMatLoc = gl.getUniformLocation(normProg.ref(), "u_normalMat");
-    });
-
-    CIS565WEBGLCORE.registerAsyncObj(gl, normProg);
-
-    colorProg = CIS565WEBGLCORE.createShaderProgram();
-    colorProg.loadShader(gl, "assets/shader/deferred/colorPass.vert", "assets/shader/deferred/colorPass.frag");
-    colorProg.addCallback(function(){
-      colorProg.aVertexPosLoc = gl.getAttribLocation(colorProg.ref(), "a_pos");
-
-      colorProg.uMVPLoc = gl.getUniformLocation(colorProg.ref(), "u_mvp");
-    });
-
-    CIS565WEBGLCORE.registerAsyncObj(gl, colorProg);
-  }
-
-  // Create shader program for diagnostic
-  diagProg = CIS565WEBGLCORE.createShaderProgram();
-  diagProg.loadShader(gl, "assets/shader/deferred/quad.vert", "assets/shader/deferred/diagnostic.frag");
-  diagProg.addCallback( function() {
-    diagProg.aVertexPosLoc = gl.getAttribLocation( diagProg.ref(), "a_pos" );
-    diagProg.aVertexTexcoordLoc = gl.getAttribLocation( diagProg.ref(), "a_texcoord" );
-
-    diagProg.uPosSamplerLoc = gl.getUniformLocation( diagProg.ref(), "u_positionTex");
-    diagProg.uNormalSamplerLoc = gl.getUniformLocation( diagProg.ref(), "u_normalTex");
-    diagProg.uColorSamplerLoc = gl.getUniformLocation( diagProg.ref(), "u_colorTex");
-    diagProg.uDepthSamplerLoc = gl.getUniformLocation( diagProg.ref(), "u_depthTex");
-
-    diagProg.uZNearLoc = gl.getUniformLocation( diagProg.ref(), "u_zNear" );
-    diagProg.uZFarLoc = gl.getUniformLocation( diagProg.ref(), "u_zFar" );
-    diagProg.uDisplayTypeLoc = gl.getUniformLocation( diagProg.ref(), "u_displayType" );
-  });
-  CIS565WEBGLCORE.registerAsyncObj(gl, diagProg);
-
-  // Create shader program for shade
-  shadeProg = CIS565WEBGLCORE.createShaderProgram();
-  shadeProg.loadShader(gl, "assets/shader/deferred/quad.vert", "assets/shader/deferred/diffuse.frag");
-  shadeProg.addCallback( function() {
-    shadeProg.aVertexPosLoc = gl.getAttribLocation( shadeProg.ref(), "a_pos" );
-    shadeProg.aVertexTexcoordLoc = gl.getAttribLocation( shadeProg.ref(), "a_texcoord" );
-
-    shadeProg.uPosSamplerLoc = gl.getUniformLocation( shadeProg.ref(), "u_positionTex");
-    shadeProg.uNormalSamplerLoc = gl.getUniformLocation( shadeProg.ref(), "u_normalTex");
-    shadeProg.uColorSamplerLoc = gl.getUniformLocation( shadeProg.ref(), "u_colorTex");
-    shadeProg.uDepthSamplerLoc = gl.getUniformLocation( shadeProg.ref(), "u_depthTex");
-
-    shadeProg.uZNearLoc = gl.getUniformLocation( shadeProg.ref(), "u_zNear" );
-    shadeProg.uZFarLoc = gl.getUniformLocation( shadeProg.ref(), "u_zFar" );
-    shadeProg.uDisplayTypeLoc = gl.getUniformLocation( shadeProg.ref(), "u_displayType" );
-  });
-  CIS565WEBGLCORE.registerAsyncObj(gl, shadeProg);
-
-  // Create shader program for post-process
-  postProg = CIS565WEBGLCORE.createShaderProgram();
-  postProg.loadShader(gl, "assets/shader/deferred/quad.vert", "assets/shader/deferred/post.frag");
-  postProg.addCallback( function() {
-    postProg.aVertexPosLoc = gl.getAttribLocation( postProg.ref(), "a_pos" );
-    postProg.aVertexTexcoordLoc = gl.getAttribLocation( postProg.ref(), "a_texcoord" );
-
-    postProg.uShadeSamplerLoc = gl.getUniformLocation( postProg.ref(), "u_shadeTex");
-  });
-  CIS565WEBGLCORE.registerAsyncObj(gl, postProg);
 };
 
 var initFramebuffer = function () {
@@ -487,4 +688,25 @@ var initFramebuffer = function () {
     console.log("FBO Initialization failed");
     return;
   }
+
+  fbo2 = CIS565WEBGLCORE.createFBO();
+  if (!fbo2.initialize(gl, canvas.width, canvas.height)) {
+    console.log("FBO2 Initialization failed");
+    return;
+  }
 };
+
+function 
initStats() { + stats = new Stats(); + stats.setMode(0); // 0: fps, 1: ms + + // Align top-left + stats.domElement.style.position = 'absolute'; + stats.domElement.style.left = '0px'; + stats.domElement.style.top = '0px'; + + document.body.appendChild( stats.domElement ); + + + return stats; +} diff --git a/js/screenQuad.js b/js/screenQuad.js index dde90a5..abd291b 100644 --- a/js/screenQuad.js +++ b/js/screenQuad.js @@ -8,10 +8,14 @@ var screenQuad ={ -1.0, 1.0, 0.0 ], texcoords:[ - 1.0, 0.0, - 0.0, 0.0, - 0.0, 1.0, - 1.0, 1.0 + 0.0, 0.0, + 1.0, 0.0, + 1.0, 1.0, + 0.0, 1.0 + //1.0, 0.0, + //0.0, 0.0, + //0.0, 1.0, + //1.0, 1.0 ], indices: [ 0, 1, 3, diff --git a/js/stats.min.js b/js/stats.min.js new file mode 100644 index 0000000..5c315f5 --- /dev/null +++ b/js/stats.min.js @@ -0,0 +1,6 @@ +// stats.js - http://github.com/mrdoob/stats.js +var Stats=function(){var l=Date.now(),m=l,g=0,n=Infinity,o=0,h=0,p=Infinity,q=0,r=0,s=0,f=document.createElement("div");f.id="stats";f.addEventListener("mousedown",function(b){b.preventDefault();t(++s%2)},!1);f.style.cssText="width:80px;opacity:0.9;cursor:pointer";var a=document.createElement("div");a.id="fps";a.style.cssText="padding:0 0 3px 3px;text-align:left;background-color:#002";f.appendChild(a);var i=document.createElement("div");i.id="fpsText";i.style.cssText="color:#0ff;font-family:Helvetica,Arial,sans-serif;font-size:9px;font-weight:bold;line-height:15px"; +i.innerHTML="FPS";a.appendChild(i);var c=document.createElement("div");c.id="fpsGraph";c.style.cssText="position:relative;width:74px;height:30px;background-color:#0ff";for(a.appendChild(c);74>c.children.length;){var j=document.createElement("span");j.style.cssText="width:1px;height:30px;float:left;background-color:#113";c.appendChild(j)}var d=document.createElement("div");d.id="ms";d.style.cssText="padding:0 0 3px 3px;text-align:left;background-color:#020;display:none";f.appendChild(d);var k=document.createElement("div"); 
+k.id="msText";k.style.cssText="color:#0f0;font-family:Helvetica,Arial,sans-serif;font-size:9px;font-weight:bold;line-height:15px";k.innerHTML="MS";d.appendChild(k);var e=document.createElement("div");e.id="msGraph";e.style.cssText="position:relative;width:74px;height:30px;background-color:#0f0";for(d.appendChild(e);74>e.children.length;)j=document.createElement("span"),j.style.cssText="width:1px;height:30px;float:left;background-color:#131",e.appendChild(j);var t=function(b){s=b;switch(s){case 0:a.style.display=
+"block";d.style.display="none";break;case 1:a.style.display="none",d.style.display="block"}};return{REVISION:11,domElement:f,setMode:t,begin:function(){l=Date.now()},end:function(){var b=Date.now();g=b-l;n=Math.min(n,g);o=Math.max(o,g);k.textContent=g+" MS ("+n+"-"+o+")";var a=Math.min(30,30-30*(g/200));e.appendChild(e.firstChild).style.height=a+"px";r++;b>m+1E3&&(h=Math.round(1E3*r/(b-m)),p=Math.min(p,h),q=Math.max(q,h),i.textContent=h+" FPS ("+p+"-"+q+")",a=Math.min(30,30-30*(h/100)),c.appendChild(c.firstChild).style.height=
+a+"px",m=b,r=0);return b},update:function(){l=this.end()}}};
\ No newline at end of file
diff --git a/result/bloom.jpg b/result/bloom.jpg
new file mode 100644
index 0000000..a448eb0
Binary files /dev/null and b/result/bloom.jpg differ
diff --git a/result/bloom2.jpg b/result/bloom2.jpg
new file mode 100644
index 0000000..fe847af
Binary files /dev/null and b/result/bloom2.jpg differ
diff --git a/result/depth.jpg b/result/depth.jpg
new file mode 100644
index 0000000..f0cdc14
Binary files /dev/null and b/result/depth.jpg differ
diff --git a/result/diffuse_bling.jpg b/result/diffuse_bling.jpg
new file mode 100644
index 0000000..71cd62b
Binary files /dev/null and b/result/diffuse_bling.jpg differ
diff --git a/result/performance.jpg b/result/performance.jpg
new file mode 100644
index 0000000..29a49f8
Binary files /dev/null and b/result/performance.jpg differ
diff --git a/result/performance2.jpg b/result/performance2.jpg
new file mode 100644
index 0000000..c2cf2e4
Binary files /dev/null and b/result/performance2.jpg differ
diff --git a/result/silhouette.jpg b/result/silhouette.jpg
new file mode 100644
index 0000000..1daaa5a
Binary files /dev/null and b/result/silhouette.jpg differ
diff --git a/result/ssao.jpg b/result/ssao.jpg
new file mode 100644
index 0000000..a972ed8
Binary files /dev/null and b/result/ssao.jpg differ
diff --git a/result/toon.jpg b/result/toon.jpg
new file mode 100644
index 0000000..4860f57
Binary files /dev/null and b/result/toon.jpg differ
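
Note on the diff above: the `bloomConvolutionStep1Prog`/`bloomConvolutionStep2Prog` programs are the separable (two-pass) bloom blur discussed in the README's NOTE, while `bloomOnePassProg` is the single-pass 2D baseline. The reason the two-step variant can win is that a Gaussian kernel factors into a row pass followed by a column pass, costing 2k texture reads per pixel instead of k². A minimal CPU-side sketch of that equivalence — hypothetical helper names, not code from this repo, assuming clamp-to-edge sampling like the FBO textures:

```javascript
// Build a normalized 1D Gaussian kernel of (2*radius + 1) weights.
function gaussian1D(radius, sigma) {
  const k = [];
  let sum = 0;
  for (let i = -radius; i <= radius; i++) {
    const w = Math.exp(-(i * i) / (2 * sigma * sigma));
    k.push(w);
    sum += w;
  }
  return k.map((w) => w / sum); // weights sum to 1
}

// Clamp-to-edge sampling, mirroring GL_CLAMP_TO_EDGE on the render targets.
function sample(img, w, h, x, y) {
  x = Math.min(w - 1, Math.max(0, x));
  y = Math.min(h - 1, Math.max(0, y));
  return img[y * w + x];
}

// One-pass 2D convolution: k*k reads per pixel (the bloomOnePass approach).
function blur2D(img, w, h, k1d) {
  const r = (k1d.length - 1) / 2;
  const out = new Float64Array(w * h);
  for (let y = 0; y < h; y++)
    for (let x = 0; x < w; x++) {
      let acc = 0;
      for (let j = -r; j <= r; j++)
        for (let i = -r; i <= r; i++)
          acc += k1d[j + r] * k1d[i + r] * sample(img, w, h, x + i, y + j);
      out[y * w + x] = acc;
    }
  return out;
}

// Separable version: horizontal pass into an intermediate buffer (step 1),
// then vertical pass (step 2) -- 2k reads per pixel.
function blurSeparable(img, w, h, k1d) {
  const r = (k1d.length - 1) / 2;
  const tmp = new Float64Array(w * h);
  const out = new Float64Array(w * h);
  for (let y = 0; y < h; y++)
    for (let x = 0; x < w; x++) {
      let acc = 0;
      for (let i = -r; i <= r; i++) acc += k1d[i + r] * sample(img, w, h, x + i, y);
      tmp[y * w + x] = acc;
    }
  for (let y = 0; y < h; y++)
    for (let x = 0; x < w; x++) {
      let acc = 0;
      for (let j = -r; j <= r; j++) acc += k1d[j + r] * sample(tmp, w, h, x, y + j);
      out[y * w + x] = acc;
    }
  return out;
}
```

On the GPU the intermediate buffer becomes the extra render target (`fbo2` above), which is where the render/read overhead mentioned in the README comes from; for small kernels that overhead can outweigh the saved arithmetic.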