
Ambient Occlusion doesn't work! #10106

Open · happyfeet1996 opened this issue Feb 15, 2022 · 42 comments · May be fixed by #12316

@happyfeet1996 commented Feb 15, 2022

Sandcastle example: Ambient Occlusion

Browser: Chrome

Operating System: macOS

[screenshots]

When I enable ambient occlusion, the whole screen turns black and nothing is visible.

@ggetz (Contributor) commented Feb 15, 2022

@happyfeet1996 I'm unable to reproduce on MacOS in Chrome. Can you provide some more details about your system by visiting this page and taking a screenshot?

@happyfeet1996 (Author)

> @happyfeet1996 I'm unable to reproduce on MacOS in Chrome. Can you provide some more details about your system by visiting this page and taking a screenshot?
[screenshots: system and WebGL report details]

@happyfeet1996 (Author)

It works today; maybe my network was the problem that day.

@sanjeetsuhag (Contributor)

Reopening this issue since it does seem to occur on certain hardware. On my MacBook Pro (16-inch, 2019) running macOS 12.2.1, I see the following results:

| Browser | Sandcastle | WebGL Report |
| --- | --- | --- |
| Chrome 100.0.4896.60 | [screenshot] | [screenshot] |
| Firefox 98.0.2 | [screenshot] | [screenshot] |
| Safari Version 15.3 (17612.4.9.1.8) | [screenshot] | [screenshot] |

[screenshot]

@shotamatsuda commented Apr 14, 2023

I can also reproduce this issue on macOS Chrome with an eGPU. Specifying the fragment alpha in AmbientOcclusionModulate seems to resolve it:

- out_FragColor.rgb = ambientOcclusionOnly ? ao : ao * color;
+ out_FragColor = vec4(ambientOcclusionOnly ? ao : ao * color, 1.0);

@ggetz (Contributor) commented Apr 17, 2023

Thanks for the info @shotamatsuda.

@jjhembd Could you please verify the potential fix in the comment above, and recommend a PR if appropriate?

@jjhembd (Contributor) commented Apr 18, 2023

Hi @shotamatsuda, thanks for the tip. However, on my machine (Chrome on Windows), I get the same image even if I set the alpha to 0.0:

    out_FragColor.rgb = ambientOcclusionOnly ? ao : ao * color;
    out_FragColor.a = 0.0;

[screenshot]

Does zero alpha also work on your machine, or is the 1.0 value important?

@shotamatsuda commented Apr 18, 2023

@jjhembd On my specific hardware, setting the alpha to zero results in a black screen, just like not setting the value at all.

The importance of 1.0 depends on the blending function of the framebuffer. I haven't fully traced where blending is applied for framebuffers used in the post-processing stage, but setting true as the default value for PassState.blendingEnabled also resolves the issue.
https://github.com/CesiumGS/cesium/blob/1.104/packages/engine/Source/Renderer/PassState.js#L36

This suggests to me that the blending function might not be defined, leading to a hardware or driver-dependent issue. Specifying the alpha value explicitly compensates for it.

@shotamatsuda

For the record, my setup is macOS 13.3.1, Chrome 112.0.5615.49, and AMD Radeon RX 6800 XT GPU. I encountered the issue before Chrome 100 on macOS Monterey with the same eGPU, and it doesn't reproduce in Safari or Firefox. I first noticed the issue around Cesium 1.89.0, but I'm unsure if it is caused by the Chrome version update.

[screenshot]

@jjhembd (Contributor) commented Apr 20, 2023

Hi @shotamatsuda, thanks again for all the info.

I think it was a mistake in AmbientOcclusionModulate.glsl to leave out_FragColor.a undefined. We don't do this in any other shaders.

There are two input textures in AmbientOcclusionModulate:

  1. colorTexture is the rendered scene before the ambient occlusion stage.
  2. ambientOcclusionTexture is the shading factor computed in AmbientOcclusionGenerate.glsl

ambientOcclusionTexture has its alpha value set to a constant 1.0. I think it would make sense to preserve the alpha of colorTexture (which will usually also be 1.0, but not always). This would simplify AmbientOcclusionModulate to read as follows:

    vec4 color = texture(colorTexture, v_textureCoordinates);
    vec4 ao = texture(ambientOcclusionTexture, v_textureCoordinates);
    out_FragColor = ambientOcclusionOnly ? ao : ao * color;

The above code gives the same result as before on my setup. Can you please verify if it works on your hardware? If so, we should be ready for a PR.

@shotamatsuda

@jjhembd Yes, I confirmed it works on my hardware, and that should solve the OP's issue. Deriving alpha from colorTexture makes sense to me.

@ggetz ggetz moved this from Needs triage to In Progress in CesiumJS Issue/PR backlog Apr 21, 2023
@ggetz ggetz moved this from In Progress to Notable backlog items in CesiumJS Issue/PR backlog Apr 24, 2023
@ggetz (Contributor) commented Sep 29, 2023

@shotamatsuda

The major problem is the quality of the stochastic noise and the lack of denoising.
I made a library that implements Nvidia's HBAO and cross-bilateral filtering for Cesium. It's not a generic library, but it illustrates the concept.
https://github.com/takram-design-engineering/plateau-view/tree/main/libs/cesium-hbao
Live demo: https://plateau.takram.com

HBAO with cross-bilateral filter:
[screenshot]

HBAO without cross-bilateral filter:
[screenshot]

No HBAO:
[screenshot]

@ggetz (Contributor) commented Sep 29, 2023

Awesome demo @shotamatsuda! Would you have any interest in contributing this AO algorithm implementation?

@jjhembd (Contributor) commented Aug 16, 2024

One issue with our current implementation: it computes a non-zero obscurance on flat surfaces.
[screenshot]

This may be related to errors in the depth buffer. We are computing obscurance based on the "horizon angle" relative to the local surface normal, as reconstructed from screen position and the depth buffer. On a flat surface, the horizon angle should be 90° from the normal, and the obscurance should be zero. However, the reconstructed normals don't look right. Here is a rendering of the difference between the reconstructed normals at adjacent pixels.
[screenshot]

It shows non-zero values at the edges/curves of the model, as expected. But flat surfaces also show significant noise.
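
For reference, a minimal sketch of the kind of normal reconstruction being checked here. `clipToEye` is the shader's existing helper (quoted further down in this thread); `u_depthTexture` and `u_pixelSize` are assumed names, not necessarily what AmbientOcclusionGenerate.glsl uses:

    // Sketch: reconstruct a view-space normal from the depth buffer.
    // u_depthTexture and u_pixelSize are assumed names.
    vec3 reconstructNormal(vec2 uv)
    {
        float depthC = texture(u_depthTexture, uv).r;
        float depthR = texture(u_depthTexture, uv + vec2(u_pixelSize.x, 0.0)).r;
        float depthU = texture(u_depthTexture, uv + vec2(0.0, u_pixelSize.y)).r;

        vec3 posC = clipToEye(uv, depthC).xyz;
        vec3 posR = clipToEye(uv + vec2(u_pixelSize.x, 0.0), depthR).xyz;
        vec3 posU = clipToEye(uv + vec2(0.0, u_pixelSize.y), depthU).xyz;

        // Tangents along screen X and Y; their cross product is the normal.
        // Any depth-buffer noise feeds directly into these difference vectors,
        // which is why noisy depth produces noisy normals on flat surfaces.
        return normalize(cross(posR - posC, posU - posC));
    }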

@ggetz (Contributor) commented Aug 16, 2024

CC #11067?

@jjhembd (Contributor) commented Aug 16, 2024

The reconstructed normals are better if we don't use a log depth buffer.

Current result, with log depth buffer:
[screenshot]

After forcing Scene.defaultLogDepthBuffer = false:
[screenshot]

@yoyomule

I hope AO will get better quality and performance; it is a good way to improve the look of Cesium models, but at present it needs optimization on both fronts. Thank you for your efforts.

@jjhembd (Contributor) commented Aug 31, 2024

As discussed offline, the depth buffer for post-processing has a precision problem. To address this, we will try the following (in order):

  • Use a higher precision depth buffer during the initial rendering. We currently use the DEPTH24_STENCIL8 format, corresponding to the UNSIGNED_INT_24_8 data type, which is available in both WebGL2 and WebGL1 (with the WEBGL_depth_texture extension). In a WebGL2 context, we can try the DEPTH32F_STENCIL8 format, corresponding to the FLOAT_32_UNSIGNED_INT_24_8_REV data type.
  • Copy the depth buffer from each render pass into a floating point texture, instead of converting it to fixed precision and packing it to an RGBA texture.
  • Fix the depth buffer for multi-frustum rendering (mainly relevant for linear depth buffers). We currently copy the raw result of each frustum to the same RGBA texture, without accounting for the different near and far planes that the value is referenced to. Once we switch to a floating point texture, we can project the depth value back to eye coordinates before copying, so that all frustums will be written in the same coordinate system (see the sketch after this list).
  • Try the EXT_clip_control extension (only working in recent versions of Chrome).
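
As a rough illustration of the third item, unprojecting a perspective depth value back to eye space is standard math along these lines (a sketch; the uniform names are assumptions):

    uniform float u_near;  // near plane of the frustum being copied (assumed name)
    uniform float u_far;   // far plane of the frustum being copied (assumed name)

    // Sketch: map a [0, 1] depth-buffer value back to an eye-space distance,
    // so that depths copied from different frustums share one coordinate system.
    float depthToEyeDistance(float depth)
    {
        float ndcZ = depth * 2.0 - 1.0;  // [0, 1] -> [-1, 1]
        return 2.0 * u_near * u_far / (u_far + u_near - ndcZ * (u_far - u_near));
    }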

@jjhembd (Contributor) commented Aug 31, 2024

Here is an updated Sandcastle displaying the gradient of the depth buffer.

Log depth shows strong (flickering) noise in the distance, as well as some spuriously large gradients on smooth surfaces nearer the camera.
[screenshot]

Linear depth reduces the noise nearer the camera. However, only the frustum nearest the camera is valid. The back frustum is washed out.
[screenshot]

@jjhembd (Contributor) commented Aug 31, 2024

I updated the depth buffer to use DEPTH32F_STENCIL8 when in a WebGL2 context. Note that this doubles the size of the depth buffer: because of alignment issues, the values are stored as 64 bits per pixel.
Unfortunately, it does not have any effect on the noise.

DEPTH24_STENCIL8:
[screenshot]

DEPTH32F_STENCIL8:
[screenshot]

See the depth-precision branch.

@jjhembd (Contributor) commented Sep 12, 2024

@ggetz highlighted this commit, before which AO did not show a noisy background on flat surfaces. Here is a quick screenshot of AO from a prior commit (70fc40e):
[screenshot]
Note the absence of any occlusion on flat surfaces, even with the "Bias" set to zero.

For comparison, here is a screenshot from the current main branch:
[screenshot]

And here is the result using the DEPTH32F_STENCIL8 format and a 1m near plane (instead of 0.1 m), both of which reduce the noise in the depth buffer:
[screenshot]

While some minor artifacts are reduced by the DEPTH32F_STENCIL8 format and a 1m near plane, the main problem is still there: a background noisiness on flat surfaces. This appears to be a problem introduced in #9966.

I also captured the gradient of the depth buffer prior to #9966. It does not look significantly different from the depth buffer from the current main branch.
[screenshot]

The background noise problem appears to be something more fundamental. Perhaps a depth buffer is being overwritten somewhere?

@jjhembd (Contributor) commented Sep 12, 2024

The background noise is generated when the HBAO stepping is along a direction not aligned with screen X/Y.

Here is the AO output with stepping only along X/Y. This is achieved by overriding the "randomVal" texture and using a constant rotation of 0 at every sample.
[screenshot]

Note the significant reduction in the background on flat surfaces. The remaining background is the result of the noise in the depth buffer. It is reduced by setting the near plane to 1m (instead of 0.1m):
[screenshot]

This is already approaching the result before #9966. One possible explanation is that before #9966, we had 2 bugs canceling each other:

  1. The random texture was not being read correctly, leading to a rotation of 0 (or nearly 0) being used at every pixel.
  2. The AO was already incorrect along non-screen axes, but this bug was hidden by the random texture problem.

Then #9966 fixed the first bug, exposing the second bug. But this is speculation; actually tracing texture handling through the major refactor in #9966 would take significant time.

Here is the result using a constant rotation of czm_piOverFour:
[screenshot]

Note the (incorrect) non-zero values on every flat surface. The current noisy result in main is a random mix of various non-zero values at different rotations.
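
For context, the per-pixel rotation being overridden in these tests amounts to something like the following sketch; the mapping from the noise value to an angle is an assumption about the shader's intent:

    // Sketch: rotate an HBAO sampling direction by a per-pixel random angle.
    // In the real shader randomVal comes from the "randomVal" noise texture;
    // the tests above replace it with a constant (0.0 or czm_piOverFour).
    vec2 rotateSampleDirection(vec2 direction, float randomVal)
    {
        float angle = randomVal * czm_pi;  // assumed mapping from noise to angle
        float cosA = cos(angle);
        float sinA = sin(angle);
        return vec2(cosA * direction.x - sinA * direction.y,
                    sinA * direction.x + cosA * direction.y);
    }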

@jjhembd (Contributor) commented Sep 12, 2024

Here is a remarkably similar-looking bug, which appears to be related to incorrect normals. In that case, I think world-space geometry was leaking into the normal calculation:
https://www.reddit.com/r/GraphicsProgramming/comments/sd17k0/trying_to_implement_hbao_shader/
The similar result was what made me suspect our normals, and therefore the depth buffer from which they were computed.

@jjhembd (Contributor) commented Sep 12, 2024

There were some incorrect geometry assumptions in our shader. For example,

//Reconstruct Normal Without Edge Removation
vec3 getNormalXEdge(vec3 posInCamera, float depthU, float depthD, float depthL, float depthR, vec2 pixelSize)
{
    vec4 posInCameraUp = clipToEye(v_textureCoordinates - vec2(0.0, pixelSize.y), depthU);
    vec4 posInCameraDown = clipToEye(v_textureCoordinates + vec2(0.0, pixelSize.y), depthD);

Based on what the code is doing, the comment should actually read something like this:

// Reconstruct normal with edge removal

But this line:

    vec4 posInCameraUp = clipToEye(v_textureCoordinates - vec2(0.0, pixelSize.y), depthU);

appears to assume v_textureCoordinates is oriented with y = 0 at the top of the screen, when it actually starts from the bottom of the screen.
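
If that reading is right, the vertical samples would need their offsets flipped, along these lines (a sketch, not necessarily the final fix):

    // Sketch: with v_textureCoordinates starting at the bottom of the screen,
    // "up" is +y and "down" is -y in texture space.
    vec4 posInCameraUp = clipToEye(v_textureCoordinates + vec2(0.0, pixelSize.y), depthU);
    vec4 posInCameraDown = clipToEye(v_textureCoordinates - vec2(0.0, pixelSize.y), depthD);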

Overall, I think our current prototype AO is pretty rough. The choice of algorithm is good; it just needs a rewrite, with careful checking of the input and output of each step.

We now know the depth buffer input is somewhat problematic, but:

  • Setting near plane to 1m reduces the error
  • The remaining depth buffer error is relatively small (compared to other problems in the shader)

@jjhembd (Contributor) commented Sep 12, 2024

Some notes from an offline discussion:
Our HBAO is generating wrong values along any sampling axis that is not aligned with screen X/Y. This matters because each pixel rotates the sampling axes by a value that is randomized across neighboring pixels.

The error along rotated axes is much larger than the depth buffer error. It is not clear whether:

  • HBAO is sensitive to depth buffer error, and magnifies the problems, or
  • Our implementation has a bug affecting rotated sampling axes.

We will spend a little time reworking and testing each step of the AO implementation, under the preliminary assumption that the depth buffer is not the main problem. Initial tests/QCs will include:

  • Estimate the actual angular difference between adjacent reconstructed normals. (My earlier QC was arbitrarily scaled). If this is small (< 5 deg.) on flat surfaces, then the normals themselves may not be the problem.
  • Compare surface gradient (via native dFdx / dFdy) to slopes computed at sample points along rotated axes. Some other implementations snap to the texture grid, to ensure the sampled depth value and computed X/Y coordinates align at each step.
  • Adjust sample distance along rotated axes, to avoid resampling the same points. For a given rotation angle, I can manually adjust step size until the noise goes away. But we would need a different step size for each angle.

@jjhembd (Contributor) commented Sep 12, 2024

> One possible explanation is that before #9966, we had 2 bugs canceling each other

I tried to confirm this, but I can't. Before that PR, the result looks fine even with a constant rotation of czm_piOverFour:
[screenshot]

@jjhembd (Contributor) commented Sep 13, 2024

Our first test was to check the angular difference between reconstructed normals at adjacent pixels.

Here are some images showing the sine of the angle between the normal at a given screen pixel, and the normal at the pixel just above it (Y + 1 pixel). The sine value is scaled by 11.47, which will make the image fully white if the angular difference exceeds 5 degrees.

From best to worst:

  1. Using linear depth with near plane at 1m
    [screenshot]
    Note how the image is black on flat surfaces, with white outlining the edges of the model. This is as expected.

  2. Using log depth with near plane at 1m
    [screenshot]
    Some linear noise appears on the far-away terrain, but it represents angular differences of less than 5 degrees. There are also some strange artifacts near the edges of the model. But overall, the result is close enough. The depth errors are small enough that we can move ahead with further debugging of the rest of the AO code.

  3. Using log depth with near plane 0.1m (current default setting)
    [screenshot]
    The error between normals at adjacent pixels exceeds 5 degrees in many places. This will be a problem for AO.

I think we can move ahead for now and debug the rest of the AO process, using a near plane at 1m. We will need to come back later to revisit the errors with log depth and closer near planes.
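
For reference, the QC images above can be produced with fragment-shader logic along these lines (a sketch; reconstructNormal stands in for whatever helper rebuilds a normal from depth, and u_pixelSize is an assumed name):

    // Sketch of the QC: sine of the angle between the normals at a pixel and
    // the pixel above it, scaled so differences of 5 degrees or more saturate
    // to white.
    vec3 n0 = reconstructNormal(v_textureCoordinates);
    vec3 n1 = reconstructNormal(v_textureCoordinates + vec2(0.0, u_pixelSize.y));
    float sinAngle = length(cross(n0, n1));  // |n0 x n1| = sin(angle) for unit vectors
    out_FragColor = vec4(vec3(sinAngle * 11.47), 1.0);  // 11.47 ~= 1.0 / sin(5 degrees)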

@jjhembd (Contributor) commented Sep 13, 2024

Some additional checks just to be sure.

Angular difference between the current pixel and the pixel to the right (X + 1 pixel):
[screenshot]

Normals computed from tangents along the screen diagonal, difference along x = -y axis:
[screenshot]

Normals computed from tangents along the screen diagonal, difference along x = y axis:
[screenshot]

@jjhembd (Contributor) commented Sep 13, 2024

Difference between normals computed along screen axes vs. along diagonals:
[screenshot]

@jjhembd (Contributor) commented Sep 13, 2024

I found what changed in #9966: the depth stencil texture was changed from linear to nearest sampling.

We can recreate the old result in the current code by making 2 changes:

  • Construct depth stencil texture with the default (linear) sampler
  • Use a WebGL1 context (linear sampling of depth textures is not supported in WebGL2)

Current result with a WebGL2 context:
[screenshot]

Current main with a WebGL1 context, nearest sampling. Note how some of the artifacts are reduced.
[screenshot]

Current main with WebGL1 context, and linear sampling of the depth texture.
[screenshot]

Just for interest: here's the linear sampling result with more angular sampling (16 angles instead of 4).
[screenshot]

@jjhembd (Contributor) commented Sep 13, 2024

I don't think we want to require WebGL1 for AO. Two possible options:

  • Copy the depth values to a regular floating-point texture (not a depth format) for use in post-processing. This copy step would also give us an opportunity to fix the depth buffer when using multi-frustums, which would allow us to use linear depth values when we need a more accurate depth buffer.
  • Snap coordinates when sampling for AO, and make the necessary adjustments to the geometry calculations.

One thing that is not clear to me yet: why does log depth produce fewer artifacts in WebGL1 than in WebGL2?

@ggetz (Contributor) commented Sep 16, 2024

> I don't think we want to require WebGL1 for AO

I agree. Is doing linear interpolation in the shader a possibility?

@lilleyse Would you be able to weigh in on the solution here?

@lilleyse (Contributor)

I'm surprised linear sampling looks that much better. Is it because coordinates aren't snapped?

@jjhembd (Contributor) commented Sep 16, 2024

> I'm surprised linear sampling looks that much better. Is it because coordinates aren't snapped?

Correct, the current shader does not snap coordinates to the texture grid. I am trying a snapping solution now.

> Is doing linear interpolation in the shader a possibility?

Yes, though this would add to the cost. I think the main performance hit right now is the texture access, and manually-coded linear interpolation would quadruple it. Snapping would be faster, if we can correctly account for the snapped geometry everywhere.
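
For reference, manually-coded bilinear filtering of a depth texture would look roughly like this sketch, which is where the 4x texture access comes from:

    // Sketch: manual bilinear filtering of a depth texture in WebGL2, where
    // depth formats only support nearest sampling. Four fetches per sample.
    float sampleDepthLinear(sampler2D depthTexture, vec2 uv)
    {
        vec2 size = vec2(textureSize(depthTexture, 0));
        vec2 texel = uv * size - 0.5;   // continuous texel-space coordinates
        ivec2 i0 = ivec2(floor(texel));
        vec2 f = fract(texel);          // interpolation weights
        float d00 = texelFetch(depthTexture, i0, 0).r;
        float d10 = texelFetch(depthTexture, i0 + ivec2(1, 0), 0).r;
        float d01 = texelFetch(depthTexture, i0 + ivec2(0, 1), 0).r;
        float d11 = texelFetch(depthTexture, i0 + ivec2(1, 1), 0).r;
        return mix(mix(d00, d10, f.x), mix(d01, d11, f.x), f.y);
    }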

@jjhembd (Contributor) commented Sep 16, 2024

Here's a quick test fixing the coordinates to account for nearest sampling. This uses some WebGL2-only features like texelFetch.
[screenshot]

The WebGL2 result still has more artifacts than WebGL1.
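
The snapping itself is small; the harder part is propagating the snapped position through the geometry calculations. A sketch:

    // Sketch: snap a texture coordinate to the center of the texel that
    // nearest sampling would return, so the fetched depth and the
    // reconstructed X/Y position refer to the same point.
    vec2 snapToTexelCenter(vec2 uv, vec2 textureDimensions)
    {
        return (floor(uv * textureDimensions) + 0.5) / textureDimensions;
    }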

@jjhembd (Contributor) commented Sep 17, 2024

We now have 4 actionable steps to improve AO:

  1. Fix sampling of the depth buffer. I have working code in WebGL2 (ao-debug branch). I will spend another day to see if we can get WebGL1 working too.
  2. Expose parameters (with good defaults) for the number of sample points: both the number of angles, and the number of steps along each angle. Sampling a larger disc can give significantly better results.
  3. Improve scalability: the occlusion radius (lengthCap) is currently a fixed value in meters, which makes the AO effect disappear when zoomed out to city-wide scale. This value should scale with distance.
  4. Use a smarter smoothing on the raw AO result. The cross-bilateral filtering demonstrated by @shotamatsuda looks like a good option.

Other issues arising:

  1. Why do we get more artifacts in the depth buffer in WebGL2?
  2. Are other post-processing steps affected by a similar bug with the depth buffer sampling?

@jjhembd (Contributor) commented Sep 20, 2024

Item 1 above is addressed by #12201.
Items 2 and 3 are linked, and I think should be addressed together.

At each output pixel, we currently step rays outward along several angles to find the angle of the "horizon". The maximum distance of the ray-stepping defines the distance around the point where we allow nearby geometry to "occlude" the ambient light.

We currently have 3 factors defining this maximum occlusion distance:

  1. The distance-based weighting applied to each sample. The contribution from points far away from the output point is currently weighted by 1 - (distance / lengthCap) ** 2.
  2. Truncation at the lengthCap uniform. As we step to pixels away from the output pixel, we compute the Cartesian distance between the stepping point and the output. If it is above "lengthCap" in meters, the contribution of that pixel is ignored and further stepping along that angle is aborted. Unfortunately, this truncates the occlusion if the raymarching path is interrupted by a small foreground object (such as an overhead wire).
  3. The maximum number of steps along each angle (currently hard-coded to 6) along with the step size (in pixels). This may sometimes result in a hard truncation of the occlusion effect, if we reach the maximum number of steps before reaching a distance of lengthCap.

The main limitation of this approach is that parameters that work at one scale do not work when zooming out or in. For example, when zoomed on some details of the Power Plant model, an occlusion radius of less than 1m makes sense. But when zooming out to city scale, we might reasonably expect a skyscraper or other large building to occlude ambient light on buildings 10m or more away.

I first tried a naive approach: scale the occlusion radius based on distance from the camera. However, this makes a given feature become darker overall as it moves away from the camera, since the shadowed areas maintain their darkness, but their size grows linearly with distance.

A better approach: model the occlusion as something that blurs with distance, i.e., shadowed areas become wider but less dark. In this way the overall darkness of a given feature remains constant.

We can achieve a blurring effect by simplifying the flow as follows:

  1. Change the distance-based weighting to a normalized Gaussian distribution, with a variance proportional to distance from the camera. As the variance grows larger, the shadows become wider, but less dark, because the peak of the Gaussian becomes less pronounced.
  2. Compute the step size as scalar * gaussianVariance / maxStepCount. This ensures that every point will sample the entire intended radius.

We do not need to abort the stepping if a given sample point is too far away from the output point, because the Gaussian weighting will already make the contribution from that point negligible.
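
As a sketch of the proposed weighting (the names here are assumptions, not the actual uniforms in the branch):

    // Sketch: normalized Gaussian distance weighting for AO samples. As sigma
    // grows with distance from the camera, shadows become wider while the
    // peak weight shrinks, keeping a feature's overall darkness constant.
    float gaussianWeight(float sampleDistance, float sigma)
    {
        float x = sampleDistance / sigma;
        return exp(-0.5 * x * x) / (sigma * sqrt(czm_twoPi));
    }

    // Step size chosen so that maxStepCount steps cover the intended radius
    // (scalar * variance), per item 2 above.
    float stepSize(float sigma, float scalar, int maxStepCount)
    {
        return scalar * sigma * sigma / float(maxStepCount);
    }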

@jjhembd (Contributor) commented Sep 20, 2024

I have a preliminary version of the Gaussian-weighted AO in the ao-sampling-disc branch. The same parameter choices give reasonably consistent results at a wide range of scales.

Here is a quick comparison on the NYC buildings tileset. Some notes:

  • The initial view is deliberately chosen at an angle and time of day where the default lighting appears washed out.
  • The AO is heavily sampled: 16 angles, with 64 points along each angle.

No AO:
[screenshot]

With AO:
[screenshot]

AO term by itself:
[screenshot]

@jjhembd (Contributor) commented Sep 20, 2024

A much closer view of the same scene, with the same parameters.

No AO:
[screenshot]

With AO:
[screenshot]

AO term by itself:
[screenshot]

@ggetz (Contributor) commented Sep 23, 2024

@jjhembd Screenshots look fantastic!

I don't see the ao-sampling-disc branch on GitHub. Do you need to push?

@jjhembd (Contributor) commented Sep 23, 2024
