Direct Light Field Rendering for 3D Displays without Multiple View Sampling #679
Hello! Do you have a reference for what the final image looks like? Is it interleaved? And what is the target device you're trying to render to? This project previously supported the Looking Glass display (rendering X number of subframes to a single target render buffer), but that support had to be removed after some refactoring, so I'm trying to get a sense of how this differs and how dramatic some of these changes might be. I'll probably have some more questions about the four line items, but let's start with the above.
The old LKG demo code is still available here. It uses the old QuiltPathTracingRenderer, which no longer works since the main path tracing renderer was changed over to use the new WebGLPathTracer class. The quilt renderer basically works by calculating each camera view, rendering each quilt tile iteratively, and then rendering each tile again on every pass to add a new sample to it. A rough sketch of that loop structure is below.
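For illustration only, here is a minimal sketch of that quilt tiling idea using plain three.js rasterization rather than the actual QuiltPathTracingRenderer code. The tile counts, camera spacing, and camera placement are assumptions, not values from this repo, and the real quilt renderer would add one path-traced sample to each tile per pass instead of rasterizing it:

```js
import { WebGLRenderer, WebGLRenderTarget, Scene, PerspectiveCamera } from 'three';

// Hypothetical quilt layout: COLS x ROWS tiles packed into one render target.
const COLS = 8, ROWS = 6, TILE_W = 512, TILE_H = 512;

const renderer = new WebGLRenderer();
const quiltTarget = new WebGLRenderTarget( COLS * TILE_W, ROWS * TILE_H );
const scene = new Scene();

// One simple camera per view; the real renderer derives these from the LKG parameters.
const cameras = [];
for ( let i = 0; i < COLS * ROWS; i ++ ) {

	const cam = new PerspectiveCamera( 40, TILE_W / TILE_H, 0.1, 100 );
	cam.position.set( ( i / ( COLS * ROWS - 1 ) - 0.5 ) * 0.5, 0, 5 ); // placeholder baseline spread
	cam.lookAt( 0, 0, 0 );
	cameras.push( cam );

}

function renderQuiltPass() {

	for ( let i = 0; i < cameras.length; i ++ ) {

		// Restrict rendering to view i's tile within the quilt.
		const x = ( i % COLS ) * TILE_W;
		const y = Math.floor( i / COLS ) * TILE_H;
		quiltTarget.viewport.set( x, y, TILE_W, TILE_H );
		quiltTarget.scissor.set( x, y, TILE_W, TILE_H );
		quiltTarget.scissorTest = true;

		// Re-bind so the updated viewport / scissor take effect for this tile.
		renderer.setRenderTarget( quiltTarget );

		// The quilt path tracer would add one new sample to this tile here
		// instead of doing a full rasterized render.
		renderer.render( scene, cameras[ i ] );

	}

	renderer.setRenderTarget( null );

}
```

Calling renderQuiltPass repeatedly is what accumulates samples across the whole quilt in the real renderer.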
Why do you think rendering directly to an interlaced image would be so much faster? Converting from a quilt to an interleaved image will be extremely fast - especially compared to any of the path tracing shader evaluation. Path tracing already has to be tiled to keep performance up, and rendering interleaved would also remove any existing (admittedly still limited) ray coherence between pixels, possibly lowering path tracing performance further. I think re-adding support for quilt rendering along with a utility for converting the quilt to an interleaved image would be the most flexible approach and would help re-enable support for LKG, as well. I think the best way to do this is to update the WebGLPathTracer class to support passing an ArrayCamera in. A sketch of the quilt-to-interleaved mapping follows.
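To illustrate why the quilt-to-interleaved step is cheap, here is a minimal sketch of the per-pixel mapping for a lenticular display. The calibration names (pitch, tilt, center) and the formula are assumptions based on typical lenticular interleaving, not the Looking Glass library's actual math, and real devices supply their own calibration values:

```js
// Minimal sketch: for each output pixel, decide which quilt view it samples.
// pitch / tilt / center are hypothetical calibration values for illustration.
function interleavePixel( x, y, outWidth, outHeight, numViews, calib ) {

	const u = x / outWidth;
	const v = y / outHeight;

	// Fractional position under the lenticular lens for this pixel.
	const phase = ( u + v * calib.tilt ) * calib.pitch - calib.center;
	const viewIndex = Math.floor( ( phase - Math.floor( phase ) ) * numViews );

	// The output pixel then samples the same (u, v) within that view's quilt tile.
	return { viewIndex, u, v };

}

// Example usage with made-up calibration values.
const calib = { pitch: 47.5, tilt: - 0.12, center: 0.04 };
console.log( interleavePixel( 640, 360, 1536, 2048, 45, calib ) );
```

In practice this lookup would run in a fullscreen fragment shader sampling the quilt texture, which is why the conversion cost is negligible next to the path tracing itself.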
Thank you again for your patient response! You're absolutely right - that's the issue. I was thinking that many pixels in the quilt would not end up in the final interlaced image, making them effectively redundant. However, since path tracing is accelerated using a tile-based approach, rendering individual rays separately would degrade performance.
Every pixel in every "quilt" tile should be used in the final interleaved image - I don't believe there should be any wasted work in that respect.
I don't expect that the QuiltRenderer will currently work due to other changes, but I left it here as a reference so it could be re-added at some later point. v0.20.0 is the last version before the WebGLPathTracer class refactor happened, so it should work at that release (there's a link in the README, though you'll have to pull and run to try it). You can see some of the renders at this blocks.glass profile, along with some examples of the old quilt images.
Regarding the LKG logic, the hologram renderer uses WebXR to define the rendering behavior for multiple cameras in three.js. It also positions the cameras and camera quilt tiles for rendering based on the LKG parameters (see here). This XR camera is an ArrayCamera and can be retrieved using the WebXRManager.getCamera() function. With this array camera we can render each of those camera views to a quilt.

From a user-facing API perspective I think that should look like this:

```js
const renderer = new WebGLRenderer();
const pathtracer = new WebGLPathTracer( renderer );
renderer.xr.enabled = true;
renderer.xr.addEventListener( 'sessionstart', () => {

	// need to wait for the xr camera to update
	requestAnimationFrame( () => {

		// use the array cameras in the path tracing renderer
		pathtracer.setCamera( renderer.xr.getCamera() );

	} );

} );
```

And then this is the section in the Quilt renderer that iterates over camera views and renders the sub tiles via the path tracer. Keep in mind that the Quilt renderer was performing all of the camera perspective generation, etc., itself rather than using the ArrayCamera. I only realized later that using the ArrayCamera would be the better way to handle this, so I suggest it here. For your use case you'll be able to generate an ArrayCamera with the necessary camera perspectives yourself and pass it into the renderer, then interleave the images, as sketched below. Hopefully that's helpful - let me know if that's what you were looking for.
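To make that last suggestion concrete, here is a minimal sketch of building an ArrayCamera from a row of manually generated views and handing it to the path tracer. The view count, tile size, baseline spread, and focal distance are placeholders, the simple lookAt convergence stands in for the proper off-axis projections a real display would need, and the setCamera call follows the suggested API above rather than a verified release API:

```js
import { ArrayCamera, PerspectiveCamera, Vector4 } from 'three';

// Hypothetical light field layout for illustration: a 9 x 5 grid of views.
const COLS = 9, ROWS = 5, TILE = 512;
const NUM_VIEWS = COLS * ROWS;
const BASELINE = 0.5;       // total horizontal camera spread in world units (placeholder)
const VIEW_DISTANCE = 2.0;  // distance from the camera row to the convergence plane (placeholder)

const subCameras = [];
for ( let i = 0; i < NUM_VIEWS; i ++ ) {

	const cam = new PerspectiveCamera( 40, 1.0, 0.1, 100 );

	// Spread the views along a horizontal baseline and aim them at a shared focal point.
	// A real display would use off-axis projections derived from its own parameters.
	const offset = ( i / ( NUM_VIEWS - 1 ) - 0.5 ) * BASELINE;
	cam.position.set( offset, 0, VIEW_DISTANCE );
	cam.lookAt( 0, 0, 0 );
	cam.updateMatrixWorld();

	// Each sub-camera's viewport marks where its view lands in the combined image,
	// analogous to a quilt tile.
	cam.viewport = new Vector4( ( i % COLS ) * TILE, Math.floor( i / COLS ) * TILE, TILE, TILE );

	subCameras.push( cam );

}

const lightFieldCamera = new ArrayCamera( subCameras );

// "pathtracer" is the WebGLPathTracer instance from the snippet above; setCamera
// follows the API suggested in this thread and may differ from the released API.
pathtracer.setCamera( lightFieldCamera );
```

The resulting quilt can then be interleaved with a mapping like the one sketched earlier in the thread.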
Thank you for your response. I think I have enough information to adapt this project to support any 3D light field display.
Is your feature request related to a problem? Please describe.
I'm working on adapting three-gpu-pathtracer for light field rendering to support 3D displays. The traditional approach requires rendering multiple views and then sampling pixels, which is computationally expensive for path tracing. For example, rendering a 9x9 light field would require 81 separate path-traced views.
Describe the solution you'd like
Instead of rendering multiple views, I'd like to modify the path tracer to directly compute the color for specific light rays in the light field. This would mean:
This approach would be more efficient than the current multiple-view rendering method and would better suit the needs of 3D display systems that require light field data.
Would it be possible to add this functionality or provide guidance on how to best modify the current implementation for this purpose?