Direct Light Field Rendering for 3D Displays without Multiple View Sampling #679

Open
Royalvice opened this issue Jan 16, 2025 · 6 comments

@Royalvice

Is your feature request related to a problem? Please describe.
I'm working on adapting three-gpu-pathtracer for light field rendering to support 3D displays. The traditional approach requires rendering multiple views and then sampling pixels, which is computationally expensive for path tracing. For example, rendering a 9x9 light field would require 81 separate path-traced views.

Describe the solution you'd like
Instead of rendering multiple views, I'd like to modify the path tracer to directly compute the color for specific light rays in the light field. This would mean:

  1. Extending the ray tracing system to support light field parameterization (e.g., two-plane parameterization)
  2. Adding capability to directly trace rays from specific light field coordinates (s, t, u, v) (see the sketch after this list)
  3. Implementing a more efficient sampling strategy that considers the 4D structure of light fields
  4. Potentially adding support for light field-specific optimizations (e.g., coherence between nearby rays)
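
To make the parameterization in items 1 and 2 concrete, here is a minimal sketch of the two-plane mapping using three.js types (the plane positions and separation here are hypothetical, not part of this project):

```js
import { Vector3, Ray } from 'three';

// Two-plane light field parameterization: a ray is identified by where it
// crosses an "st" plane at z = 0 and a parallel "uv" plane at z = SEPARATION.
const SEPARATION = 1.0; // hypothetical distance between the two planes

function lightFieldRay( s, t, u, v ) {

  const origin = new Vector3( s, t, 0 );              // point on the st plane
  const target = new Vector3( u, v, SEPARATION );     // point on the uv plane
  const direction = target.sub( origin ).normalize(); // ray through both points

  // the light field sample L(s, t, u, v) is the radiance along this ray,
  // which is exactly what a path tracer can evaluate directly
  return new Ray( origin, direction );

}
```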

This approach would be more efficient than the current multiple-view rendering method and would better suit the needs of 3D display systems that require light field data.

Would it be possible to add this functionality or provide guidance on how to best modify the current implementation for this purpose?

@Royalvice Royalvice added the enhancement New feature or request label Jan 16, 2025
@gkjohnson
Owner

Hello! Do you have a reference for what the final image looks like? Is it interleaved? And what target device are you trying to render to? This project previously supported the Looking Glass display (render X number of subframes to a single target render buffer) but it had to be removed after some refactoring, so I'm trying to get a sense for how this differs and how dramatic some of these changes might be.

I'll probably have some more questions about the four line items, but let's start with the above.

@Royalvice
Author

Royalvice commented Jan 23, 2025

> Hello! Do you have a reference for what the final image looks like? Is it interleaved? And what target device are you trying to render to? This project previously supported the Looking Glass display (render X number of subframes to a single target render buffer) but it had to be removed after some refactoring, so I'm trying to get a sense for how this differs and how dramatic some of these changes might be.
>
> I'll probably have some more questions about the four line items, but let's start with the above.

Hello! Thank you very much for your patient response!

The display device I want to render to works on almost exactly the same principle as the Looking Glass, using a lenticular lens array in front of an LCD panel for light field display. However, the Looking Glass is a mature product that requires downloading its Bridge software. You can think of my display as an extended monitor that supports custom native development: it just needs to be fed the correctly interlaced texture, and viewers will see the correct 3D effect.

The general process for rendering a frame to these light field displays is to first render a QuiltTexture and then interlace it. I'm glad to hear that this project once supported the LKG display. Judging from your description, it followed this general process (render X number of subframes to a single target render buffer), which is already very helpful to me! If possible, could you guide me on how to obtain this version?

I'd like to go a step further: instead of rendering a QuiltTexture and then interlacing it, I'd like to render the interlaced image directly. This could greatly improve rendering efficiency. Essentially, as long as I can render the color of a ray in any given direction, I can render the interlaced image directly.
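
To illustrate what rendering the interlaced image directly would involve, here is a simplified sketch of the standard lenticular view-selection step; the pitch, tilt, center, viewCount, and screenWidth values are hypothetical calibration constants that differ per display:

```js
// Simplified lenticular mapping from an output subpixel to a view index.
// A direct renderer would trace one path-traced ray per subpixel using the
// view returned here, instead of sampling a finished quilt texture.
const pitch = 50.0;       // lenticules across the normalized screen width (hypothetical)
const tilt = -0.1;        // slant of the lenticular lenses (hypothetical)
const center = 0.0;       // phase offset of the lens array (hypothetical)
const viewCount = 45;     // number of distinct views (hypothetical)
const screenWidth = 1536; // panel width in pixels (hypothetical)

// x, y are normalized screen coordinates in [0, 1]; subpixel is 0 / 1 / 2 for R / G / B
function viewIndexFor( x, y, subpixel ) {

  const subpixelOffset = subpixel / ( 3 * screenWidth ); // RGB stripes sit 1/3 pixel apart
  let phase = ( x + subpixelOffset + y * tilt ) * pitch - center;
  phase -= Math.floor( phase ); // wrap to [0, 1)
  return Math.floor( phase * viewCount );

}
```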

Below are the two images you need: one is the Quilt, and the other is the interlaced image.

@gkjohnson
Owner

> I'm glad to hear that this project once supported the LKG display. Judging from your description, it followed this general process (render X number of subframes to a single target render buffer), which is already very helpful to me! If possible, could you guide me on how to obtain this version?

The old LKG demo code is still available here, which uses the old QuiltPathTracingRenderer but is no longer working since the main path tracing renderer was changed over to the new WebGLPathTracer class. The quilt renderer basically works by calculating each camera view, rendering each quilt tile iteratively, and then rendering each again to add a new sample to each.

> instead of rendering a QuiltTexture and then interlacing it, I'd like to render the interlaced image directly. This could greatly improve rendering efficiency.

Why do you think rendering directly to an interlaced image would be so much faster? Converting from a quilt to an interleaved image will be extremely fast, especially compared to any of the path tracing shader evaluation. Path tracing already has to be tiled to keep performance up, and rendering interleaved would also remove any existing (admittedly still limited) ray coherence between pixels, possibly lowering path tracing performance further.

I think re-adding support for quilt rendering, along with a utility for converting the quilt to an interleaved image, would be the most flexible approach and would help re-enable support for LKG as well. I think the best way to do this is to update the WebGLPathTracer class to support passing an ArrayCamera into setCamera, which can then be used to define each quilt "tile" that gets rendered to. LKG uses WebXR to perform rendering in WebGL, which provides an ArrayCamera in three.js.
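
Sketching what that mapping might look like (the tile bookkeeping here is entirely hypothetical, not the project's actual internals):

```js
// Hypothetical helper describing how an ArrayCamera could map to quilt tiles.
// Nothing here is part of the project's actual API.
function toQuiltTiles( camera ) {

  if ( camera.isArrayCamera ) {

    // one path-traced tile per sub-camera, placed in the target render
    // buffer using each sub-camera's viewport (a Vector4)
    return camera.cameras.map( subCamera => ( {
      camera: subCamera,
      viewport: subCamera.viewport,
    } ) );

  }

  // single full-frame camera, as before
  return [ { camera, viewport: null } ];

}
```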

@Royalvice
Author

Thank you again for your patient response! You're absolutely right, that's the issue. I was thinking that many pixels in the Quilt would not end up in the final interlaced image, making them effectively redundant. However, since path tracing is accelerated using a tile-based approach, rendering individual rays separately would degrade performance.

I'm excited to implement support for light field displays and submit a PR. If possible, could you provide more detailed guidance on how to proceed?

Additionally, I noticed that there is still a QuiltPathTracingRenderer in the main branch, but it probably isn't usable, right? However, at the old revision https://github.com/gkjohnson/three-gpu-pathtracer/tree/6e5a452bb1ecb2853dbaa6d5a57682b814df504c, it seems like it could run. Is that correct?

@gkjohnson
Owner

gkjohnson commented Jan 24, 2025

> I was thinking that many pixels in the Quilt would not end up in the final interlaced image, making them effectively redundant.

Every pixel in every "quilt" tile should be used in the final interleaved image - I don't believe there should be any wasted work in that respect.

> Additionally, I noticed that there is still a QuiltPathTracingRenderer in the main branch, but it probably isn't usable, right? However, at the old revision https://github.com/gkjohnson/three-gpu-pathtracer/tree/6e5a452bb1ecb2853dbaa6d5a57682b814df504c, it seems like it could run. Is that correct?

I don't expect that the QuiltRenderer will currently work due to other changes, but I left it there as a reference so it could be re-added at some later point. v0.20.0 is the last version before the WebGLPathTracer class refactor happened, so it should work at that release (there's a link in the README, though you'll have to pull and run to try it). You can see some of the renders at this blocks.glass profile, along with some examples of the old quilt images.

> If possible, could you provide more detailed guidance on how to proceed?

Regarding the LKG logic, the hologram renderer uses WebXR to define the rendering behavior for multiple cameras in three.js. It also positions the cameras and camera quilt tiles for rendering based on the LKG parameters (see here). This XR camera is an ArrayCamera and can be retrieved using the WebXRManager.getCamera() function. With this array camera we can render each of those camera views to a quilt, using the ArrayCamera.cameras[ n ].viewport property to determine where in the image each quilt tile belongs.

From a user-facing API perspective, I think that should look like this:

```js
import { WebGLRenderer } from 'three';
import { WebGLPathTracer } from 'three-gpu-pathtracer';

const renderer = new WebGLRenderer();
const pathtracer = new WebGLPathTracer( renderer );

renderer.xr.enabled = true;
renderer.xr.addEventListener( 'sessionstart', () => {

  // need to wait for the xr camera to update
  requestAnimationFrame( () => {

    // use the array cameras in the path tracing renderer
    pathtracer.setCamera( renderer.xr.getCamera() );

  } );

} );
```

And then this is the section in the Quilt renderer that iterates over camera views and renders the sub tiles via the pathtracer. Keep in mind that the Quilt renderer was performing all of the camera perspective generation itself rather than using the ArrayCamera; I only realized later that the ArrayCamera would be the better way to handle this, so I suggest it here.
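
For reference, that iteration boils down to something like the following sketch (renderTile here is a hypothetical stand-in for the path tracer's per-view sample pass):

```js
// One progressive pass over the quilt: every tile accumulates one more sample.
function renderQuiltSample( arrayCamera, renderer, quiltTarget, renderTile ) {

  renderer.setRenderTarget( quiltTarget );
  renderer.setScissorTest( true );

  for ( const subCamera of arrayCamera.cameras ) {

    // the sub-camera viewport (x, y, width, height) places the tile in the quilt
    const vp = subCamera.viewport;
    renderer.setViewport( vp.x, vp.y, vp.z, vp.w );
    renderer.setScissor( vp.x, vp.y, vp.z, vp.w );

    renderTile( subCamera ); // hypothetical: adds one sample for this view

  }

  renderer.setScissorTest( false );
  renderer.setRenderTarget( null );

}
```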

For your use case, you'll be able to generate an ArrayCamera with the necessary camera perspectives yourself, pass it into the renderer, and then interleave the images, as sketched below.
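
A minimal sketch of that, assuming a hypothetical 5x9 quilt, square tiles, and a naive horizontal camera shift (a real light field display would use sheared, off-axis projections derived from its calibration):

```js
import { ArrayCamera, PerspectiveCamera, Vector4 } from 'three';

const cols = 5, rows = 9;       // hypothetical quilt layout
const tileW = 512, tileH = 512; // hypothetical tile size in pixels
const viewConeWidth = 1.0;      // hypothetical world-space width of the view sweep

const cameras = [];
for ( let i = 0; i < cols * rows; i ++ ) {

  const cam = new PerspectiveCamera( 35, tileW / tileH, 0.1, 100 );

  // naive horizontal shift across the view cone; a real implementation
  // would also shear the projection matrix toward a focal plane
  cam.position.x = ( i / ( cols * rows - 1 ) - 0.5 ) * viewConeWidth;
  cam.updateMatrixWorld();

  // viewport tells the renderer where this view's tile lives in the quilt
  const col = i % cols, row = Math.floor( i / cols );
  cam.viewport = new Vector4( col * tileW, row * tileH, tileW, tileH );

  cameras.push( cam );

}

const quiltCamera = new ArrayCamera( cameras );
pathtracer.setCamera( quiltCamera ); // pathtracer as in the snippet above
```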

Hopefully that's helpful - let me know if that's what you were looking for.

@Royalvice
Author

Thank you for your response. I think I have enough information to adapt this project to support any 3D light field display.
However, I should point out that directly rendering the Quilt image does involve a significant amount of redundant computation. The resolution of the interlaced image must match the resolution of the underlying LCD panel, while the Quilt generally contains 3-4 times as many pixels (for instance, a 3360x3360 quilt driving a 1536x2048 panel has roughly 3.6 times the pixel count). Although almost all of the color information in the Quilt is sampled during the final interlacing, this is neither necessary nor optimal.
You can refer to the following papers:
Efficient rendering for light field displays using tailored projective mappings
DirectL: Efficient Radiance Fields Rendering for 3D Light Field Displays
Virtual stereo content rendering technology review for light-field display
