How to generate depth images in Unity? #6198

Open
xiaolijz opened this issue Mar 24, 2025 · 4 comments
Labels
request Issue contains a feature request.

Comments

@xiaolijz

I really don't know how to generate depth images in Unity URP.

xiaolijz added the request label Mar 24, 2025
@IliaTrofimov

Hello! I just came up against the same question. I wanted to create a depth-map input for the CNN inside my agent. This is what I managed to do.

  1. Create a URP asset and one URP renderer. I actually created two renderers so I can generate both a depth map and a normal image. Add your renderer (or both) to the URP asset. Make sure the renderer for the normal camera is marked as the default one, so the depth effect doesn't apply to every camera, including the editor camera.
  2. Go to Project Settings -> Graphics and select your URP asset there.
  3. Create a fullscreen shader graph. This is basically a post effect that renders on top of your camera's output. You can play with the nodes and parameters as you like; copy my graph to create a depth map.
  4. Select the URP renderer for your shader and add a "Full Screen Pass Renderer Feature" (I had already added it, so it shows up in the list). Then expand your shader graph asset and drag the created material into the Pass Material field inside the renderer. Select DrawProcedural (0).
  5. Select any camera that should capture the depth map and set your URP renderer in its Renderer field. This applies all the post effects you added to that renderer.
  6. Now you can do whatever you want with this camera. For example, if you are using two renderers like me, you can add an overlay depth map.
  7. To use this depth camera as an ML-Agents sensor, create a CameraSensor with your camera as the input parameter, or use a RenderTextureSensor. A RenderTextureSensor requires you to create a render texture and make your camera output its image to that texture; see the sketch below.
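
If you prefer to wire steps 5 and 7 up in code rather than in the Inspector, here is a minimal sketch using ML-Agents' `CameraSensorComponent`. The class name, field names, observation size, and the renderer index are my own assumptions for illustration, not part of ML-Agents:

```csharp
using Unity.MLAgents.Sensors;
using UnityEngine;
using UnityEngine.Rendering.Universal;

// Attach this to the Agent's GameObject. Normally you would add a
// CameraSensorComponent in the Inspector; this shows the same setup in code.
public class DepthSensorSetup : MonoBehaviour
{
    [SerializeField] Camera depthCamera;          // dedicated camera for the depth map
    [SerializeField] int depthRendererIndex = 1;  // index of the depth renderer in the
                                                  // URP asset's renderer list (assumed)

    void Awake()
    {
        // Step 5 in code: make this camera use the depth URP renderer.
        depthCamera.GetUniversalAdditionalCameraData().SetRenderer(depthRendererIndex);

        // Step 7: feed the camera's output to the agent as a visual observation.
        var sensor = gameObject.AddComponent<CameraSensorComponent>();
        sensor.Camera = depthCamera;
        sensor.SensorName = "DepthCameraSensor";
        sensor.Width = 84;                        // small inputs keep the CNN light
        sensor.Height = 84;
        sensor.Grayscale = true;                  // depth is effectively single-channel
        sensor.CompressionType = SensorCompressionType.PNG;
    }
}
```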

P.S. I haven't tested actual training with this setup, but I think everything will work fine. You can use my project as an example, but it is not finished yet.

@Javaec

Javaec commented Apr 2, 2025

> I just came up against the same question. I wanted to create a depth-map input for the CNN inside my agent. This is what I managed to do.

Great instructions, thanks!

Each camera in the scene multiplies the number of draw calls.
How can you train 10-40 agents simultaneously if each one has its own camera?

@IliaTrofimov
Copy link

@Javaec, I think you should use a low-resolution render texture with the corresponding sensor instead of a camera sensor. As far as I know, this approach generates relatively lightweight images (compared to full-screen rendering). CNNs work fine with small images if your task is simple enough. You can also play with texture colour formats to make the images even lighter.

Using a fullscreen shader is inefficient, but I don't have another solution for URP. Maybe the built-in render pipeline offers different ways to get depth maps.
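
Here is a minimal sketch of that idea, assuming the depth camera already uses the depth URP renderer described above; the resolution, texture format, and names are assumptions worth experimenting with:

```csharp
using Unity.MLAgents.Sensors;
using UnityEngine;

// Attach this to the Agent's GameObject. A small single-channel render texture
// keeps per-agent observation size cheap.
public class LowResDepthSensor : MonoBehaviour
{
    [SerializeField] Camera depthCamera;  // camera that uses the depth URP renderer

    void Awake()
    {
        // R8 stores one 8-bit channel per pixel; other formats may suit your shader better.
        var depthTexture = new RenderTexture(64, 64, 16, RenderTextureFormat.R8);
        depthTexture.Create();

        depthCamera.targetTexture = depthTexture;  // camera now renders into the texture

        var sensor = gameObject.AddComponent<RenderTextureSensorComponent>();
        sensor.RenderTexture = depthTexture;
        sensor.SensorName = "DepthRTSensor";
        sensor.Grayscale = true;
        sensor.CompressionType = SensorCompressionType.PNG;
    }
}
```

With 64x64 single-channel textures, the observations for even forty agents stay small; the main remaining cost is that each camera still renders the scene every frame.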

