
Post-Hoc Depth Computation #481

Open
2 tasks done
suraj-nair-1 opened this issue May 5, 2022 · 3 comments
@suraj-nair-1

Preliminary Checks

  • This issue is not a duplicate. Before opening a new issue, please search existing issues.
  • This issue is not a question, bug report, or anything other than a feature request directly related to this project.

Proposal

Hello,

We have a ZED 2 camera that will be moving around with a robot and collecting data. The robot's onboard computer does not have a CUDA-capable GPU, so we cannot use the ZED SDK online during data collection. However, we would like to capture the raw RGB stereo frames, then feed them through the ZED SDK later (on a different, GPU-enabled machine) for post-hoc computation of the depth.

The current ZED SDK API (at least for Python) does not appear to support this: depth/point clouds seem to be computed only from a live camera feed. Is there functionality to pass in previously captured RGB images plus camera intrinsics and get depth back? I understand there are off-the-shelf stereo-depth models that could be used instead, but I would like to use the depth models in the ZED SDK, since they are likely the best tuned for this hardware.

Perhaps this functionality exists and I missed it? If such functionality could be added that would be much appreciated. Or if you have any pointers on how to go about modifying the ZED SDK myself to support this that would also be very helpful.

Thank you!

-- Suraj Nair

Use-Case

No response

Anything else?

No response

@obraun-sl
Member

You can use the ZED Explorer tool (which does not require an NVIDIA GPU) to record SVO files that can be used directly as an input to the ZED SDK.
To install ZED Explorer on a machine without an NVIDIA GPU, install the ZED SDK and force the installer to continue when it reports that no NVIDIA GPU is present.
You will then find ZED Explorer in the /tools/ directory.
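For reference, a minimal, untested sketch of the playback side of this workflow on the GPU machine, assuming the ZED SDK's Python bindings (`pyzed.sl`, with `set_from_svo_file` and `sl.MEASURE.DEPTH`); the file name and frame limit below are placeholders, not values from the thread:

```python
# Hedged sketch: compute depth post-hoc from an SVO recording using the
# ZED SDK Python bindings. The import is guarded so the sketch can be read
# on machines without the SDK installed.
try:
    import pyzed.sl as sl  # requires the ZED SDK on a CUDA-capable machine
except ImportError:
    sl = None  # SDK not installed; the function below would raise if called

def depth_maps_from_svo(svo_path, max_frames=10):
    """Open an SVO recording and return one depth map (numpy array) per frame."""
    init = sl.InitParameters()
    init.set_from_svo_file(svo_path)       # replay the recording instead of a live camera
    init.depth_mode = sl.DEPTH_MODE.ULTRA  # pick whichever mode your SDK version supports
    zed = sl.Camera()
    if zed.open(init) != sl.ERROR_CODE.SUCCESS:
        raise RuntimeError(f"could not open {svo_path}")
    depth = sl.Mat()
    maps = []
    while len(maps) < max_frames:
        if zed.grab() != sl.ERROR_CODE.SUCCESS:
            break  # end of the SVO file
        zed.retrieve_measure(depth, sl.MEASURE.DEPTH)
        maps.append(depth.get_data().copy())  # float32 depth map
    zed.close()
    return maps
```

The same `InitParameters` object used for a live camera accepts an SVO path, which is why recordings made with ZED Explorer can be fed straight into the SDK.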

@suraj-nair-1
Author

Hello,

Thank you! I was able to use the ZED Explorer tool to record SVO files and then read them with the ZED SDK. However, it would be preferable to collect the stereo frames programmatically and feed them to the ZED SDK later.

Is there any way to directly pass RGB frames + camera intrinsics to ZED to compute the depth frames? Or perhaps a way to convert a sequence of frames to an SVO file? Ideally what I would do is read the frames using OpenCV then pass those frames to ZED post-hoc.

@obraun-sl
Member

Hi,
Sorry for the late reply. It is currently not possible to ingest custom images into the ZED SDK.
