Preliminary Checks
This issue is not a duplicate. Before opening a new issue, please search existing issues.
This issue is not a question, bug report, or anything other than a feature request directly related to this project.
Proposal
Hello,
We have a ZED 2 camera that will be moving around on a robot and collecting data. The robot's onboard computer does not have a GPU/CUDA, so we cannot use the ZED SDK directly during data collection. However, we would like to capture the raw RGB stereo frames and feed them through the ZED SDK later (on a different, GPU-enabled machine) for post-hoc computation of depth.
The current ZED SDK API does not appear to support this (at least in Python); it looks like depth/point clouds can only be captured live from the camera. Is there functionality to pass in previously captured RGB images and camera intrinsics and get depth back? While I understand that there are off-the-shelf models that could be used for this, I would like to use the depth prediction models in the ZED SDK, as they are likely the best tuned for the hardware.
Perhaps this functionality exists and I missed it? If it could be added, that would be much appreciated. And if you have any pointers on how to modify the ZED SDK myself to support this, that would also be very helpful.
Thank you!
-- Suraj Nair
Use-Case
No response
Anything else?
No response
You can use the ZED Explorer tool (which does not require an NVIDIA GPU) to record SVO files that can be used directly as input to the ZED SDK.
To install ZED Explorer on a machine without an NVIDIA GPU, install the ZED SDK and force the installer to continue when it reports that no NVIDIA GPU is present.
You will then find ZED Explorer in the /tools/ directory of the SDK installation.
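For reference, reading such an SVO file back through the Python API and computing depth post-hoc looks roughly like this (a minimal sketch; "recording.svo" is a placeholder path, and the depth mode and units are just example settings):

```python
# Post-hoc depth from a recorded SVO file, run on the GPU-enabled machine.
import pyzed.sl as sl

init_params = sl.InitParameters()
init_params.set_from_svo_file("recording.svo")  # replay the SVO instead of a live camera
init_params.depth_mode = sl.DEPTH_MODE.ULTRA    # depth is computed now, on this machine's GPU
init_params.coordinate_units = sl.UNIT.METER

zed = sl.Camera()
if zed.open(init_params) != sl.ERROR_CODE.SUCCESS:
    raise RuntimeError("Failed to open SVO file")

depth = sl.Mat()
runtime_params = sl.RuntimeParameters()
# grab() returns SUCCESS for each frame until the end of the SVO is reached
while zed.grab(runtime_params) == sl.ERROR_CODE.SUCCESS:
    zed.retrieve_measure(depth, sl.MEASURE.DEPTH)  # per-pixel depth map
    depth_np = depth.get_data()                    # numpy array for downstream use

zed.close()
```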
Thank you! I was able to use the ZED Explorer tool to record SVO files and then read them with the ZED SDK. However, it would be preferable to collect the stereo frames programmatically and feed them to the ZED SDK later.
Is there any way to pass RGB frames + camera intrinsics directly to the ZED SDK to compute depth frames? Or perhaps a way to convert a sequence of frames into an SVO file? Ideally, I would read the frames with OpenCV and then pass them to the ZED SDK post-hoc, along the lines of the sketch below.
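For the capture side, something like this is what I have in mind (a minimal sketch: the ZED 2 enumerates as a UVC device exposing the stereo pair as one side-by-side image; the device index 0 and the 2560x720 side-by-side resolution are assumptions for my setup):

```python
# CPU-only capture of raw stereo frames from the ZED 2 as a plain UVC camera.
import cv2

cap = cv2.VideoCapture(0)                  # assumed device index
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 2560)    # left + right, side by side (HD720 per eye)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 720)

frames = []
while len(frames) < 100:                   # capture 100 frame pairs as an example
    ok, frame = cap.read()
    if not ok:
        break
    half = frame.shape[1] // 2
    left, right = frame[:, :half], frame[:, half:]  # split the side-by-side image
    frames.append((left, right))

cap.release()
```

The open question is then how to get these frames (plus the camera intrinsics) back into the ZED SDK for depth computation, which is what this feature request is about.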