Add Nerfcapture (app output) as an export format #104

Draft · wants to merge 1 commit into base: main
Conversation

oseiskar
Member

@oseiskar oseiskar commented Dec 6, 2023

Also quick-fix an issue with aligned depth map resolution (oversized images).

Can be run on Spectacular Rec output as

python replay_to_nerf.py /PATH/TO/INPUT/spectacular-rec_XYZ \
    --format=nerfcapture --fast --image_format=png --device_preset=ios-tof --key_frame_distance=0.0001 \
    /PATH/TO/OUTPUT/FOLDER/nerfcapture-XYZ

to produce output in the same format as the NeRFCapture app (more or less the same as Instant NGP input?) as a workaround for jc211/NeRFCapture#10 (comment). This format also seems to work as an input to SplaTAM (edit: but the depth scale is probably still wrong).

@oseiskar oseiskar requested a review from Bercon December 6, 2023 20:48
@oseiskar oseiskar marked this pull request as draft December 7, 2023 15:17
@dlazares

@oseiskar have you run this against the SplaTAM repo to see whether the depth scale is right or wrong? Any plans to add the depth scale in this PR or a follow-up?

@oseiskar
Member Author

oseiskar commented Jan 19, 2024

This seemed to work on a certain early version of the SplaTAM code, but the depth scale here likely does not match what SplaTAM assumed from NeRFCapture (note that purely RGB-D based methods like SplaTAM may work nevertheless; the scale of the reconstruction is then just wrong). In that version of the SplaTAM code, it was also possible to set the depth scale in the SplaTAM configuration files. The depth scale produced by the Spectacular Rec app is 0.001 (depth in millimeters), and it should be possible to configure SplaTAM to use that scale.
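To illustrate the convention above, here is a minimal sketch (not code from this PR) of what a depth scale of 0.001 means: the 16-bit PNG stores millimeters, and multiplying by the scale yields meters. The function and array names are hypothetical.

```python
import numpy as np

# Assumption based on the comment above: Spectacular Rec stores depth as
# 16-bit integers in millimeters, so a depth scale of 0.001 converts the
# raw stored values to meters.
DEPTH_SCALE = 0.001  # meters per stored unit (1 unit = 1 mm)

def depth_png_to_meters(raw: np.ndarray) -> np.ndarray:
    """Convert a raw uint16 depth image (millimeters) to float32 meters."""
    return raw.astype(np.float32) * DEPTH_SCALE

# A 2x2 toy depth map: 500 mm, 1500 mm, 5000 mm, 16327 mm
raw = np.array([[500, 1500], [5000, 16327]], dtype=np.uint16)
meters = depth_png_to_meters(raw)
print(meters)  # values in meters: 0.5, 1.5, 5.0, ~16.327
```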

@dlazares

@oseiskar I have it mostly working, but yeah, the depth scale seems to be off. What would be the right value here? I'm confused as hell by the SplaTAM & NeRFCapture setup currently.

Linking this for the explanation, even though I didn't quite get it...
spla-tam/SplaTAM#7

@dlazares

dlazares commented Jan 19, 2024

I got my best results experimentally with a png_depth_scale of 1000, but that doesn't quite make sense to me: the LiDAR range is supposed to be 5 m, and I'm logging values before preprocess_depth as "MAX DEPTH 16327", so with that depth scale it would return 16.327 meters?

@oseiskar
Member Author

> I got my best results experimentally with a png_depth_scale of 1000, but that doesn't quite make sense to me: the LiDAR range is supposed to be 5 m, and I'm logging values before preprocess_depth as "MAX DEPTH 16327", so with that depth scale it would return 16.327 meters?

That sounds expected. Note that even though the range of the physical sensor is reportedly 5 m, this does not mean that the depth map could not contain larger values. Our iPhone tests also regularly show values between 10 and 20 m.
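The arithmetic behind the numbers above can be checked directly. A SplaTAM-style loader typically divides the raw PNG value by png_depth_scale to obtain meters (this is an assumption about the loader, not code from this PR):

```python
# Hypothetical check of the values discussed above.
png_depth_scale = 1000.0  # assumed: raw depth stored in millimeters
raw_max = 16327           # the "MAX DEPTH" value logged above

depth_m = raw_max / png_depth_scale
print(depth_m)  # 16.327 (meters), i.e. well beyond the nominal 5 m range
```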

The true resolution of the iPhone "LiDAR" sensor is something like 25 x 25 pixels, and most of the depth map is constructed by a real-time image processing algorithm that segments the RGB image, combines it with the depth data and fills in the gaps, with varying levels of success.

In our quick tests, the SplaTAM algorithm worked fine for some recordings, but did not seem particularly robust or accurate in general.
