@Drexubery Hi, I tried using images from the diffusion video to train 3DGS in both single-view and sparse-view input modes. I found that this naive approach produces blur and floaters. Could you show what a good 3DGS result from a ViewCrafter video should look like? Thanks!
Thanks for your wonderful project !
I'm glad to see that you just open-sourced the code for the sparse NVS task, but I'm curious whether you can also provide the quantitative evaluation code, i.e., for computing the PSNR, SSIM, and LPIPS of the rendered images on specific test views (as in Table 2 of the paper). Also, should I align the DUSt3R point cloud with the known camera poses of the training views in the sparse NVS task?
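In case it helps while waiting for the official evaluation code: PSNR follows directly from the MSE between a rendered view and its ground-truth test view. Below is a minimal NumPy sketch (just an illustration, not the repo's actual evaluation pipeline; the paper's numbers may use library implementations such as scikit-image for SSIM and the `lpips` package for LPIPS, which this sketch does not reproduce):

```python
import numpy as np

def psnr(img_a: np.ndarray, img_b: np.ndarray, max_val: float = 1.0) -> float:
    """Peak signal-to-noise ratio between two float images in [0, max_val]."""
    mse = np.mean((img_a.astype(np.float64) - img_b.astype(np.float64)) ** 2)
    if mse == 0.0:
        return float("inf")  # identical images
    return 10.0 * np.log10((max_val ** 2) / mse)

# Toy example with a synthetic "ground truth" and a noisy "render".
np.random.seed(0)
gt = np.random.rand(64, 64, 3)
pred = np.clip(gt + np.random.normal(0.0, 0.05, gt.shape), 0.0, 1.0)

print(psnr(gt, gt))    # inf (identical images)
print(psnr(gt, pred))  # finite dB value; higher means closer to ground truth
```

SSIM and LPIPS compare local structure and deep features rather than raw pixels, so they need the dedicated libraries mentioned above; the key evaluation question is really which test views and which alignment the paper used, not the metric formulas themselves.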