
Unable to reproduce evaluation results from the paper #71

Closed
Nekroz05 opened this issue Sep 9, 2024 · 2 comments
Nekroz05 commented Sep 9, 2024

I'm trying to reproduce the evaluation results presented in the paper, but I'm encountering difficulties. I'd appreciate guidance on how to replicate them accurately.

        import torch
        from unidepth.models import UniDepthV1

        model = UniDepthV1.from_pretrained("lpiccinelli/unidepth-v1-vitl14")

        # Move to CUDA, if available
        device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
        self.model = model.to(device)  # assigned inside my wrapper class

I'm evaluating on the KITTI Eigen split and the NYUv2 official split.
I'm using both V1 and V2 with the ViT-L/14 backbone.
I'm passing the ground-truth intrinsic matrix at inference time (a minimal sketch of my inference call is below).
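
For reference, this is roughly how I call the model (a minimal sketch following the README's `infer` interface; the image path and the intrinsic values are placeholders, with KITTI-like numbers shown only for illustration):

        import numpy as np
        import torch
        from PIL import Image
        from unidepth.models import UniDepthV1

        model = UniDepthV1.from_pretrained("lpiccinelli/unidepth-v1-vitl14")
        device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
        model = model.to(device).eval()

        # RGB image as a (3, H, W) uint8 tensor
        rgb = torch.from_numpy(np.array(Image.open("example.png"))).permute(2, 0, 1)

        # Per-image ground-truth intrinsics as a (3, 3) tensor;
        # fx, fy, cx, cy are KITTI-like placeholder values here
        fx, fy, cx, cy = 721.5, 721.5, 609.6, 172.9
        intrinsics = torch.tensor([
            [fx, 0.0, cx],
            [0.0, fy, cy],
            [0.0, 0.0, 1.0],
        ])

        predictions = model.infer(rgb, intrinsics)
        depth = predictions["depth"]  # metric depth map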

(screenshot: "Screenshot from 2024-09-09 14-48-49")

Questions

  1. What specific configuration was used to achieve the results reported in the paper?
  2. Are there any additional parameters or settings I should be aware of?
  3. Is there a specific evaluation script I should be using?
  4. Are there any post-processing steps applied to the depth predictions?

My Current Results
The numbers are not way off, but they don't match exactly either, especially on RMSE.

NYUv2
(screenshot of my NYUv2 metrics)

KITTI
(screenshot of my KITTI metrics)

@lpiccinelli-eth (Owner)
Dear @Nekroz05, sorry for the late reply. You can check the validation.py file under unidepth/utils on the training branch (PR #76), and also the accumulate_metrics method in base_dataset.py, to see how the evaluation is carried out.

However, I suspect there may be some rescaling of both the prediction and the GT, since the most affected metric is RMSE, while the metrics where the scale cancels out are pretty consistent.
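
To illustrate the point (a quick sketch, not code from the repo): if both the prediction and the GT are multiplied by the same factor, ratio-based metrics such as AbsRel and δ1 are unchanged, while RMSE scales by that factor:

        import torch

        def metrics(pred, gt):
            # RMSE depends on absolute scale; AbsRel and delta1 only on the pred/gt ratio
            rmse = torch.sqrt(((pred - gt) ** 2).mean())
            abs_rel = ((pred - gt).abs() / gt).mean()
            delta1 = (torch.maximum(pred / gt, gt / pred) < 1.25).float().mean()
            return rmse.item(), abs_rel.item(), delta1.item()

        gt = torch.rand(1000) * 10 + 0.5            # synthetic ground-truth depths
        pred = gt * (1 + 0.05 * torch.randn(1000))  # predictions with ~5% noise

        s = 1.2  # hypothetical common rescaling of prediction and GT
        print(metrics(pred, gt))          # baseline
        print(metrics(s * pred, s * gt))  # RMSE scales by s; AbsRel and delta1 unchanged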

@Nekroz05 (Author)

I'll be sure to check it out. Thank you for your response!
