Inconsistent results on online evaluation and test (might be caused by benchmark crop operation) #125
Comments
Hi Xuanlong Yu,

Thanks for your kind and thoughtful comment. We had already noticed the issue and just started to look into it because of your comment. I will let you know the progress if we have some to share. BTW, you are so kind. Thanks again.

Best regards,
Jin Han
Thank you for your quick reply!

I wanted to say: compared with the AdaBins repo, in their evaluation code, evaluate.py, they run the evaluation directly with mode = 'online_eval' (see line 209). So in dataloader.py, both the gt and the RGB image (as well as pred_depth) are cropped with the benchmark crop, as you did in bts_main.py's online_eval. I hope I didn't make mistakes.
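For reference, the consistent crop looks roughly like this (a minimal sketch following the kb_crop logic in bts_dataloader.py; the function name and signature here are illustrative, not the repo's exact code):

```python
from PIL import Image

def kb_crop(image: Image.Image, depth_gt: Image.Image):
    """Benchmark (KB) crop as in the online_eval path: crop both the
    RGB image and the ground-truth depth to the same 352 x 1216 window
    so their effective areas stay aligned."""
    top_margin = image.height - 352
    left_margin = (image.width - 1216) // 2
    box = (left_margin, top_margin, left_margin + 1216, top_margin + 352)
    return image.crop(box), depth_gt.crop(box)
```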
Hi, I also ran into this problem and found a solution. With online eval, the ground-truth map is cropped by kb_crop at line 175 of bts_dataloader.py, but in testing mode the ground-truth map is not cropped. To keep the online-eval and testing results consistent, we can add crop code in eval_with_pngs.py between lines 121 and 122, like:
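A minimal sketch of that added crop, assuming the ground truth has been loaded into a NumPy array gt_depth (the exact variable names around lines 121 and 122 of eval_with_pngs.py may differ):

```python
# Apply the same KB crop to the ground truth that the dataloader applies
# during online eval, so gt covers the same 352 x 1216 region as the
# prediction. gt_depth is assumed to be the loaded ground-truth array.
height, width = gt_depth.shape
top_margin = height - 352
left_margin = (width - 1216) // 2
gt_depth = gt_depth[top_margin:top_margin + 352,
                    left_margin:left_margin + 1216]
```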
Thus, the result after executing eval_with_pngs.py will be the same as the online-eval result.
Hi there, I really like this project; it is very complete and concrete.
However, I found that the online eval and the test may handle the benchmark_crop operation inconsistently, which makes the online-eval result differ from the result produced by the test code. The result from online eval is much better.
Online eval: the benchmark crop is applied to both the image and the gt in dataloader.py, so the effective areas of depth_pred and gt are consistent. The benchmark crop operation in, for example, bts_main.py is therefore redundant.
Test: only the image is benchmark-cropped in dataloader.py; in test.py, depth_pred is placed on a zero map as large as gt, so the effective areas of depth_pred and gt are inconsistent (although their sizes are the same).
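To illustrate the test-side handling described above, the paste-back looks roughly like this (a sketch; the variable names are assumptions, not the repo's exact code):

```python
import numpy as np

# Test path: the 352 x 1216 prediction is pasted back into a zero map
# with the shape of the *uncropped* ground truth. pred_depth and
# gt_depth end up the same shape, but gt_depth was never KB-cropped,
# so the region later selected by the fractional evaluation crop
# differs from the one used in online eval (where both are 352 x 1216).
height, width = gt_depth.shape          # full-resolution gt
top_margin = height - 352
left_margin = (width - 1216) // 2

pred_uncropped = np.zeros((height, width), dtype=np.float32)
pred_uncropped[top_margin:top_margin + 352,
               left_margin:left_margin + 1216] = pred_depth
pred_depth = pred_uncropped             # now "as large as gt"
```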
I noticed that the AdaBins GitHub repo uses and refers to your code, and they removed this ambiguity. To be honest, I think your Eigen-split result is underestimated.
Looking forward to your response.