
problems about the evaluation #2

Open
chaoer opened this issue Jul 26, 2020 · 2 comments

chaoer commented Jul 26, 2020

Hi,

Thanks for your great work.

When I tried to test on the Matterport3D dataset with the "DM-LRN" config and the config file "DM-LRN_efficientnet-b4_pepper.yaml", I couldn't get results similar to those shown in the README.

The testing environment was created using conda as you described. The testing command is the third line in the Evaluation part.

So I am wondering whether there is anything I missed, or whether there are any tricks I should apply when running the evaluation?

P.S.
Could you please tell me the path of the images used for the illustrations? That way I can test on them and figure out whether I set up the testing environment correctly.


YunfanZhang42 commented Aug 2, 2020

Hi,

Thank you for your inspiring work! I downloaded your code and pretrained model weights and tried to evaluate the predictions on the same Matterport test split as "Deep Depth Completion of a Single RGB-D Image" by Yinda Zhang et al., but unfortunately I was unable to reproduce the results in the paper. I tried all models from DM-LRN b2 to DM-LRN b4, but none of the results matched those reported in the paper. I am wondering what might have happened here - do the depth images need to be preprocessed in a certain way, or is there a different set of model weights available? Thanks in advance for your help!

Best,
Yunfan


tmanh commented Oct 27, 2020

Hi, me too. I also tried all the models, and the results were not even close to what is stated in the paper. I suspect they changed something right before uploading the code to GitHub (e.g., a refactor) and accidentally broke the settings. Or perhaps they applied some preprocessing but forgot to mention it.
