
Cannot reproduce the results reported in the paper #81

Open
MengHao666 opened this issue Sep 11, 2021 · 9 comments

Comments

MengHao666 commented Sep 11, 2021

Hi, I trained the model following your configuration and code exactly, but got much better results than in your paper.

Specifically, I ran experiments on the Machine_annot subset and got 10.52/15.99 for SH/IH MPJPE, which is much better than the reported 12.56/18.59. I am confused by this result, as I need to compare against your numbers. What should I do?

[image]

Author

MengHao666 commented Sep 12, 2021

Hi, when I use the 15-epoch checkpoint trained on Train(M) and test on Test(M), I get 10.62/16.21 for SH/IH MPJPE, which is still much better than the reported 12.56/18.59. Since I need to compare against the result in the paper, I am now trying to find which checkpoint comes closest to 12.56/18.59.

Also, could you provide the checkpoint that reproduces the 12.56/18.59 MPJPE? I also need to compare against the results of the two training settings (SH only, and SH+IH) on the machine_annot subset, i.e. M rather than H+M, just like in the following picture. Could you provide these checkpoints so we can have a fair comparison?

[image]

Collaborator

mks0601 commented Sep 12, 2021

That is weird... I haven't changed the code and datasets much since writing the paper.
Anyway, why not just use the numbers reported in the paper? Do you need some checkpoints?

Author

MengHao666 commented Sep 12, 2021

> That is weird... I haven't changed the code and datasets much since writing the paper.
> Anyway, why not just use the numbers reported in the paper? Do you need some checkpoints?

I changed the following line to `trans_test = 'gt' # gt, rootnet`; does that have an effect?

trans_test = 'rootnet' # gt, rootnet

Collaborator

mks0601 commented Sep 12, 2021

It surely affects the results a lot. With `gt`, the GT root joint depth is used during inference. Please set it to `rootnet`.
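For anyone else hitting this: the `trans_test` option decides where the root joint depth comes from when converting root-relative 3D predictions into absolute coordinates, which is why it changes MPJPE so much. Here is a minimal sketch of that effect, with hypothetical `to_absolute`/`mpjpe` helpers and toy numbers; this is not the actual InterHand2.6M code, just an illustration assuming NumPy:

```python
import numpy as np

def to_absolute(rel_joints, root_depth):
    """Add the root joint depth back to root-relative z coordinates.

    rel_joints: (J, 3) array whose z is relative to the hand root joint.
    root_depth: scalar depth of the root joint (mm).
    """
    abs_joints = rel_joints.copy()
    abs_joints[:, 2] += root_depth
    return abs_joints

def mpjpe(pred, gt):
    """Mean per-joint position error (mm)."""
    return float(np.mean(np.linalg.norm(pred - gt, axis=1)))

# Toy example: the same root-relative predictions, two root depth sources.
rng = np.random.default_rng(0)
rel_pred = rng.normal(size=(21, 3)) * 10.0        # predicted, root-relative
gt_abs = to_absolute(rel_pred, root_depth=500.0)  # pretend GT root depth is 500 mm

# trans_test = 'gt': use the ground-truth root depth -> depth error vanishes.
err_gt = mpjpe(to_absolute(rel_pred, 500.0), gt_abs)

# trans_test = 'rootnet': use an (imperfect) estimated depth -> error grows.
err_rootnet = mpjpe(to_absolute(rel_pred, 520.0), gt_abs)

print(err_gt, err_rootnet)  # 0.0 vs 20.0
```

Using the GT depth hides all root localization error, so numbers obtained with `trans_test = 'gt'` are not comparable to the paper's `rootnet` numbers.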

@MengHao666
Author

> It surely affects the results a lot. With `gt`, the GT root joint depth is used during inference. Please set it to `rootnet`.

I am sorry, but when I set this parameter to `rootnet`, I get a very bad result, 86.15/69.97. I think you may have forgotten something. What should I do? Could you give me the checkpoint to reproduce the 12.56/18.59 result? I am confused now.

Collaborator

mks0601 commented Sep 12, 2021

You'd better download the rootnet's output again. I fixed some bugs several months ago.

@MengHao666
Author

> You'd better download the rootnet's output again. I fixed some bugs several months ago.

I will try again.

@MengHao666
Author

> You'd better download the rootnet's output again. I fixed some bugs several months ago.

I am sorry to see that your updated files do not distinguish which annot_subset the RootNet output belongs to. All my experiments are on the machine_annot subset.

Collaborator

mks0601 commented Sep 12, 2021

Those files can be used across all subsets.
