Question about evaluation on 3DPW #56
Comments
Hi,
I don't understand how camera parameters are used in the evaluation.

On Fri, Nov 25, 2022, 10:10 AM, Mimi Liao wrote:

> Hi,
> I noticed that in your code, you evaluate the final 3D vertex results (MPVPE, MPJPE) after adding the predicted camera parameters. Of course, the ground-truth camera parameters are also added to the ground-truth SMPL vertices before evaluation. However, I noticed that in other papers (like METRO), the 3D vertices are evaluated without camera predictions, which means the accuracy of the camera predictions does not affect the final results.
> Do these differences in the evaluation process make the results incomparable between your method and others?
> Thanks.
--
Best regards,
Hongsuk Choi
https://hongsukchoi.github.io
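For readers comparing the two protocols discussed above, here is a minimal NumPy sketch of root-relative error versus error computed on absolute, translated coordinates. The array names, shapes, root-joint index, and translation values are illustrative assumptions, not code from this repository.

```python
import numpy as np

def mpjpe(pred_joints, gt_joints):
    """Mean per-joint position error, in the same units as the inputs."""
    return np.linalg.norm(pred_joints - gt_joints, axis=-1).mean()

# Illustrative inputs: (J, 3) joints in camera coordinates (hypothetical data).
J = 24
gt = np.random.rand(J, 3)
pred = gt + 0.01 * np.random.randn(J, 3)

# Convention A (root-relative evaluation): subtract the root (pelvis) joint from
# both prediction and GT, so any global translation (including a predicted
# camera translation) cancels out and does not affect the metric.
root_idx = 0
err_root_relative = mpjpe(pred - pred[root_idx], gt - gt[root_idx])

# Convention B: add the translations first, then compare absolute coordinates.
# Here the accuracy of the predicted translation does change the number.
pred_cam_trans = np.array([0.02, -0.01, 3.0])   # hypothetical predicted translation
gt_cam_trans = np.array([0.0, 0.0, 3.0])        # hypothetical GT translation
err_absolute = mpjpe(pred + pred_cam_trans, gt + gt_cam_trans)
```

Under the root-relative convention the translation term, whether predicted or ground truth, cancels in the subtraction, so the accuracy of a predicted camera translation does not change the reported MPJPE/MPVPE.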
Hi,
GT should be in the camera coordinate system. Other methods that assume a single-image input do the same. Where are the predicted camera parameters used?
On Sat, Nov 26, 2022, 1:13 AM, Mimi Liao wrote:

> In
> https://github.com/hongsukchoi/Pose2Mesh_RELEASE/blob/7f24836c36dfdd52be6735505f44af11ec97e666/data/PW3D/dataset.py#L93,
> the ground truth 3DPW vertices are generated with the camera translation.
--
Best regards,
Hongsuk Choi
https://hongsukchoi.github.io
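To make the coordinate-system point concrete, below is a minimal sketch of mapping world-coordinate GT SMPL vertices into the camera coordinate system. It assumes extrinsics in the usual x_cam = R x_world + t convention (as provided per frame by 3DPW); the variable names and placeholder values are illustrative, not the repository's code.

```python
import numpy as np

def world_to_camera(verts_world, R, t):
    """Map (N, 3) world-coordinate vertices into the camera coordinate system,
    assuming the convention x_cam = R @ x_world + t."""
    return verts_world @ R.T + t

# Hypothetical example: GT SMPL vertices in world coordinates plus camera extrinsics.
verts_world = np.random.rand(6890, 3)   # the SMPL mesh has 6890 vertices
R = np.eye(3)                           # placeholder rotation
t = np.array([0.0, 0.0, 3.0])           # placeholder translation (metres)

verts_cam = world_to_camera(verts_world, R, t)
```

Once both the prediction and the GT mesh live in the same camera coordinate frame (and are root-aligned before measuring error), single-image methods can be evaluated against the same reference.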
Hi,
Do you know the camera coordinate system, the world coordinate system, and the SMPL coordinate system?

Best regards,
Hongsuk Choi
https://hongsukchoi.github.io/

On Mon, Nov 28, 2022, 1:19 AM, Mimi Liao wrote:

> @hongsukchoi
> Sorry, I found that the predicted camera parameter I mentioned is in another repo of yours:
> https://github.com/hongsukchoi/3DCrowdNet_RELEASE/blob/6e773064c8d6950b382f66d76b615aada4f2594b/main/model.py#L65
> Also, what do you mean by this sentence: "Other methods assuming a single image input will do the same"?
> Doesn't everybody use the same 3DPW dataset?
> Thank you so much!
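For context on the predicted camera parameter linked above: many single-image methods predict weak-perspective parameters (a scale plus an image-plane offset) and convert them into an approximate 3D camera translation using an assumed focal length and input resolution. The sketch below shows that common conversion in generic form; it is not a copy of the 3DCrowdNet code, and the focal length and resolution values are assumptions.

```python
import numpy as np

def weak_perspective_to_translation(scale, tx, ty, focal=5000.0, img_res=224):
    """Convert predicted weak-perspective params (s, tx, ty) into a rough 3D
    camera translation [tx, ty, tz], assuming a fixed virtual focal length.

    tz follows the pinhole relation: a unit-sized object rendered at scale s
    on an img_res image sits at depth of roughly 2 * focal / (img_res * s).
    """
    tz = 2.0 * focal / (img_res * scale + 1e-9)
    return np.array([tx, ty, tz])

# Hypothetical predicted camera parameters from a network head.
cam_trans = weak_perspective_to_translation(scale=0.9, tx=0.01, ty=-0.02)
```

Because this translation only shifts the whole mesh, root-aligned MPJPE/MPVPE is unaffected by how accurately it is predicted.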
Hi, thank you so much.
To conclude, you can compare Pose2Mesh with METRO or any other method. The GT 3D meshes are parsed in the camera coordinate system. The SMPL coordinate system is the world coordinate system, where the template SMPL mesh lies.