Evaluation results for coco val #1
Can you try this with BATCH_SIZE = 1? I realize it will take longer, but BATCH_SIZE = 8 hasn't been tested. Assuming that works, I can try to see what the problem with 8 is (generally, Detectron2 discourages using a batch size > 1 during inference, and I probably assumed this somewhere in the code).
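For reference, rerunning the evaluation with a batch size of 1 would look roughly like the following. This is a sketch following common Detectron2 CLI conventions; the config-file path is a placeholder and the exact `BATCH_SIZE` override key is assumed from this thread, not confirmed against the BoundaryFormer code.

```shell
# Hypothetical invocation; substitute the real config path and verify
# the BATCH_SIZE key against the project's config definitions.
python train_net.py \
    --config-file path/to/boundaryformer_coco_config.yaml \
    --eval-only --num-gpus 1 \
    MODEL.WEIGHTS path/to/coco_model.pth \
    BATCH_SIZE 1
```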
Hi, I find that the BATCH_SIZE parameter has no effect on inference; Detectron2 always sets the batch size to 1 during inference. By the way, could you please provide instructions for building your diff_ras module? I am not sure whether I built the module correctly. I just execute:
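For what it's worth, a PyTorch C++/CUDA extension like this is typically built with setuptools. A common sequence looks like the following; the directory path is a placeholder, and whether diff_ras ships a `setup.py` in exactly this layout is an assumption, not confirmed for this repo.

```shell
# Typical build steps for a PyTorch C++/CUDA extension with a setup.py;
# the path below is a placeholder, not the confirmed diff_ras location.
cd path/to/diff_ras
python setup.py build_ext --inplace   # or: pip install -e .
python -c "import diff_ras"           # sanity-check that the module imports
```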
With respect to the eval results, it looks like my PyTorch had issues saving it (perhaps too old a version) and the boundary head weights were not loaded. Please re-download the COCO model, and I think it should be fine.
Thanks for the updated model! Yes, now it is fine. Sorry, I have another question: to check whether I compiled the diff_ras module correctly, I ran run-rasterizer-tests.py and the logs look like this: the GT Rasterization agreement is around 0.6, while Rasterized agreement (tau) and Gradient agreement (tau) are around 1. Does this mean that the rasterizer can work properly?
Yes, exactly.
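For intuition about what a "(tau)" agreement score measures, here is a pure-Python sketch of a Kendall-tau-style agreement: the fraction of value pairs ordered the same way in two sequences, so 1.0 means identical orderings. This is only an illustration; the actual metric definitions live in run-rasterizer-tests.py and may differ.

```python
# Illustrative only -- not the project's actual metric. A tau-style
# agreement counts how often two sequences rank pairs of positions
# the same way; 1.0 means the orderings match perfectly.

def tau_agreement(a, b):
    """Fraction of index pairs (i, j) whose ordering agrees in a and b."""
    assert len(a) == len(b) and len(a) >= 2
    concordant = 0
    total = 0
    for i in range(len(a)):
        for j in range(i + 1, len(a)):
            total += 1
            if (a[i] - a[j]) * (b[i] - b[j]) > 0:
                concordant += 1
    return concordant / total

# Two sequences with identical orderings agree perfectly:
print(tau_agreement([0.1, 0.5, 0.9], [1, 2, 3]))  # -> 1.0
```

An agreement near 1 for the rasterized values and gradients, as in the logs above, indicates the compiled module orders values consistently with the reference.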
Alright, thanks!
May I ask what these three results represent? Thank you!
Hi, thanks for your great work! I loaded your pretrained COCO model and ran inference. The results are as follows:
[06/29 21:55:25 d2.engine.defaults]: Evaluation results for coco_2017_val in csv format:
[06/29 21:55:25 d2.evaluation.testing]: copypaste: Task: bbox
[06/29 21:55:25 d2.evaluation.testing]: copypaste: AP,AP50,AP75,APs,APm,APl
[06/29 21:55:25 d2.evaluation.testing]: copypaste: 38.4919,59.4363,41.7748,22.3543,41.2834,50.4875
[06/29 21:55:25 d2.evaluation.testing]: copypaste: Task: segm
[06/29 21:55:25 d2.evaluation.testing]: copypaste: AP,AP50,AP75,APs,APm,APl
[06/29 21:55:25 d2.evaluation.testing]: copypaste: 7.8351,29.3128,1.0689,4.7795,8.2153,10.8173
You can see that the segm AP is much lower than the number in your paper. I also visualized the results using tools/visualize_json_results.py, and the visualized results are also not good. I followed your default hyperparameters, except that I set num-gpus to 1 and BATCH_SIZE = 8. Could you please give some hints on what may be wrong? Thanks in advance!
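As an aside, the `copypaste:` lines that Detectron2 prints are designed to be machine-readable. A small helper like the one below (hypothetical, not part of Detectron2) pairs each task's header and value rows into a dict, which makes comparing runs like the one above easier:

```python
# Hypothetical helper (not part of Detectron2) that groups the
# "copypaste:" header/value lines from a Detectron2 log into a
# {task: {metric: value}} dict.

def parse_copypaste(lines):
    results, task = {}, None
    rows = [l.split("copypaste:", 1)[1].strip() for l in lines if "copypaste:" in l]
    it = iter(rows)
    for row in it:
        if row.startswith("Task:"):
            task = row.split(":", 1)[1].strip()
            header = next(it).split(",")
            values = [float(v) for v in next(it).split(",")]
            results[task] = dict(zip(header, values))
    return results

log = [
    "[06/29 21:55:25 d2.evaluation.testing]: copypaste: Task: segm",
    "[06/29 21:55:25 d2.evaluation.testing]: copypaste: AP,AP50,AP75,APs,APm,APl",
    "[06/29 21:55:25 d2.evaluation.testing]: copypaste: 7.8351,29.3128,1.0689,4.7795,8.2153,10.8173",
]
print(parse_copypaste(log)["segm"]["AP"])  # -> 7.8351
```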