In SAM-6D/Pose_Estimation_Model/run_inference_custom.py, line 205, object proposals with a resolution of less than 32 pixels are filtered out. It would be useful to take this hyperparameter from the args rather than hard-coding it. With occluded objects, even given the right segmentation mask, the model fails to predict the final pose because this filter discards the mask. See the snippet further below.
It seems like they perform the same kind of filtering during testing on BOP, but there you can change the threshold via SAM-6D/Pose_Estimation_Model/config/base.yaml under test_dataset: minimum_n_point.
The actual part of the code that performs the filtering is at SAM-6D/Pose_Estimation_Model/provider/bop_test_dataset.py, line 121; see below.
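From memory, that check looks roughly like the following; this is a paraphrased sketch, so the exact attribute and variable names may differ from the repo:

```python
# Paraphrased sketch of the filter around bop_test_dataset.py line 121.
# Unlike the custom-inference script, the threshold comes from the config
# (test_dataset: minimum_n_point in base.yaml) rather than a literal 32.
if np.sum(mask) <= self.minimum_n_point:
    continue  # proposal too small, skip it
bbox = get_bbox(mask)
y1, y2, x1, x2 = bbox
```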
For reference, the hard-coded filter in run_inference_custom.py (line 205):

```python
if np.sum(mask) > 32:
    bbox = get_bbox(mask)
    y1, y2, x1, x2 = bbox
else:
    continue
```
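As a concrete suggestion, the threshold could be exposed through the script's argparse parser instead; a minimal sketch, assuming the script's existing parser and args objects (the flag name --min_mask_pixels is just an example, not an existing option):

```python
# Sketch: expose the mask-size threshold as a CLI argument.
# Add next to the script's other arguments:
parser.add_argument("--min_mask_pixels", type=int, default=32,
                    help="skip proposals whose mask covers fewer pixels than this")

# ...and replace the hard-coded literal at line 205:
if np.sum(mask) > args.min_mask_pixels:
    bbox = get_bbox(mask)
    y1, y2, x1, x2 = bbox
else:
    continue
```

That would also make it easy to lower the threshold when you know the small masks of heavily occluded objects are valid.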
I'm wondering whether the same filtering is applied during training or not?