get min pixel resolution of segmentations as hyperparameter rather than hardcoding #82

Open
SajjadPSavoji opened this issue Nov 20, 2024 · 1 comment

Comments

@SajjadPSavoji

SajjadPSavoji commented Nov 20, 2024

In SAM-6D/Pose_Estimation_Model/run_inference_custom.py, line 205, object proposals whose mask covers 32 pixels or fewer are filtered out. It would be useful to take this hyperparameter from the args rather than hardcoding it. With occluded objects, even given the right segmentation mask, the model will fail to predict the final pose because it ignores the mask. See below.

```python
if np.sum(mask) > 32:
    bbox = get_bbox(mask)
    y1, y2, x1, x2 = bbox
else:
    continue
```
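A minimal sketch of the proposed change, assuming an argparse-style CLI; the flag name `--min_pixel_count` is hypothetical, and `get_bbox` here is a stand-in for the repo's helper:

```python
import argparse

import numpy as np


def get_bbox(mask):
    # Stand-in for the repo's get_bbox helper: tight box around nonzero pixels.
    ys, xs = np.nonzero(mask)
    return ys.min(), ys.max(), xs.min(), xs.max()


def filter_proposals(masks, min_pixel_count):
    # Keep only masks covering more than min_pixel_count pixels,
    # mirroring the check at run_inference_custom.py line 205.
    return [get_bbox(m) for m in masks if np.sum(m) > min_pixel_count]


if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    # Hypothetical flag replacing the hardcoded 32.
    parser.add_argument("--min_pixel_count", type=int, default=32)
    args = parser.parse_args()

    occluded = np.zeros((64, 64), dtype=np.uint8)
    occluded[10:15, 10:15] = 1  # 25-pixel mask, e.g. a heavily occluded object
    # Dropped at the default of 32; kept with --min_pixel_count 16.
    print(filter_proposals([occluded], args.min_pixel_count))
```

With the default of 32, the 25-pixel mask is silently dropped, which is exactly the failure mode described above; passing `--min_pixel_count 16` keeps it.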

I'm wondering whether the same filtering is applied during training.

@SajjadPSavoji
Author

It seems they perform the same kind of filtering during testing on BOP, but there the threshold can be changed via SAM-6D/Pose_Estimation_Model/config/base.yaml under test_dataset: minimum_n_point.
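For reference, that entry nests as described above; the default value shown here is an assumption, so check your checkout of base.yaml:

```yaml
test_dataset:
  minimum_n_point: 32  # assumed default; lower it to keep small/occluded masks
```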

The code that performs this filtering is in SAM-6D/Pose_Estimation_Model/provider/bop_test_dataset.py at line 121; see below.

```python
if np.sum(mask) > self.minimum_n_point:
    bbox = get_bbox(mask)
    y1, y2, x1, x2 = bbox
```
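And a hedged sketch of how that config value likely reaches the filter; the class skeleton and config access are assumptions, only the minimum_n_point attribute and the comparison come from the snippet above:

```python
import numpy as np


class BOPTestDataset:
    # Hypothetical skeleton; only the minimum_n_point handling mirrors
    # provider/bop_test_dataset.py line 121.
    def __init__(self, cfg):
        # Presumably populated from config/base.yaml -> test_dataset.minimum_n_point
        self.minimum_n_point = cfg["test_dataset"]["minimum_n_point"]

    def keep_mask(self, mask):
        # Masks covering minimum_n_point pixels or fewer are skipped.
        return np.sum(mask) > self.minimum_n_point
```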
