Hello, I am running into the same error. Were you able to solve it?
```
(sam6d) js@js-UbtSer:~/Documents/SAM-6D/SAM-6D-main/SAM-6D/Pose_Estimation_Model$ python run_inference_custom.py --output_dir $OUTPUT_DIR --cad_path $CAD_PATH --rgb_path $RGB_PATH --depth_path $DEPTH_PATH --cam_path $CAMERA_PATH --seg_path $SEG_PATH
set CUDA_VISIBLE_DEVICES as 0
=> creating model ...
load pre-trained checkpoint from: checkpoints/mae_pretrain_vit_base.pth
=> extracting templates ...
Traceback (most recent call last):
  File "/home/js/Documents/SAM-6D/SAM-6D-main/SAM-6D/Pose_Estimation_Model/run_inference_custom.py", line 277, in
    all_tem_pts, all_tem_feat = model.feature_extraction.get_obj_feats(all_tem, all_tem_pts, all_tem_choose)
  File "/home/js/Documents/SAM-6D/SAM-6D-main/SAM-6D/Pose_Estimation_Model/../Pose_Estimation_Model/model/feature_extraction.py", line 181, in get_obj_feats
    return sample_pts_feats(tem_pts, tem_feat, npoint)
  File "/home/js/Documents/SAM-6D/SAM-6D-main/SAM-6D/Pose_Estimation_Model/../Pose_Estimation_Model/utils/model_utils.py", line 58, in sample_pts_feats
    sample_idx = furthest_point_sample(pts, npoint)
  File "/home/js/anaconda3/envs/sam6d/lib/python3.9/site-packages/torch/autograd/function.py", line 598, in apply
    return super().apply(*args, **kwargs)  # type: ignore[misc]
  File "/home/js/Documents/SAM-6D/SAM-6D-main/SAM-6D/Pose_Estimation_Model/model/pointnet2/pointnet2_utils.py", line 71, in forward
    fps_inds = _ext.furthest_point_sampling(xyz.contiguous(), npoint)
RuntimeError: points must be a contiguous tensor
```
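For context on what the compiled `_ext.furthest_point_sampling` kernel is complaining about: "contiguous" means the tensor's data is laid out row-major in one unbroken buffer, and views produced by transposes or slices are not. Note that the traceback already shows `xyz.contiguous()` at the call site, so if the check still fails, a plausible cause is a stale or mismatched build of the `pointnet2` extension (rebuilding it against the current PyTorch is a common remedy); this is a guess, not a confirmed fix. The layout issue itself can be sketched with NumPy, which follows the same contiguity rules as `torch.Tensor.contiguous()`:

```python
import numpy as np

# An (N, 3) point array, row-major and contiguous as allocated.
pts = np.arange(12, dtype=np.float32).reshape(4, 3)
print(pts.flags['C_CONTIGUOUS'])    # True

# A transpose is a strided view over the same storage: the elements
# are no longer adjacent in memory, so CUDA kernels that assume a
# flat row-major buffer reject it.
view = pts.T
print(view.flags['C_CONTIGUOUS'])   # False

# np.ascontiguousarray (the analogue of tensor.contiguous() in
# PyTorch) copies the view into a fresh row-major buffer.
fixed = np.ascontiguousarray(view)
print(fixed.flags['C_CONTIGUOUS'])  # True
print(np.array_equal(fixed, view))  # True: same values, new layout
```

If rebuilding the extension does not help, one hedged workaround is to force contiguity one frame earlier, e.g. `pts = pts.contiguous()` in `sample_pts_feats` before calling `furthest_point_sample`.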