  File "/media/hanxu/Documents/Paper003-underwater-segmentation/mmsegmentation-1.1.2/mmseg/models/segmentors/encoder_decoder.py", line 178, in loss
    loss_decode = self._decode_head_forward_train(x, data_samples)
  File "/media/hanxu/Documents/Paper003-underwater-segmentation/mmsegmentation-1.1.2/mmseg/models/segmentors/encoder_decoder.py", line 139, in _decode_head_forward_train
    loss_decode = self.decode_head.loss(inputs, data_samples,
  File "/media/hanxu/Documents/Paper003-underwater-segmentation/mmsegmentation-1.1.2/mmseg/models/decode_heads/decode_head.py", line 262, in loss
    losses = self.loss_by_feat(seg_logits, batch_data_samples)
  File "/media/hanxu/Documents/Paper003-underwater-segmentation/mmsegmentation-1.1.2/mmseg/models/decode_heads/decode_head.py", line 305, in loss_by_feat
    seg_label = self._stack_batch_gt(batch_data_samples)
  File "/media/hanxu/Documents/Paper003-underwater-segmentation/mmsegmentation-1.1.2/mmseg/models/decode_heads/decode_head.py", line 289, in _stack_batch_gt
    return torch.stack(gt_semantic_segs, dim=0)
RuntimeError: stack expects each tensor to be equal size, but got [1, 320, 320] at entry 0 and [1, 339, 320] at entry 2
I ran into this error. How can I fix it?
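The size mismatch (`[1, 320, 320]` at entry 0 vs. `[1, 339, 320]` at entry 2) means the ground-truth masks reach `torch.stack` in `_stack_batch_gt` with different spatial sizes, i.e. the training pipeline is not cropping/padding every sample to one common shape. In mmseg 1.x this is commonly addressed by giving `RandomCrop` a fixed `crop_size` in `train_pipeline` and setting `size=crop_size` on `SegDataPreProcessor`, which pads each batch (image and `gt_sem_seg`) to that size. A minimal config sketch, assuming a 320x320 crop; the `RandomResize` scale and ratio values below are illustrative placeholders, not taken from this issue:

```python
# Sketch of an mmseg 1.x config fragment that forces a uniform
# ground-truth size per batch; resize values are assumptions.
crop_size = (320, 320)

data_preprocessor = dict(
    type='SegDataPreProcessor',
    # Padding every sample (image and gt_sem_seg) up to `size` means
    # torch.stack in _stack_batch_gt always sees equal-sized tensors.
    size=crop_size)

train_pipeline = [
    dict(type='LoadImageFromFile'),
    dict(type='LoadAnnotations'),
    dict(type='RandomResize', scale=(640, 640), ratio_range=(0.5, 2.0),
         keep_ratio=True),
    # RandomCrop caps each sample at crop_size; anything still smaller
    # than crop_size is padded by the data preprocessor above.
    dict(type='RandomCrop', crop_size=crop_size, cat_max_ratio=0.75),
    dict(type='RandomFlip', prob=0.5),
    dict(type='PackSegInputs'),
]
```

If you prefer not to pad, the alternative is a batch size of 1, but fixing the pipeline/preprocessor is the usual route.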