I have a problem setting a custom input resolution in the YAML config. When I try to train FastestDet with a resolution other than the default 352x352 (for example, 240x240), I get a RuntimeError:
Load yaml sucess...
<utils.tool.LoadYaml object at 0x00000299198CD8B0>
Initialize params from:./module/shufflenetv2.pth
Traceback (most recent call last):
File "E:\Repositories\FastestDet\train.py", line 134, in <module>
model = FastestDet()
File "E:\Repositories\FastestDet\train.py", line 42, in __init__
summary(self.model, input_size=(3, self.cfg.input_height, self.cfg.input_width))
File "C:\Users\Reutov\.anaconda3\envs\experimental_env\lib\site-packages\torchsummary\torchsummary.py", line 72, in summary
model(*x)
File "C:\Users\Reutov\.anaconda3\envs\experimental_env\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "E:\Repositories\FastestDet\module\detector.py", line 25, in forward
P = torch.cat((P1, P2, P3), dim=1)
RuntimeError: Sizes of tensors must match except in dimension 1. Expected size 15 but got size 16 for tensor number 2 in the list.
Shapes of P1, P2 and P3 at 240x240: torch.Size([2, 48, 30, 30]), torch.Size([2, 96, 15, 15]), torch.Size([2, 192, 8, 8])
Shapes of P1, P2 and P3 at 352x352: torch.Size([2, 48, 44, 44]), torch.Size([2, 96, 22, 22]), torch.Size([2, 192, 11, 11])
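The mismatch can be reproduced with simple stride arithmetic. Judging from the shapes above, the three feature maps come from stages with strides 8, 16, and 32 (with odd sizes rounded up), and concatenation only works when each level's grid is exactly half of the previous one. A minimal sketch, assuming those stride values (inferred from the printed shapes, not taken from the FastestDet source):

```python
import math

def pyramid_sizes(input_size, strides=(8, 16, 32)):
    """Spatial size of each feature map; odd intermediate sizes round up."""
    return [math.ceil(input_size / s) for s in strides]

for size in (352, 240, 224):
    p1, p2, p3 = pyramid_sizes(size)
    # torch.cat after upsampling requires each grid to be twice the next one
    compatible = (p3 * 2 == p2) and (p2 * 2 == p1)
    print(size, (p1, p2, p3), "OK" if compatible else "mismatch")
```

At 240x240 this gives (30, 15, 8): 240/32 = 7.5 rounds up to 8, which no longer matches 15 after upsampling, hence the error. Any resolution divisible by 32 (e.g. 224, 256, 320) keeps the grids aligned.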
I can guess that I need to change the backbone architecture a little, but then the pretrained weights will not load correctly.
How can I change the input resolution for training FastestDet?
Can I change the input resolution of FastestDet during conversion to ONNX?
Thanks in advance :)