
Input size and experiment parameters #13

Open
arthur-andre opened this issue Jun 26, 2024 · 3 comments

Comments

@arthur-andre

Hello,

I'm trying to reuse your model on either the SportsMOT dataset or my own dataset, but in both cases I get no results from your tracking experiment:

```
python3 tools/track_mixsort.py -expn SportsMOT -f exps/example/mot/yolox_x_sportsmot.py -c pretrained/yolox_x_sports_train.pth.tar -b 1 -d 1 --config track
```

I tried to track down where the issue occurs, and I found it happens during model inference.

My guess is that it relates to the input size and the experiment parameters.

Both the SportsMOT dataset and my own dataset contain images of shape (720, 1280, 3), so I changed the experiment parameters to the following input and test sizes:

[screenshot: modified experiment input/test size parameters]

However, I get errors during model inference:

[screenshot: inference error output]

The script exits after the output shown above.

Maybe I missed a preprocessing step needed to get the input to the right size. (I also tried the repository's default parameters, i.e. self.test_size = (800, 1440), but that didn't work either.)
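One thing worth checking (my own sketch, not from the repository): YOLOX-style detectors downsample by strides 8/16/32, so the input height and width are usually padded to multiples of 32 before inference, and 720 is not a multiple of 32. A hypothetical helper to round a dimension up to the nearest stride multiple:

```python
def pad_to_multiple(size, multiple=32):
    """Round a dimension up to the nearest multiple (e.g. of the max stride)."""
    return ((size + multiple - 1) // multiple) * multiple

# A 720x1280 frame: the width is already a multiple of 32, the height is not.
print(pad_to_multiple(720))   # 736
print(pad_to_multiple(1280))  # 1280
```

If the preprocessing pipeline does not pad the 720-pixel dimension (e.g. to 736), odd feature-map sizes can appear deeper in the network.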

@ret-1
Collaborator

ret-1 commented Jun 26, 2024

Hi @arthur-andre ,

It seems the program quits without any error output, so I recommend adding some breakpoints and stepping through the program to see where it stops.

@arthur-andre
Author

Hi,

The program quits exactly at the concatenation step in the forward pass (torch.cat).

[screenshot: traceback at the torch.cat call]

It tries to concatenate tensors of shape torch.Size([1, 640, 46, 80]) and torch.Size([1, 640, 45, 80]) (the sizes in the output printed above).

These shapes should presumably match, which is why I asked about the input size earlier.
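The 46-vs-45 mismatch is consistent with a stride-divisibility issue (my reading of the numbers, not confirmed in the thread): with an input height of 720, a stack of "same"-padded stride-2 convolutions gives 45 rows on the stride-16 branch, but ceil(45 / 2) = 23 rows on the stride-32 branch, which upsamples back to 46. A sketch of the arithmetic:

```python
import math

def feature_rows(h, stride):
    # "Same"-padded stride-2 convs halve the size with a ceiling at each stage.
    for _ in range(int(math.log2(stride))):
        h = math.ceil(h / 2)
    return h

h = 720
p16 = feature_rows(h, 16)         # 45 rows on the stride-16 branch
p32_up = feature_rows(h, 32) * 2  # 23 * 2 = 46 rows after upsampling
print(p16, p32_up)                # 45 vs 46 -> torch.cat fails
```

With a height padded to 736, both branches yield 46 rows and the concatenation would line up, which again points at input-size preprocessing.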

@ret-1
Collaborator

ret-1 commented Jun 28, 2024

Hello @arthur-andre ,

I think the problem is within YOLOX itself. I import the YOLOPAFPN module directly from yolox.models, and this error occurs regardless of the input tensor's shape.

I recommend checking your PyTorch version, or seeking help at https://github.com/Megvii-BaseDetection/YOLOX
