Spconv 2.2.x #145

Open · wants to merge 31 commits into base: master

Conversation

L-Reichardt

This pull request exists only so that this fork shows up linked to the original repository; it does not need to be merged.

I have retrained with Spconv 2.1.21, with slightly lower results. The purpose of this pull request is simply to offer weights/code with the updated Spconv library.
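
For anyone migrating their own code along with the updated weights, below is a minimal sketch of the kind of change the Spconv 2.x update involves. The network, layer sizes, and shapes here are made up for illustration and are not the model in this fork; the main user-visible difference from Spconv 1.x is the spconv.pytorch import path, while layers such as SubMConv3d and the SparseConvTensor wrapper keep a similar interface.

```python
# Illustrative only: a tiny submanifold network using the Spconv 2.x PyTorch API.
# Spconv 1.x:  import spconv
# Spconv 2.x:  import spconv.pytorch as spconv
import torch.nn as nn
import spconv.pytorch as spconv


class TinySparseNet(nn.Module):
    """Hypothetical two-layer network, just to show the 2.x calling convention."""

    def __init__(self, spatial_shape):
        super().__init__()
        self.spatial_shape = spatial_shape
        self.net = spconv.SparseSequential(
            spconv.SubMConv3d(4, 16, kernel_size=3, indice_key="subm0"),
            nn.BatchNorm1d(16),   # dense layers operate on the sparse tensor's features
            nn.ReLU(),
            spconv.SubMConv3d(16, 16, kernel_size=3, indice_key="subm0"),
        )

    def forward(self, features, indices, batch_size):
        # Spconv expects int32 coordinates of shape [N, 4]: (batch_idx, z, y, x).
        x = spconv.SparseConvTensor(
            features, indices.int(), self.spatial_shape, batch_size
        )
        return self.net(x)
```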

@xizaoqu

xizaoqu commented Jul 17, 2022

Could you provide the settings or the log file for the training run that reached mIoU 63.5? Thanks!

@L-Reichardt
Author

@xizaoqu I've updated the ReadMe in my fork.

@xizaoqu

xizaoqu commented Jul 18, 2022

The LR is 10 times larger than the official setting. Do you use multiple GPUs for training?
Besides, do you suffer from overfitting? I use 4 GPUs with a batch size of 4 and get a best mIoU of ~63 at around the 10th epoch, but afterwards the performance jitters dramatically, sometimes dropping even below 59.

@L-Reichardt
Author

L-Reichardt commented Jul 18, 2022

@xizaoqu Thank you, it was a typo. I used the settings in the config file (LR of 0.001) and trained on a single Nvidia 3060.

I had the same "issue": the model reached peak performance at around 10-15 epochs. Most likely the model can be further improved with an ablation study on the training hyperparameters and heavier augmentation. I won't train it any further for now, though, as I am working on another project. Maybe in the future I will retrain the Knowledge-Distilled version with the new Spconv version.
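
For reference, a common workaround for the jitter described above is simply to keep the checkpoint with the best validation mIoU rather than the last epoch. A minimal sketch, where train_one_epoch(), evaluate(), and the loader names are placeholders rather than functions from this repository:

```python
import torch

best_miou = 0.0
for epoch in range(num_epochs):
    # Placeholder training/eval steps; the real loops live in the training script.
    train_one_epoch(model, train_loader, optimizer)
    miou = evaluate(model, val_loader)  # assumed to return validation mIoU

    if miou > best_miou:
        # Only the best-scoring weights are kept, so later jittery epochs cannot
        # overwrite the peak checkpoint reached around epoch 10-15.
        best_miou = miou
        torch.save(model.state_dict(), "best_model.pt")
```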

@xizaoqu

xizaoqu commented Jul 18, 2022

I see. Thanks!

@L-Reichardt changed the title from Spconv 2.1.21 to Spconv 2.2.x on Oct 5, 2022