Multiple inconsistent training results #37

Open
ZhangYN1226 opened this issue Jan 30, 2024 · 1 comment

@ZhangYN1226

First of all, thank you very much for your code! It has been very helpful to me. I do have one question: taking the 1-1 dataset of SMD as an example, why does the F1 score of the test results differ significantly across multiple training and testing runs? Sometimes F1 reaches 0.7, and sometimes it jumps to 0.8. This makes it impossible for me to judge the true performance of the model. Why does this happen, and how can I address it?

@efvik

efvik commented Feb 16, 2024

Hi, it is typical for deep learning models to give different results from one training run to the next. This is because many operations involve randomness, such as how the weights are initialized and how the dataset is shuffled. If you want more consistent results, see: https://pytorch.org/docs/stable/notes/randomness.html
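
For reference, here is a minimal sketch of the kind of seeding that note describes; the `set_seed` helper name is illustrative and not part of this repository:

```python
# A minimal sketch of seeding the usual sources of randomness in a PyTorch
# training script, following the PyTorch randomness notes linked above.
# The `set_seed` helper is illustrative, not part of this repository.
import random

import numpy as np
import torch


def set_seed(seed: int = 42) -> None:
    """Seed Python, NumPy, and PyTorch RNGs so repeated runs match more closely."""
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)            # seeds the CPU (and current GPU) RNGs
    torch.cuda.manual_seed_all(seed)   # seeds all GPUs, if present
    # Trade some speed for deterministic cuDNN convolution algorithms.
    torch.backends.cudnn.deterministic = True
    torch.backends.cudnn.benchmark = False


set_seed(42)
# DataLoader shuffling has its own RNG; pass a seeded generator to pin it down.
generator = torch.Generator().manual_seed(42)
# loader = torch.utils.data.DataLoader(dataset, shuffle=True, generator=generator)
```

Note that even with full seeding, some CUDA operations are only deterministic at a performance cost, as the linked page explains.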

Apart from this, it is also normal to train the model several separate times (5, 10, or more) and report the mean and standard deviation of the results. This is how results are usually presented and compared in academic publications.
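
As a concrete example of that kind of reporting, you could aggregate the test F1 from each independent run; the numbers below are placeholders in the 0.7–0.8 range mentioned above, not real results:

```python
# Sketch of reporting mean ± standard deviation over repeated training runs.
import numpy as np

# Placeholder F1 scores, one per independent training run (not real results).
f1_scores = [0.71, 0.78, 0.74, 0.80, 0.73]

mean_f1 = np.mean(f1_scores)
std_f1 = np.std(f1_scores, ddof=1)  # sample standard deviation across runs
print(f"F1 = {mean_f1:.3f} ± {std_f1:.3f} over {len(f1_scores)} runs")
```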
