First of all, thank you very much for your code — it has been very helpful to me! I have one question: taking the SMD 1-1 dataset as an example, why does the test F1 score differ so much across repeated training and testing runs? Sometimes F1 reaches 0.7, and other times it jumps to 0.8. This makes it hard to judge the model's true performance. What causes this, and how can I fix it?
Hi, it is typical for deep learning models to give different results between training runs. This is because of randomness in many places, such as how the weights are initialized and how the dataset is shuffled. If you want more consistent results, see https://pytorch.org/docs/stable/notes/randomness.html
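As a minimal sketch of what that page recommends (the `set_seed` helper name and the seed value 42 are my own choices, not part of this repo):

```python
import random

import numpy as np
import torch


def set_seed(seed: int = 42) -> None:
    """Seed the common sources of randomness for (more) reproducible runs."""
    random.seed(seed)                     # Python's built-in RNG
    np.random.seed(seed)                  # NumPy RNG (e.g. dataset shuffling)
    torch.manual_seed(seed)               # PyTorch CPU RNG (weight init, dropout, ...)
    torch.cuda.manual_seed_all(seed)      # PyTorch GPU RNGs
    # Trade some speed for determinism in cuDNN kernels
    torch.backends.cudnn.deterministic = True
    torch.backends.cudnn.benchmark = False


# Call once at the very start of the training script, before building the model
set_seed(42)
```

Note that if you use a `DataLoader` with `num_workers > 0`, you may also need to pass a seeded `generator` and a `worker_init_fn`, as described in the PyTorch randomness notes linked above.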
Apart from this, it is also normal to train the model several separate times (5, 10, or more) and report the mean and standard deviation of the results. This is how results are usually presented and compared in academic publications.
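For example, with hypothetical F1 scores from five independent runs (these numbers are made up for illustration):

```python
import numpy as np

# F1 scores from 5 independent training runs on SMD 1-1 (illustrative values)
f1_scores = np.array([0.71, 0.78, 0.74, 0.80, 0.76])

print(f"F1 = {f1_scores.mean():.3f} ± {f1_scores.std(ddof=1):.3f}")
# -> F1 = 0.758 ± 0.035  (mean ± sample standard deviation)
```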