I think it may be expected not to encounter early stopping under certain conditions. Our experiments used only a training dataset with no held-out validation set, so the model could keep improving through the maximum number of iterations, possibly overfitting to the training data along the way. (I would not design experiments that way today, but it's what we did at the time with the datasets we studied.)
In your case it may currently be hard to distinguish optimization progress due to legitimate improvement from overfitting. Perhaps you could tell the difference by monitoring a held-out set for early stopping, as in the sketch below. If the held-out loss continues to decrease all the way to the maximum number of iterations, the longer training duration is probably genuinely necessary.
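Here is a minimal sketch of what patience-based early stopping on a held-out split could look like; it is not clstm's API, and `step_fn`/`eval_fn` are hypothetical callables standing in for whatever training and evaluation calls your setup provides:

```python
import math

def train_with_early_stopping(step_fn, eval_fn, max_iters=10000,
                              eval_every=100, patience=5):
    """step_fn() runs one training iteration; eval_fn() returns held-out loss."""
    best_loss = math.inf
    bad_evals = 0  # consecutive evaluations without improvement
    for it in range(1, max_iters + 1):
        step_fn()
        if it % eval_every == 0:
            loss = eval_fn()
            if loss < best_loss:
                best_loss, bad_evals = loss, 0
            else:
                bad_evals += 1
            # Stop once the held-out loss has plateaued; if this never
            # fires before max_iters, the longer training was warranted.
            if bad_evals >= patience:
                print(f"early stop at iter {it}; best held-out loss {best_loss:.4f}")
                return best_loss
    print(f"hit max_iters; held-out loss may still be improving (best {best_loss:.4f})")
    return best_loss
```

If this routine reaches `max_iters` without triggering, that is evidence the model is still making real progress rather than overfitting.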
clstm achieves early stopping on simulated data, but on real datasets the loss keeps decreasing and early stopping is never triggered.