How to evaluate tsai models using cross-validation #122
-
The answer might be obvious for folks familiar with different deep learning libraries, but for me it is not, and I want to avoid falling into any potential traps. From scikit-learn, I know their `cross_validate` function, which is extremely easy (and safe?) to use. What is the easiest way to do this with tsai? I've seen different examples of splitting the data for cross-validation in the tsai docs, so I assume it's easy once I understand the API better. Or maybe there is even an example somewhere in the tutorials that I've missed.
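For context, a minimal sketch of the scikit-learn pattern referred to above; the estimator and the synthetic data are placeholders, not part of the original question:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_validate

# toy data and estimator, just to illustrate the call
X, y = make_classification(n_samples=200, n_features=10, random_state=42)
clf = RandomForestClassifier(random_state=42)

# cross_validate handles splitting, fitting and scoring in one call
scores = cross_validate(clf, X, y, cv=5, scoring="accuracy")
print(scores["test_score"])
```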
-
I think the easiest way to get classic K-fold validation would be to create multiple splits of the data by calling `get_splits` with `n_splits=K` and `shuffle=False`, and then iterate over the resulting list, as in the sketch below. Learner-wise, I do not know if fastai has something to do it directly, although I doubt it, since it's not something that Jeremy applies often in practical deep learning.

Best!
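A minimal sketch of that loop, assuming a classification task and tsai's standard `TSDatasets`/`Learner` workflow; the toy data, the `InceptionTime` model, and the short training schedule are placeholders, and transform names such as `TSClassification` may differ slightly between tsai versions:

```python
import numpy as np
from tsai.all import *  # tsai convention: star-import also brings in the fastai pieces

# toy multivariate series: [samples, variables, time steps]
X = np.random.randn(100, 3, 50).astype(np.float32)
y = np.random.randint(0, 2, 100)

K = 5
# with n_splits=K, get_splits returns a list of K (train_idx, valid_idx) pairs
splits = get_splits(y, n_splits=K, shuffle=False)

fold_scores = []
for fold, split in enumerate(splits):
    # build datasets/dataloaders for this fold only
    dsets = TSDatasets(X, y, tfms=[None, TSClassification()], splits=split)
    dls = TSDataLoaders.from_dsets(dsets.train, dsets.valid, bs=64)
    # fresh model and Learner per fold, so no weights leak between folds
    model = InceptionTime(dls.vars, dls.c)
    learn = Learner(dls, model, metrics=accuracy)
    learn.fit_one_cycle(5, 1e-3)
    # validate() returns [valid_loss, metric, ...]; keep the accuracy
    fold_scores.append(learn.validate()[1])

print(f"mean accuracy over {K} folds: {np.mean(fold_scores):.3f}")
```

The per-fold metrics can then be averaged, which gives you roughly what scikit-learn's `cross_validate` reports in `test_score`.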