Hi there! First of all, nice work on the library — it looks great.
I am in the special situation where I have a large dataset for pretraining but without any labels. I would still like to run online linear and k-NN evaluation on multiple smaller, related labelled datasets to measure transfer performance. This is different from the standard benchmarks, where we train and evaluate on splits of the same (usually labelled) dataset, but I think it should be a very common problem. It seems like this is not possible in solo-learn at the moment, is this correct? If yes, what changes would be necessary?
Thank you.
nsmdgr changed the title from "Separate train and validation datasets with and without labels" to "Separate train and validation datasets without and with labels" on Sep 27, 2021.
I agree, this makes a lot of sense, and we will try to implement it soon. It is quite easy to do: you just need to pass a dictionary with the two train loaders to trainer.fit() and then unpack the dict in the training_step of the BaseModel and BaseMomentumModel (see the sketch below).
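A minimal sketch of that idea in plain PyTorch Lightning, not solo-learn's actual BaseModel: Lightning accepts a dict of train loaders in trainer.fit(), combines them (cycling the shorter one by default), and hands training_step a dict of batches keyed the same way. The dataset names, the toy backbone/classifier, and the placeholder SSL loss are all illustrative assumptions.

```python
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset
import pytorch_lightning as pl

class PretrainWithOnlineEval(pl.LightningModule):
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(32, 16), nn.ReLU())
        self.classifier = nn.Linear(16, 10)  # online linear-eval head

    def training_step(self, batch, batch_idx):
        # `batch` is a dict because trainer.fit() received a dict of loaders
        (unlabelled,) = batch["pretrain"]   # large unlabelled pretraining batch
        images, labels = batch["eval"]      # smaller labelled transfer batch
        feats = self.backbone(unlabelled)
        ssl_loss = feats.pow(2).mean()      # placeholder for the real SSL objective
        # Online linear eval: the classifier trains on detached features,
        # so its gradients never flow back into the backbone.
        logits = self.classifier(self.backbone(images).detach())
        eval_loss = nn.functional.cross_entropy(logits, labels)
        self.log_dict({"ssl_loss": ssl_loss, "eval_loss": eval_loss})
        return ssl_loss + eval_loss

    def configure_optimizers(self):
        return torch.optim.SGD(self.parameters(), lr=0.01)

# Dummy tensors standing in for the real datasets.
pretrain_ds = TensorDataset(torch.randn(256, 32))
eval_ds = TensorDataset(torch.randn(64, 32), torch.randint(0, 10, (64,)))

loaders = {
    "pretrain": DataLoader(pretrain_ds, batch_size=32),
    "eval": DataLoader(eval_ds, batch_size=32),
}

trainer = pl.Trainer(max_epochs=1)
trainer.fit(PretrainWithOnlineEval(), loaders)
```

For several labelled eval datasets, one could presumably add more entries to the dict and keep one linear head per dataset; a k-NN eval would instead accumulate the detached features and labels per loader and compute accuracy at epoch end.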