[FEATURE] Allow running evaluation on validation set during training or right after training is done #613
Comments
Couple of things:
@tqtg, if we look closely at this function: MMNR uses different history matrices for the train data, validation data, and test data. My idea is to break down the evaluation pipeline into train, validation, and test stages to reduce the redundancy of the current implementation (currently, cornac allows manipulating `val_set` along with `train_set` inside …).
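The split-specific "history matrices" idea above can be sketched as a single evaluation routine that is reused across splits, with the per-split set of items to exclude passed in. This is only an illustrative sketch (the function names, the `score_fn` callable, and Recall@k as the metric are my assumptions, not Cornac's actual API):

```python
def recall_at_k(ranked_items, relevant_items, k):
    """Fraction of relevant items that appear in the top-k ranking."""
    if not relevant_items:
        return 0.0
    hits = len(set(ranked_items[:k]) & set(relevant_items))
    return hits / len(relevant_items)


def evaluate_split(score_fn, split, history, all_items, k=10):
    """Rank unseen items for each user and average Recall@k over users.

    `history[u]` holds the items to exclude from ranking for user `u`,
    e.g. train items when evaluating on validation, and
    train + validation items when evaluating on test.
    """
    total = 0.0
    for user, relevant in split.items():
        seen = history.get(user, set())
        candidates = [i for i in all_items if i not in seen]
        ranked = sorted(candidates, key=lambda i: score_fn(user, i), reverse=True)
        total += recall_at_k(ranked, relevant, k)
    return total / max(len(split), 1)
```

With this shape, evaluating on validation versus test differs only in which `history` is passed, which is the redundancy reduction being proposed.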
We provide models with both …
@tqtg, let's say we evaluate …
Description
This is a breaking change. In many training/validation/test pipelines, evaluation on the validation data happens during training (for monitoring and model selection).
The current version of cornac evaluates the validation set in the same way as the test set.
Expected behavior with the suggested feature
Evaluate the model on the validation set every `n` epochs and report the validation results.
Other Comments
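The expected behavior described above can be sketched as a training loop that runs validation every few epochs. All names here (`fit`, `val_every`, the `train_one_epoch` and `evaluate` callables) are hypothetical, a minimal illustration of the requested feature rather than Cornac's actual interface:

```python
def fit(model, train_set, val_set=None, n_epochs=20, val_every=5,
        train_one_epoch=None, evaluate=None):
    """Train for n_epochs; report validation metrics every `val_every` epochs.

    Returns a list of (epoch, validation_score) pairs, which a caller could
    use for monitoring or to select the best checkpoint.
    """
    history = []
    for epoch in range(1, n_epochs + 1):
        train_one_epoch(model, train_set)
        if val_set is not None and epoch % val_every == 0:
            score = evaluate(model, val_set)
            history.append((epoch, score))
            print(f"epoch {epoch}: validation score = {score:.4f}")
    return history
```

The key point is that validation happens inside the training loop, rather than as a separate post-training pass like the test evaluation.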