Log regression task metrics in multitask model #3648
base: master
Conversation
Are there metrics for regression-type models that we could put into scores, or do those perhaps go into the "classification report", e.g. Pearson and Spearman? In the regression models, results is written as:
eval_metrics = {
"loss": eval_loss.item(),
"mse": metric.mean_squared_error(),
"mae": metric.mean_absolute_error(),
"pearson": metric.pearsonr(),
"spearman": metric.spearmanr(),
}
So maybe we could either check for the base model class that defines evaluate, or just check for the keys. Then maybe we could write e.g. scores[(task_id, 'mse')]. What do you think?
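For illustration, here is a minimal sketch of that guarded write, assuming the metric key names shown above; collect_regression_scores and REGRESSION_METRIC_KEYS are hypothetical names for this sketch, not part of the flair API or the code in this PR.

# Minimal sketch (hypothetical helper) of writing per-task regression metrics
# into the shared scores dict, skipping any key a task's evaluate() did not return.

REGRESSION_METRIC_KEYS = ("mse", "mae", "pearson", "spearman")  # assumed key names

def collect_regression_scores(task_id: str, eval_metrics: dict, scores: dict) -> None:
    """Copy the regression metrics that are present into scores, keyed by (task_id, metric)."""
    for key in REGRESSION_METRIC_KEYS:
        if key in eval_metrics:  # guard: only write keys that actually exist
            scores[(task_id, key)] = eval_metrics[key]

# Example usage with an eval_metrics dict shaped like the one above:
scores = {}
eval_metrics = {"loss": 0.12, "mse": 0.34, "mae": 0.45, "pearson": 0.91, "spearman": 0.89}
collect_regression_scores("task_0", eval_metrics, scores)
# scores is now {('task_0', 'mse'): 0.34, ('task_0', 'mae'): 0.45, ...}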
Ok, I added mse.
Oh actually, can we just add all four of the metrics?
Done
@ntravis22 @MattGPT-ai Could you paste a script to test this PR?
Per-task metrics were recently added to multitask_model; however, we did not include any for regression tasks, and we did not check that the metric keys are present, which can throw an error. This PR addresses both of those concerns.
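For reference, the four regression metrics in question can be computed directly from predicted and gold values; the snippet below uses scipy/scikit-learn as an equivalent stand-in for flair's internal regression metric bookkeeping, not the actual code in this PR.

# Equivalent computation of the four regression metrics using scipy/scikit-learn.
from scipy.stats import pearsonr, spearmanr
from sklearn.metrics import mean_absolute_error, mean_squared_error

predictions = [2.5, 0.0, 2.1, 7.8]
gold = [3.0, -0.5, 2.0, 7.0]

eval_metrics = {
    "mse": mean_squared_error(gold, predictions),
    "mae": mean_absolute_error(gold, predictions),
    "pearson": pearsonr(gold, predictions)[0],    # correlation coefficient only
    "spearman": spearmanr(gold, predictions)[0],  # correlation coefficient only
}
print(eval_metrics)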