Thank you for this helpful notebook. Can you explain why you return the best CV score in the objective function? Isn't that an overly optimistic (upwardly biased) measure of performance?
I believe the results from the n_fold cross-validation folds should be averaged and returned as the score/loss. Am I missing something?
You are correct that the average metric across all the folds should be used as the measure of performance. If you look at the code in the objective function, I do return the average ROC AUC over the folds:
# 'auc-mean' is the fold-averaged AUC at each boosting round;
# the max picks the best round (early stopping), not the best fold
best_score = np.max(cv_results['auc-mean'])
loss = 1 - best_score
auc-mean is the average AUC over the cross-validation folds, computed at each boosting iteration, so np.max selects the best number of boosting rounds rather than the best single fold.
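For anyone landing on this thread with the same question, here is a minimal sketch of the kind of objective function being discussed, assuming LightGBM's lgb.cv and Hyperopt's conventions; the params and train_set arguments and the seed value are hypothetical stand-ins, not the notebook's exact code:

```python
import numpy as np
import lightgbm as lgb
from hyperopt import STATUS_OK

def objective(params, train_set, n_folds=5):
    """Hyperopt objective: minimize 1 - (best fold-averaged CV AUC)."""
    # lgb.cv trains on n_folds folds and, at every boosting round,
    # records the AUC averaged across all folds.
    cv_results = lgb.cv(
        params,
        train_set,
        num_boost_round=10000,
        nfold=n_folds,
        metrics='auc',
        seed=50,
        # older LightGBM versions used early_stopping_rounds=100 instead
        callbacks=[lgb.early_stopping(100)],
    )

    # 'auc-mean' is a list over boosting rounds (newer LightGBM
    # versions may prefix the key, e.g. 'valid auc-mean'). The max is
    # taken over rounds, not folds, so no single fold is cherry-picked.
    best_score = np.max(cv_results['auc-mean'])
    loss = 1 - best_score
    return {'loss': loss, 'params': params, 'status': STATUS_OK}
```

The fold averaging happens inside lgb.cv itself; the max only chooses the effective early-stopping round, which is why the returned score is not the optimistic "best fold" value the question was worried about.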