For some reason, the final CIDEr/METEOR/BLEU scores on the validation set are not printed at the end of each epoch when I train. Here's what my output looks like:
...
Cider scores: 0.05055747138711029
Cider scores: 0.23762433183792392
Cider scores: 0.2016461526656507
Cider scores: 0.09484974868502609
Cider scores: 0.06714491373311673
Read data: 1.02520489693
iter 42201 (epoch 28), avg_reward = -0.011, data_time = 0.014, time/batch = 1.025
Cider scores: 0.12018177868980709
Cider scores: 0.1537472398105469
Cider scores: 0.02683920305320616
Cider scores: 0.17457595857620622
Cider scores: 0.13120042844450647
...
Cider scores: 0.14903162980352952
Cider scores: 0.1480045371928352
Cider scores: 0.1597950489871271
Cider scores: 0.0681437261069909
Cider scores: 0.05616059625852486
Read data: 1.14169120789
iter 42401 (epoch 29), avg_reward = 0.016, data_time = 0.014, time/batch = 1.142
Cider scores: 0.13915968413271335
Cider scores: 0.2236347322797569
Cider scores: 0.11668309405737172
Cider scores: 0.3331930390082059
Cider scores: 0.0617720056876462
...
Just a bunch of Cider scores per batch, but no CIDEr/METEOR/BLEU between epochs. Is there any way to change this so that the final CIDEr/METEOR/BLEU scores on the validation set are printed once at the end of each epoch? Or is there a way to compute the final CIDEr/METEOR/BLEU scores on the validation set from the final checkpoint generated by training?
So eval_results/xe_val.json is generated at some point (I'm not sure when and by which file), and this is exactly what I was looking for, but for post-sc training (so eval_results/sc_val.json). Any ideas on how this could be generated?
I'm sorry for bombarding you with all of these questions, I'm a beginner fascinated with your work. I really appreciate your patience.
No need to apologize for asking questions, it's great -- thanks for your interest in the project!
Even though the output file is named eval_results/xe_val.json, it will contain the results for post-sc training if you pass the appropriate flags to eval.py. Specifically, pass the post-sc model checkpoint to the --model flag, set --block_trigrams 1, and use the same --alpha that was used in training. Hope this helps!
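For concreteness, a minimal invocation might look like the sketch below. The checkpoint path and the alpha value are placeholders, not paths from the repo; substitute your own post-sc checkpoint, whatever --alpha you actually trained with, and the same data/evaluation flags you would normally pass to eval.py.

# placeholder path and value -- use your own post-sc checkpoint and training alpha
python eval.py --model save/model_sc-best.pth --block_trigrams 1 --alpha 0.9

The metrics written to eval_results/xe_val.json will then correspond to the post-sc checkpoint, even though the filename still says xe.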