Calculating METEOR CIDEr BLEU #4

Open
arjung128 opened this issue Jun 3, 2019 · 3 comments

Comments

@arjung128

For some reason, the final CIDEr/METEOR/BLEU scores on the validation set are not printed at the end of each epoch when I train. Here's what my output looks like when I train:

...
Cider scores: 0.05055747138711029
Cider scores: 0.23762433183792392
Cider scores: 0.2016461526656507
Cider scores: 0.09484974868502609
Cider scores: 0.06714491373311673
Read data: 1.02520489693
iter 42201 (epoch 28), avg_reward = -0.011, data_time = 0.014, time/batch = 1.025
Cider scores: 0.12018177868980709
Cider scores: 0.1537472398105469
Cider scores: 0.02683920305320616
Cider scores: 0.17457595857620622
Cider scores: 0.13120042844450647
...
Cider scores: 0.14903162980352952
Cider scores: 0.1480045371928352
Cider scores: 0.1597950489871271
Cider scores: 0.0681437261069909
Cider scores: 0.05616059625852486
Read data: 1.14169120789
iter 42401 (epoch 29), avg_reward = 0.016, data_time = 0.014, time/batch = 1.142
Cider scores: 0.13915968413271335
Cider scores: 0.2236347322797569
Cider scores: 0.11668309405737172
Cider scores: 0.3331930390082059
Cider scores: 0.0617720056876462
...

Just a bunch of Cider scores per batch, but no CIDEr/METEOR/BLEU between epochs. Is there any way to change this so that the final CIDEr/METEOR/BLEU scores on the validation set are printed once at the end of each epoch? Or is there a way to calculate the final CIDEr/METEOR/BLEU scores on the validation set from the final checkpoint generated by training?

@lukemelas
Owner

Yes, you can use eval.py to get the validation scores post-training.

@arjung128
Author

Thanks for your reply!

So eval_results/xe_val.json is generated at some point (I'm not sure when or by which script), and this is exactly what I was looking for, but I need the equivalent for post-sc training (i.e., eval_results/sc_val.json). Any idea how that could be generated?

I'm sorry for bombarding you with all of these questions; I'm a beginner fascinated by your work. I really appreciate your patience.

@lukemelas
Owner

No need to apologize for asking questions, it's great -- thanks for your interest in the project!

Even though the output file is named eval_results/xe_val.json, it will contain the results for post-sc training if you pass the appropriate flags to eval.py. Specifically, pass the post-sc model to the --model flag, set --block_trigrams 1, and set the same --alpha as was used in training. Hope this helps!
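
Something along these lines (a minimal sketch; the checkpoint path and alpha value below are just placeholders, and depending on the repo version eval.py may also need arguments pointing at your preprocessed data/infos files):

```bash
# Evaluate the post-sc checkpoint on the validation set.
# "log_sc/model-best.pth" and the alpha value are placeholders --
# substitute the checkpoint path and the alpha used in your own training run.
python eval.py \
    --model log_sc/model-best.pth \
    --block_trigrams 1 \
    --alpha 0.9
```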
