Add metrics to compute fluency of references #129
Hey, you can check out the "Naturalness" metric as a substitute for perplexity; it correlates better with human judgment when measuring the fluency or coherence of a sentence. https://github.com/passeul/style-transfer-model-evaluation
Thanks! It seems to be just the tool I need. Yes, it would be useful to integrate it as a feature of this repo.
I tested the naturalness metric in https://github.com/passeul/style-transfer-model-evaluation and it works quite nicely. It should be useful for text style transfer and summarization tasks, or even plain generation tasks, to check output quality. Integrating it into this repo should be straightforward as well. I'll link a PR to this issue and merge it for others to use :)
I have tested the new feature on my branch and it is working. This is the pull request for the issue. Feel free to clone my branch and use it in case it takes time for the pull request to get merged.
Thanks a lot!
Apart from the content preservation metrics that this repo contains, it is also common to check the fluency/coherence of the generated outputs. Forward PPL computed with a pretrained language model is generally used to measure the fluency of outputs, but there seem to be other newly introduced metrics as well. I was wondering if anyone knows of tools to compute the fluency of sentences apart from PPL? It would be a useful feature to add to this repository as well. Any help is much appreciated!
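For reference, forward PPL is just the exponentiated average negative log-likelihood of the tokens under a language model. Here is a minimal sketch of that computation; the `token_log_probs` input and the toy uniform model are hypothetical stand-ins for the per-token log-probabilities a real pretrained LM (e.g. GPT-2 via `transformers`) would produce:

```python
import math

def perplexity(token_log_probs):
    """Forward perplexity: exp of the average negative log-likelihood
    of a sequence, given per-token log-probabilities from some LM."""
    nll = -sum(token_log_probs) / len(token_log_probs)
    return math.exp(nll)

# Toy example: a hypothetical LM that assigns probability 0.25 to every
# token. The perplexity of such a uniform 1-in-4 model is exactly 4.
log_probs = [math.log(0.25)] * 8
print(perplexity(log_probs))  # -> 4.0
```

Lower PPL means the model finds the sentence more predictable, which is why it is commonly used as a fluency proxy, though as noted above it does not always agree with human judgments.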