This repo is for our ACL 2023 paper "Revisiting the Gold Standard: Grounding Summarization Evaluation with Robust Human Evaluation". It provides our RoSE benchmark and the scripts for our meta-evaluation.
Please visit our demo page for an interactive overview of this project.
RoSE can be downloaded with Hugging Face Datasets under `Salesforce/rose`.
We provide a notebook, `demo.ipynb`, for basic usage of our dataset.
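For reference, loading one part of the benchmark looks roughly like the sketch below. It assumes each part's HF name (listed in the table that follows) is exposed as a dataset configuration; `demo.ipynb` shows the exact access pattern.

```python
# Minimal sketch of loading RoSE with Hugging Face Datasets.
# Assumption: each part's HF name (see the table below) is a dataset
# configuration; check demo.ipynb for the exact access pattern.
from datasets import load_dataset

dataset = load_dataset("Salesforce/rose", "cnndm_test")
print(dataset)  # inspect the available splits and fields
```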
The RoSE benchmark contains system outputs annotated with our ACU protocol. It consists of four parts:
- CNNDM, test set annotations
- CNNDM, validation set annotations
- XSum, test set annotations
- SamSum, test set annotations
We summarize the statistics below.
| Dataset | Split | #Doc. | #Sys. | #Total Summ. | HF Name |
|---|---|---|---|---|---|
| CNNDM | Test | 500 | 12 | 6000 | `cnndm_test` |
| CNNDM | Validation | 1000 | 8 | 8000 | `cnndm_validation` |
| XSum | Test | 500 | 8 | 4000 | `xsum` |
| SamSum | Test | 500 | 8 | 4000 | `samsum` |
We have system outputs annotated with four different human evaluation protocols in total. We summarize them below.
| Protocol | w/ Input Document | w/ Reference Summary | Fine-grained |
|---|---|---|---|
| Prior | ✗ | ✗ | ✗ |
| Ref-free | ✓ | ✗ | ✗ |
| Ref-based | ✗ | ✓ | ✗ |
| ACU | ✗ | ✓ | ✓ |
We annotated two sets of system summaries.
- Summaries of 12 fine-tuned systems. The Hugging Face data split name is `cnndm_protocol`.
- Zero-shot summaries from large language models (GPT-3, T0), together with summaries from BRIO and BART. The Hugging Face data split name is `cnndm_protocol_gpt3`.
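As a rough illustration of how these annotations might be consumed, the sketch below averages one protocol's scores per system. The field names `system_id` and `ref_free_score` are hypothetical stand-ins, not the dataset's documented schema; see `demo.ipynb` for the actual column names.

```python
# Hypothetical sketch: average one protocol's scores per system.
# "system_id" and "ref_free_score" are illustrative field names,
# NOT the dataset's documented schema; see demo.ipynb for the real ones.
from collections import defaultdict
from datasets import load_dataset

data = load_dataset("Salesforce/rose", "cnndm_protocol")
split = next(iter(data.values()))  # take the first available split

per_system = defaultdict(list)
for example in split:
    per_system[example["system_id"]].append(example["ref_free_score"])

for system, scores in sorted(per_system.items()):
    print(f"{system}: {sum(scores) / len(scores):.3f}")
```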
We provide scripts for statistical analysis of the meta-evaluation results in our paper:
- functions for computing correlation coefficients
- functions for conducting statistical tests, including the bootstrap, the permutation test, and confidence-interval estimation
- functions for power analysis; note that power analysis can be time-consuming, so please maximize the number of processes to speed up the computation
- a demo script for utilizing the functions in the files above
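For intuition, here is a small self-contained sketch of the kind of statistics these scripts compute: a system-level correlation, a bootstrap confidence interval, and a paired permutation test comparing two metrics. The function names and toy numbers are illustrative only; they are not the repo's API.

```python
# Illustrative sketch of meta-evaluation statistics (NOT the repo's API):
# system-level Kendall correlation, a percentile-bootstrap confidence
# interval, and a paired permutation test comparing two automatic metrics.
import numpy as np
from scipy import stats

def kendall(human, metric):
    """System-level Kendall's tau between human and metric score vectors."""
    tau, _ = stats.kendalltau(human, metric)
    return tau

def bootstrap_ci(human, metric, n_resamples=1000, alpha=0.05, seed=0):
    """Percentile bootstrap CI for the correlation (resampling systems)."""
    rng = np.random.default_rng(seed)
    n = len(human)
    taus = [kendall(human[idx], metric[idx])
            for idx in (rng.integers(0, n, size=n) for _ in range(n_resamples))]
    return np.percentile(taus, [100 * alpha / 2, 100 * (1 - alpha / 2)])

def permutation_test(human, metric_a, metric_b, n_permutations=1000, seed=0):
    """Two-sided paired permutation test for tau(A) - tau(B):
    randomly swap the two metrics' scores per system under the null."""
    rng = np.random.default_rng(seed)
    observed = kendall(human, metric_a) - kendall(human, metric_b)
    count = 0
    for _ in range(n_permutations):
        swap = rng.random(len(human)) < 0.5
        a = np.where(swap, metric_b, metric_a)
        b = np.where(swap, metric_a, metric_b)
        if abs(kendall(human, a) - kendall(human, b)) >= abs(observed):
            count += 1
    return count / n_permutations

# Toy numbers for illustration only (8 systems).
human = np.array([0.31, 0.42, 0.35, 0.50, 0.28, 0.47, 0.39, 0.44])
metric_a = np.array([0.62, 0.71, 0.66, 0.80, 0.60, 0.77, 0.69, 0.74])
metric_b = np.array([0.58, 0.70, 0.69, 0.75, 0.63, 0.72, 0.71, 0.68])

print("tau(A):", kendall(human, metric_a))
print("95% CI:", bootstrap_ci(human, metric_a))
print("p-value A vs. B:", permutation_test(human, metric_a, metric_b))
```

Power analysis repeats this kind of resampling many times across candidate sample sizes, which is why it is time-consuming and benefits from maximizing the number of worker processes.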
Please cite our paper if you use RoSE in your work:
@inproceedings{Liu2022RevisitingTG,
title={Revisiting the Gold Standard: Grounding Summarization Evaluation with Robust Human Evaluation},
author={Yixin Liu and Alexander R. Fabbri and Pengfei Liu and Yilun Zhao and Linyong Nan and Ruilin Han and Simeng Han and Shafiq R. Joty and Chien-Sheng Wu and Caiming Xiong and Dragomir R. Radev},
  booktitle={Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics},
year={2023},
}