This repository contains the source code for Fine-grained Fact Verification with Kernel Graph Attention Network (KGAT).
More information about the FEVER 1.0 shared task can be found on this website.
Fact Extraction and Verification with SCIFACT
The shared task introduces scientific claim verification to help scientists, clinicians, and the public verify the credibility of claims against the scientific literature, especially claims related to COVID-19.
- Reproduce Our Results
- About SCIFACT Dataset
- Our Paper
- Python 3.X
- fever_score
- PyTorch
- pytorch_pretrained_bert
- transformers
- All data and BERT-based checkpoints can be found at Ali Drive.
- RoBERTa-based models and checkpoints can be found at Ali Drive.
- BERT-based ranker. Go to the `retrieval_model` folder for more information.
- Pre-train BERT with claim-evidence pairs. Go to the `pretrain` folder for more information.
- Our KGAT model. Go to the `kgat` folder for more information. (A simplified kernel-pooling sketch follows this list.)
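For orientation, the core operation behind KGAT's kernel attention is Gaussian kernel pooling over claim-evidence token similarities: each kernel soft-counts how many token pairs match at a given similarity level, and the pooled features drive evidence aggregation and readout. The snippet below is a minimal, simplified sketch of that pooling, not the repository's implementation; the function name, kernel settings, and masking are illustrative, and the actual code in the `kgat` folder may differ.

```python
import torch


def gaussian_kernel_pooling(sim, mus, sigmas, evidence_mask=None):
    """Pool a claim-evidence similarity matrix into kernel match features.

    sim:           [batch, claim_len, evidence_len] cosine similarities
    mus, sigmas:   kernel means and widths (illustrative hyperparameters)
    evidence_mask: optional [batch, 1, evidence_len] padding mask
    Returns:       [batch, num_kernels] log-kernel features
    """
    features = []
    for mu, sigma in zip(mus, sigmas):
        # Soft-count token pairs whose similarity falls near the kernel mean mu.
        kernel = torch.exp(-((sim - mu) ** 2) / (2 * sigma ** 2))
        if evidence_mask is not None:
            kernel = kernel * evidence_mask
        # Sum over evidence tokens, take log to dampen frequent matches,
        # then sum over claim tokens to get one feature per kernel.
        per_claim_token = torch.log(torch.clamp(kernel.sum(dim=-1), min=1e-10))
        features.append(per_claim_token.sum(dim=-1))
    return torch.stack(features, dim=-1)


if __name__ == "__main__":
    # Illustrative kernels: one exact-match kernel plus evenly spaced soft-match kernels.
    mus = [1.0, 0.9, 0.7, 0.5, 0.3, 0.1, -0.1, -0.3, -0.5, -0.7, -0.9]
    sigmas = [1e-3] + [0.1] * (len(mus) - 1)
    sim = torch.rand(2, 8, 30) * 2 - 1  # fake cosine similarities in [-1, 1]
    print(gaussian_kernel_pooling(sim, mus, sigmas).shape)  # torch.Size([2, 11])
```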
All results are reported on the Codalab leaderboard; a short sketch of how the FEVER Score is computed follows the tables.
| User | Pre-train Model | Label Accuracy | FEVER Score |
|---|---|---|---|
| GEAR_single | BERT (Base) | 0.7160 | 0.6710 |
| a.soleimani.b | BERT (Large) | 0.7186 | 0.6966 |
| KGAT | RoBERTa (Large) | 0.7407 | 0.7038 |
KGAT performance with different pre-trained language models.
| Pre-train Model | Label Accuracy | FEVER Score |
|---|---|---|
| BERT (Base) | 0.7281 | 0.6940 |
| BERT (Large) | 0.7361 | 0.7024 |
| RoBERTa (Large) | 0.7407 | 0.7038 |
| CorefBERT (RoBERTa Large) | 0.7596 | 0.7230 |
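As a reminder, Label Accuracy only checks the predicted label, while the FEVER Score also requires that a complete gold evidence set be retrieved among the top five predicted sentences. Below is a minimal sketch of computing both metrics with the `fever_score` package listed in the requirements; the import path and instance format are assumed to follow the official FEVER scorer, and the single instance is made up for illustration.

```python
from fever.scorer import fever_score

# One made-up instance: gold annotations plus system output.
# Gold evidence is a list of evidence sets, each set holding
# [annotation_id, evidence_id, page, sentence_id] entries;
# predicted evidence is a list of [page, sentence_id] pairs.
predictions = [
    {
        "label": "SUPPORTS",
        "evidence": [[[None, None, "Some_Wikipedia_Page", 0]]],
        "predicted_label": "SUPPORTS",
        "predicted_evidence": [["Some_Wikipedia_Page", 0]],
    }
]

strict_score, label_accuracy, precision, recall, f1 = fever_score(predictions)
print("FEVER Score:", strict_score, "Label Accuracy:", label_accuracy)
```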
@inproceedings{liu2020kernel,
  title = {Fine-grained Fact Verification with Kernel Graph Attention Network},
  author = {Liu, Zhenghao and Xiong, Chenyan and Sun, Maosong and Liu, Zhiyuan},
  booktitle = {Proceedings of ACL},
  year = {2020}
}
@inproceedings{liu2020adapting,
  title = {Adapting Open Domain Fact Extraction and Verification to COVID-FACT through In-Domain Language Modeling},
  author = {Liu, Zhenghao and Xiong, Chenyan and Dai, Zhuyun and Sun, Si and Sun, Maosong and Liu, Zhiyuan},
  booktitle = {Findings of the Association for Computational Linguistics: EMNLP 2020},
  year = {2020}
}
If you have questions, suggestions, or bug reports, please email: