The RELX Dataset and Matching the Multilingual Blanks for Cross-Lingual Relation Classification, Findings of EMNLP 2020.
Paper: https://www.aclweb.org/anthology/2020.findings-emnlp.32/
KBP-37 (English): Download
| Language | RELX | RELX-Distant Sample | RELX-Distant |
|---|---|---|---|
| English | Download | Download | Download |
| French | Download | Download | Download |
| German | Download | Download | Download |
| Spanish | Download | Download | Download |
| Turkish | Download | Download | Download |
We pretrained the public Multilingual BERT (MBERT) checkpoint on 20 million sentence pairs from RELX-Distant (covering English, French, German, Spanish, and Turkish) with the Masked Language Model (MLM) and Matching the Multilingual Blanks (MTMB) objectives.
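For intuition, here is a minimal sketch of how MTMB training pairs can be built, in the spirit of Matching the Blanks (Baldini Soares et al., 2019): entity mentions in a cross-lingual sentence pair are replaced with a [BLANK] token with some probability, and a binary objective predicts whether the two sentences mention the same entity pair. The function name, blank probability, and example sentences below are illustrative assumptions, not the exact implementation.

```python
import random

BLANK = "[BLANK]"

def blank_entities(tokens, e1_span, e2_span, p_blank=0.7):
    """Replace each entity span with [BLANK] with probability p_blank
    (the probability value here is illustrative)."""
    out = list(tokens)
    # Process the rightmost span first so earlier indices stay valid
    for start, end in sorted([e1_span, e2_span], reverse=True):
        if random.random() < p_blank:
            out[start:end] = [BLANK]
    return out

# Hypothetical English/French pair mentioning the same entities
en = "Marie Curie was born in Warsaw .".split()
fr = "Marie Curie est née à Varsovie .".split()
# (start, end) token indices of the two entity mentions in each sentence
pair = (blank_entities(en, (0, 2), (5, 6)),
        blank_entities(fr, (0, 2), (5, 6)))
label = 1  # 1: both sentences mention the same entity pair, 0: otherwise
print(pair, label)
```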
You can load the MTMB-pretrained MBERT model from the HuggingFace Model Hub:

```python
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("akoksal/MTMB")
model = AutoModel.from_pretrained("akoksal/MTMB")
```
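A minimal usage sketch follows; the input sentence and the mean-pooling choice are illustrative, treating the checkpoint as a generic encoder:

```python
import torch

# Encode a sentence and pool token embeddings into a single vector
inputs = tokenizer("Ankara is the capital of Turkey.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
sentence_vec = outputs.last_hidden_state.mean(dim=1)  # (1, hidden_size)
print(sentence_vec.shape)
```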
Check out finetune.py for details on fine-tuning on KBP-37 and evaluating on RELX and the KBP-37 test set.
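As a rough illustration of that setup (a sketch, not the repository's finetune.py), a classification head can be stacked on the encoder. KBP-37 has 37 classes (18 directed relations plus no_relation); the entity-marker text and label index below are hypothetical:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("akoksal/MTMB")
model = AutoModelForSequenceClassification.from_pretrained(
    "akoksal/MTMB", num_labels=37  # 18 directed relations + no_relation
)

# Hypothetical KBP-37-style example with <e1>/<e2> entity markers
text = "<e1> Thomas Burgess </e1> emigrated to <e2> New Zealand </e2> ."
enc = tokenizer(text, return_tensors="pt", truncation=True)
label = torch.tensor([0])  # gold relation index (illustrative)

out = model(**enc, labels=label)
out.loss.backward()  # wire into an optimizer or the HF Trainer for full fine-tuning
```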
| Model | KBP-37 Dev | KBP-37 Test | RELX-EN | RELX-FR | RELX-DE | RELX-ES | RELX-TR |
|---|---|---|---|---|---|---|---|
| MBERT | 65.5 | 64.9 | 61.8 | 58.3 | 57.5 | 57.9 | 55.8 |
| MBERT+MTMB | 66.8 | 66.5 | 63.6 | 59.9 | 59.9 | 62.4 | 56.2 |
F1 scores averaged over 10 runs. See the paper for more details.
- Please cite the following paper if you use any part of this work:
```bibtex
@inproceedings{koksal-ozgur-2020-relx,
    title = "The {RELX} Dataset and Matching the Multilingual Blanks for Cross-Lingual Relation Classification",
    author = {K{\"o}ksal, Abdullatif and
      {\"O}zg{\"u}r, Arzucan},
    booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2020",
    month = nov,
    year = "2020",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://www.aclweb.org/anthology/2020.findings-emnlp.32",
    doi = "10.18653/v1/2020.findings-emnlp.32",
    pages = "340--350",
}
```