FreebaseQA [1] is a dataset created for open-domain factoid question answering (QA) over structured knowledge bases such as Freebase. The original release is split into a 20,358-question training set, a 3,994-question eval set, and a 3,996-question test set.
The dataset can be downloaded from the authors' release accompanying [1].
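For quick experiments, the splits can also be loaded with the Hugging Face `datasets` library. This is only a loading sketch: the hub id `freebase_qa` refers to a community mirror of the original JSON release (an assumption, not part of the official distribution), and field names should be checked against the copy you actually download.

```python
# Minimal loading sketch, assuming a Hugging Face hub mirror under the id
# "freebase_qa" (an assumption; older `datasets` versions or a local copy of the
# original JSON files may be required instead).
from datasets import load_dataset

freebase_qa = load_dataset("freebase_qa")

# Print the split sizes (expected roughly 20,358 / 3,994 / 3,996 questions).
for split_name, split in freebase_qa.items():
    print(f"{split_name}: {len(split)} questions")

# Inspect the schema of the loaded copy rather than assuming field names.
example = freebase_qa["train"][0]
print(sorted(example.keys()))
```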
Model / System | Year | Exact Match (%) | Accuracy (%) | Hits@1 (%) | Language | Reported by |
---|---|---|---|---|---|---|
DECAF (DPR + FiD-large) | 2022 | - | - | 79.0±0.6 | EN | Yu et al. |
DECAF (BM25 + FiD-large) | 2022 | - | - | 78.8±0.5 | EN | Yu et al. |
FILM | 2022 | - | - | 63.3 | EN | Yu et al. |
CBR-SUBG | 2022 | - | - | 52.1 | EN | Yu et al. |
FAE | 2022 | - | 63.30 | - | EN | Das et al. |
EAE | 2022 | - | 53.40 | - | EN | Das et al. |
CBR-SUBG | 2022 | - | 52.07 | - | EN | Das et al. |
KGQA-RR(Luke) | 2023 | - | 42.08 | - | EN | Hu et al. |
KGQA-RR(Kepler) | 2023 | - | 42.02 | - | EN | Hu et al. |
KGQA-RR(Roberta) | 2023 | - | 41.97 | - | EN | Hu et al. |
KGQA-RR(Bert) | 2023 | - | 41.48 | - | EN | Hu et al. |
KGQA-RR(Albert) | 2023 | - | 41.16 | - | EN | Hu et al. |
KGQA-RR(XLnet) | 2023 | - | 41.15 | - | EN | Hu et al. |
KGQA-RR(DistilBert) | 2023 | - | 40.88 | - | EN | Hu et al. |
KGQA-RR(DistilRoberta) | 2023 | - | 39.36 | - | EN | Hu et al. |
KGQA-CL(Luke) | 2023 | - | 40.62 | - | EN | Hu et al. |
KGQA-CL(Roberta) | 2023 | - | 40.40 | - | EN | Hu et al. |
KGQA-CL(Kepler) | 2023 | - | 40.29 | - | EN | Hu et al. |
KGQA-CL(Bert) | 2023 | - | 40.12 | - | EN | Hu et al. |
KGQA-CL(DistilBert) | 2023 | - | 39.84 | - | EN | Hu et al. |
KGQA-CL(Albert) | 2023 | - | 39.83 | - | EN | Hu et al. |
KGQA-CL(XLnet) | 2023 | - | 39.80 | - | EN | Hu et al. |
KGQA-CL(DistilRoberta) | 2023 | - | 39.43 | - | EN | Hu et al. |
KGQA-CL(GPT2) | 2023 | - | 39.03 | - | EN | Hu et al. |
BuboQA | 2022 | - | 38.25 | - | EN | Das et al. |
FOFE-net | 2019 | - | 37.00 | - | EN | Jiang et al. |
KBQA-Adapter | 2022 | - | 28.78 | - | EN | Das et al. |
KEQA | 2022 | - | 28.73 | - | EN | Das et al. |
HR-BiLSTM | 2022 | - | 28.40 | - | EN | Das et al. |
T5-XXL+WikiKG | 2022 | 47.25 | - | - | EN | Moiseev et al. |
T5-XXL+KELM | 2022 | 45.90 | - | - | EN | Moiseev et al. |
T5-XXL | 2022 | 45.02 | - | - | EN | Moiseev et al. |
T5-XXL+C4 | 2022 | 44.14 | - | - | EN | Moiseev et al. |
T5-large+WikiKG | 2022 | 35.29 | - | - | EN | Moiseev et al. |
T5-large+KELM | 2022 | 34.16 | - | - | EN | Moiseev et al. |
T5-large+C4 | 2022 | 34.01 | - | - | EN | Moiseev et al. |
T5-large | 2022 | 32.88 | - | - | EN | Moiseev et al. |
T5-base+WikiKG | 2022 | 28.38 | - | - | EN | Moiseev et al. |
T5-base+C4 | 2022 | 28.33 | - | - | EN | Moiseev et al. |
T5-base+KELM | 2022 | 28.15 | - | - | EN | Moiseev et al. |
T5-base | 2022 | 27.55 | - | - | EN | Moiseev et al. |
KGQA-RR(GPT2) | 2023 | - | 5.09 | - | EN | Hu et al. |
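The table mixes three percentage metrics reported by different papers, so scores in different columns are not directly comparable. As a rough guide only (each cited paper uses its own scoring script, and some match Freebase entity ids rather than surface strings), Exact Match asks whether the normalized predicted string equals a gold answer, and Hits@1 asks whether the top-ranked candidate answer is correct. A minimal sketch under those assumptions:

```python
import re
import string

def normalize(text: str) -> str:
    """SQuAD-style normalization: lowercase, drop articles and punctuation, collapse spaces."""
    text = text.lower()
    text = re.sub(r"\b(a|an|the)\b", " ", text)
    text = "".join(ch for ch in text if ch not in string.punctuation)
    return " ".join(text.split())

def exact_match(prediction: str, gold_answers: list[str]) -> bool:
    """True if the predicted string matches any gold answer after normalization."""
    return any(normalize(prediction) == normalize(gold) for gold in gold_answers)

def hits_at_1(ranked_candidates: list[str], gold_answers: list[str]) -> bool:
    """True if the top-ranked candidate answer is correct."""
    return bool(ranked_candidates) and exact_match(ranked_candidates[0], gold_answers)

# Toy usage: aggregate over (prediction, gold answers) pairs.
preds = [("The Beatles", ["Beatles", "The Beatles"]), ("Paris", ["London"])]
em = sum(exact_match(p, golds) for p, golds in preds) / len(preds)
print(f"Exact Match: {100 * em:.2f}%")  # 50.00% on this toy example

ranked = [["The Beatles", "The Rolling Stones"]]
h1 = sum(hits_at_1(r, ["The Beatles"]) for r in ranked) / len(ranked)
print(f"Hits@1: {100 * h1:.2f}%")  # 100.00% on this toy example
```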
[1] Jiang, Kelvin et al. “FreebaseQA: A New Factoid QA Data Set Matching Trivia-Style Question-Answer Pairs with Freebase.” NAACL (2019).