This repository provides the evaluation code of LP-BERT for the link prediction task. The idea of LP-BERT is described in the following articles:
Multi-task Pre-training Language Model for Semantic Network Completion (https://dl.acm.org/doi/abs/10.1145/3627704)
LP-BERT: Multi-task Pre-training Knowledge Graph BERT for Link Prediction (https://arxiv.org/pdf/xx.pdf)
With only BERT-base-scale parameters, LP-BERT achieves top-1 performance on both the WN18RR and UMLS datasets.
The code is implemented with PyTorch. Requirements:
1. python=3.7
2. pytorch=1.9.0+cu102
3. transformers=4.2.1
4. numpy=1.17.2
5. pandas=0.25.1
6. sklearn=0.21.3
7. tqdm=4.52.0
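If convenient, the pinned versions above can be collected into a requirements.txt for pip. This is a sketch, not a file shipped with the repository; note that the sklearn package installs under the pip name scikit-learn, and the CUDA-tagged torch build (1.9.0+cu102) must come from the PyTorch wheel index:

```
torch==1.9.0
transformers==4.2.1
numpy==1.17.2
pandas==0.25.1
scikit-learn==0.21.3
tqdm==4.52.0
```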
Data preparation:
python make_concat_data.py

UMLS:
CUDA_VISIBLE_DEVICES=0,1,2,3 python run_umls_pretrain.py
CUDA_VISIBLE_DEVICES=0 python run_umls_finetune.py

WN18RR:
CUDA_VISIBLE_DEVICES=0,1,2 python run_wn18rr_pretrain.py
CUDA_VISIBLE_DEVICES=0 python run_wn18rr_finetune.py

FB15k-237:
CUDA_VISIBLE_DEVICES=0,1,2 python run_fb15k237_pretrain.py
CUDA_VISIBLE_DEVICES=0 python run_fb15k237_finetune.py
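The CUDA_VISIBLE_DEVICES prefix on each command controls which GPUs the script may use: the pretraining runs above are given several GPUs, while finetuning runs on a single one. A minimal sketch of how the variable behaves (pure Python, no GPU required; the device list is an assumption for illustration):

```python
import os

# CUDA_VISIBLE_DEVICES restricts which physical GPUs a process can see;
# inside the process, the visible devices are renumbered starting from 0,
# so "0,1,2,3" exposes four GPUs addressed as cuda:0 .. cuda:3.
os.environ["CUDA_VISIBLE_DEVICES"] = "0,1,2,3"

visible = os.environ["CUDA_VISIBLE_DEVICES"].split(",")
print(len(visible))  # number of GPUs a pretraining run launched this way would see
```

Setting the variable on the command line, as in the examples above, scopes it to that single process, which is handy when sharing a multi-GPU machine.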