- python=3.9.18
- pytorch=2.0.1
- numpy=1.24.3
- pandas=1.5.3
- scikit-learn=1.3.0
- wandb=0.16.3
- tqdm
- tabulate
Thirteen datasets are used in our experiments: Wikipedia, Reddit, MOOC, LastFM, Enron, Social Evo., UCI, Flights, Can. Parl., US Legis., UN Trade, UN Vote, and Contact. The first four datasets are bipartite; the others contain nodes of a single type.
The original dynamic graph datasets come from Towards Better Evaluation for Dynamic Link Prediction and can be downloaded here.
Please first download them and put them in the DG_data
folder. Before preprocessing the datasets, create the necessary directories with the following command.
mkdir processed_data logs saved_results saved_models wandb
Then we can run preprocess_data/preprocess_data.py
to preprocess the datasets.
For example, to preprocess the Wikipedia dataset, we can run the following commands:
cd preprocess_data/
python preprocess_data.py --dataset_name wikipedia
We can also run the following commands to preprocess all the original datasets at once:
cd preprocess_data/
python preprocess_all_data.py
- Example of training TPNet on the Wikipedia dataset:
python train_link_prediction.py --prefix std --dataset_name wikipedia --model_name TPNet --num_runs 5 --gpu 0 --use_random_projection
- If you want to use the best model configurations to train TPNet on the Wikipedia dataset, run:
python train_link_prediction.py --prefix std --dataset_name wikipedia --model_name TPNet --num_runs 5 --gpu 0 --use_random_projection --load_best_configs
Three negative sampling strategies (random, historical, and inductive) can be used for model evaluation.
- Example of evaluating TPNet with the random negative sampling strategy on the Wikipedia dataset:
python evaluate_link_prediction.py --prefix std --dataset_name wikipedia --model_name TPNet --num_runs 5 --gpu 0 --use_random_projection --negative_sample_strategy random
- If you want to use the best model configurations to evaluate TPNet with the random negative sampling strategy on the Wikipedia dataset, run:
python evaluate_link_prediction.py --prefix std --dataset_name wikipedia --model_name TPNet --num_runs 5 --gpu 0 --use_random_projection --load_best_configs --negative_sample_strategy random
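The three strategies differ only in the pool that negative destinations are drawn from. The following simplified sketch illustrates that distinction; the function and variable names are hypothetical and this is not the repository's actual sampler:

```python
import numpy as np

def negative_destinations(strategy, src, num_nodes, train_dsts, test_dsts,
                          pos_dst, rng, k=5):
    """Sample k negative destinations for source node `src` (illustrative only).

    train_dsts / test_dsts map each source node to the set of destinations it
    interacted with in the train / test split (simplified bookkeeping).
    """
    if strategy == "random":
        # any node except the true destination, chosen uniformly
        pool = [v for v in range(num_nodes) if v != pos_dst]
    elif strategy == "historical":
        # destinations seen during training, excluding the current positive
        pool = sorted(train_dsts.get(src, set()) - {pos_dst})
    elif strategy == "inductive":
        # destinations seen only at test time, never during training
        pool = sorted(test_dsts.get(src, set())
                      - train_dsts.get(src, set()) - {pos_dst})
    else:
        raise ValueError(f"unknown strategy: {strategy}")
    return [pool[i] for i in rng.integers(0, len(pool), size=k)]

rng = np.random.default_rng(0)
train = {0: {1, 2, 3}}
test = {0: {2, 4, 5}}
hist = negative_destinations("historical", 0, 10, train, test, pos_dst=2, rng=rng)
ind = negative_destinations("inductive", 0, 10, train, test, pos_dst=2, rng=rng)
```

Historical negatives stress whether a model memorizes recurring edges, while inductive negatives probe generalization to interactions never observed during training.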
You can refer to the demo_on_matrix_updating.ipynb
file for more information about the update functions of different temporal walk matrices.
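For intuition, here is a minimal, hypothetical sketch of the underlying idea: maintaining time-decayed link counts implicitly through random projections, so they can be updated per edge without storing a dense matrix. All class and method names are illustrative; this is not the repository's implementation:

```python
import numpy as np

class DecayedLinkSketch:
    """Each node keeps a random projection of its time-decayed edge history."""

    def __init__(self, num_nodes, dim=4096, decay=0.1, seed=0):
        rng = np.random.default_rng(seed)
        self.R = rng.standard_normal((num_nodes, dim))  # fixed random node codes
        self.P = np.zeros((num_nodes, dim))             # per-node sketches
        self.last_t = np.zeros(num_nodes)               # lazy-decay timestamps
        self.decay = decay
        self.dim = dim

    def _apply_decay(self, u, t):
        # lazily decay u's sketch from its last update time up to time t
        self.P[u] *= np.exp(-self.decay * (t - self.last_t[u]))
        self.last_t[u] = t

    def add_edge(self, u, v, t):
        # record interaction (u, v) at time t: decay, then add v's code
        self._apply_decay(u, t)
        self.P[u] += self.R[v]

    def estimate(self, u, w, t):
        # ≈ sum over past edges (u, w, t') of exp(-decay * (t - t')),
        # since E[R[v] @ R[w]] / dim is 1 if v == w and 0 otherwise
        scale = np.exp(-self.decay * (t - self.last_t[u]))
        return scale * float(self.P[u] @ self.R[w]) / self.dim

sk = DecayedLinkSketch(num_nodes=10, decay=0.0)
for _ in range(3):
    sk.add_edge(0, 1, t=0.0)
est = sk.estimate(0, 1, t=0.0)  # ≈ 3 (three un-decayed edges to node 1)
```

The estimate is unbiased but noisy; a larger `dim` trades memory for lower variance, which is the usual random-projection trade-off.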
We are grateful to the authors of DyGLib, PINT, and NAT for making their code publicly available.