Hi, thanks for providing the awesome code of GPT-GNN.
I am trying to run your code on the OAG_CS dataset, but I am not sure if I have it set up correctly. In the paper, the reported pre-training time is about 10-12 hours for 400 epochs, while it took much longer on my side. Could you specify the computational resources required? For example, how many CPUs do I need to achieve a pre-training time of about 10 hours? I have attached the output of my run below.
From your log, it seems the bottleneck is the sampling (which is conducted on the CPU). My previous setting was 8x Intel Xeon E5-2698 v4 CPUs @ 2.20 GHz (but that machine was also running other experiments, so treat those numbers as a rough reference).
My current implementation of the sampling is not very efficient; I'll update it later to improve throughput.
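For reference, one common way to hide CPU-bound sampling cost is to pre-fetch batches in a worker pool so the GPU never sits idle waiting on a single-threaded sampler. Below is a minimal sketch of that pattern using Python's `multiprocessing.Pool`; `sample_subgraph` and `train_step` are hypothetical stand-ins for the repo's actual sampling and training routines, not the GPT-GNN API.

```python
from multiprocessing import Pool
import random

def sample_subgraph(seed):
    """Stand-in for a CPU-bound subgraph sampler; returns a fake batch."""
    rng = random.Random(seed)
    return [rng.randint(0, 10_000) for _ in range(128)]  # pretend node ids

def train_step(batch):
    """Stand-in for one GPU training step on a sampled subgraph."""
    pass

if __name__ == "__main__":
    n_workers = 8  # roughly match the number of physical CPU cores
    with Pool(n_workers) as pool:
        # Launch sampling for upcoming steps asynchronously, so workers
        # prepare future batches while the current one is being trained on.
        jobs = [pool.apply_async(sample_subgraph, (seed,)) for seed in range(400)]
        for job in jobs:
            batch = job.get()  # blocks only when sampling lags behind training
            train_step(batch)
```

With enough workers, sampling time overlaps almost entirely with GPU compute, which is why core count has such a direct effect on total pre-training wall-clock time.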