end-to-end GNN training and inference #11
FeatGraph is now partially integrated into DGL; see https://github.com/dmlc/dgl/tree/master/featgraph. Some of the techniques in FeatGraph (e.g., feature-dimension parallelization) have been used to optimize the hand-written kernels in DGL, while others (mainly tiling) are not incorporated. If you want to do end-to-end benchmarking, it should be fine to just run the master-branch DGL.
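To make the parallelization idea mentioned above concrete, here is a minimal NumPy sketch (not FeatGraph's or DGL's actual code; all names are illustrative) of feature-dimension parallelization for the aggregation SpMM in a GNN: the dense feature matrix is split into column chunks, and each worker computes one chunk independently, writing to disjoint slices of the output.

```python
# Hypothetical sketch: feature-dimension parallelization of a CSR SpMM
# (sparse adjacency x dense node features). Each worker handles one
# column chunk of the feature matrix, so writes never overlap.
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def spmm_feature_parallel(indptr, indices, feats, chunk=16):
    """Row-wise CSR SpMM, parallelized over feature-dimension chunks."""
    n_rows = len(indptr) - 1
    out = np.zeros((n_rows, feats.shape[1]), dtype=feats.dtype)

    def run_chunk(start):
        end = min(start + chunk, feats.shape[1])
        for r in range(n_rows):
            cols = indices[indptr[r]:indptr[r + 1]]
            # Sum neighbor features, touching only this worker's columns.
            out[r, start:end] = feats[cols, start:end].sum(axis=0)

    with ThreadPoolExecutor() as pool:
        list(pool.map(run_chunk, range(0, feats.shape[1], chunk)))
    return out
```

The result is identical to a full sparse-dense multiply; the chunking only changes how work is scheduled, which is why it composes with hand-written kernels.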
Thank you @Huyuwei
Hello. I had one more question. Since FeatGraph, which is based on TVM, is partially integrated into DGL, is the GCN's forward and backward computation code written in TVM? Thank you.
No, DGL still uses cuSPARSE.
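For context on why cuSPARSE is the relevant library: the neighbor aggregation at the core of a GCN layer is a sparse-dense matrix multiply (SpMM), which is what cuSPARSE accelerates on GPU. A minimal NumPy sketch of one layer's forward pass (symmetric normalization omitted for brevity; the function and variable names here are illustrative, not DGL's API):

```python
import numpy as np

def gcn_layer_forward(A_hat, H, W):
    """One GCN layer: H' = ReLU(A_hat @ H @ W).

    A_hat : (n, n) normalized adjacency (dense here for clarity; the
            A_hat @ H step is the SpMM that cuSPARSE handles in DGL).
    H     : (n, f_in) node feature matrix.
    W     : (f_in, f_out) layer weight matrix.
    """
    Z = A_hat @ H                   # neighbor aggregation (SpMM in practice)
    return np.maximum(Z @ W, 0.0)   # dense GEMM + ReLU
```

The backward pass is structurally similar: it reuses an SpMM with the transposed adjacency to propagate gradients, so it is also served by cuSPARSE rather than TVM-generated code.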
Thank you @yzh119
Hello. Is the code for measuring end-to-end GNN training and inference time available to us?
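The thread does not point to a released benchmarking script, but end-to-end timing is typically just a wall-clock measurement around the training loop. A minimal sketch (the `run_epoch` workload is a hypothetical stand-in for one epoch of GNN training):

```python
import time

def benchmark(run_epoch, num_epochs=5, warmup=1):
    """Average wall-clock time per epoch, with warm-up runs excluded."""
    for _ in range(warmup):          # warm-up epochs (JIT, caches, allocator)
        run_epoch()
    start = time.perf_counter()
    for _ in range(num_epochs):
        run_epoch()
    return (time.perf_counter() - start) / num_epochs
```

Note that on GPU, CUDA kernel launches are asynchronous, so you would need to synchronize the device (e.g., `torch.cuda.synchronize()` when using PyTorch) before reading the clock for the numbers to be meaningful.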