
end-to-end GNN training and inference #11

Open

Dibbya40 opened this issue Jun 17, 2021 · 5 comments

Comments
@Dibbya40

Hello. Is the code for measuring end-to-end GNN training and inference time available?

@Huyuwei
Contributor

Huyuwei commented Jun 18, 2021

FeatGraph is now partially integrated into DGL. Check https://github.com/dmlc/dgl/tree/master/featgraph

Some of the techniques in FeatGraph (e.g., feature-dimension parallelization) have been used to optimize the hand-written kernels in DGL, while others (mainly tiling) have not been incorporated.
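To make the idea concrete, here is a minimal pure-Python sketch of feature-dimension parallelization (not FeatGraph or DGL code; all names are hypothetical): the feature axis of a sparse-dense multiply is split into slices, and because each slice writes a disjoint set of output columns, the slices can run concurrently without synchronization.

```python
# Illustrative sketch of feature-dimension parallelization for SpMM.
# Each feature tile touches a disjoint set of output columns, so tiles
# can be computed by separate threads (or GPU thread blocks) safely.

from concurrent.futures import ThreadPoolExecutor

def spmm_slice(rows, cols, vals, X, f_start, f_end, out):
    """Aggregate over edges for one slice [f_start, f_end) of the
    feature dimension: out[r][f] += vals[e] * X[c][f] for each edge e."""
    for e in range(len(rows)):
        r, c, v = rows[e], cols[e], vals[e]
        for f in range(f_start, f_end):
            out[r][f] += v * X[c][f]

def spmm_feature_parallel(rows, cols, vals, X, num_nodes, tile=2):
    feat_dim = len(X[0])
    out = [[0.0] * feat_dim for _ in range(num_nodes)]
    with ThreadPoolExecutor() as pool:
        futures = [
            pool.submit(spmm_slice, rows, cols, vals, X,
                        f, min(f + tile, feat_dim), out)
            for f in range(0, feat_dim, tile)
        ]
        for fut in futures:
            fut.result()  # propagate any worker exception
    return out

# Tiny graph in COO form: edges (1<-0), (0<-1), (1<-1), unit weights.
rows = [1, 0, 1]
cols = [0, 1, 1]
vals = [1.0, 1.0, 1.0]
X = [[1.0, 2.0, 3.0, 4.0],
     [5.0, 6.0, 7.0, 8.0]]
print(spmm_feature_parallel(rows, cols, vals, X, num_nodes=2))
# → [[5.0, 6.0, 7.0, 8.0], [6.0, 8.0, 10.0, 12.0]]
```

On a GPU, the same disjoint-columns property is what lets each thread block own a feature slice; the sketch only shows the partitioning logic, not the actual kernel.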

If you want to do end-to-end benchmarking, it should be fine to just run DGL from the master branch.
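For reference, a generic end-to-end timing harness might look like the sketch below (not DGL-specific; `train_epoch` is a hypothetical stand-in for a full forward + backward + update loop): warm-up iterations are discarded so caches, allocators, and any JIT compilation don't skew the average.

```python
# Generic end-to-end benchmarking sketch: time whole epochs, averaging
# over several runs after a warm-up phase.

import time

def benchmark(train_epoch, warmup=2, runs=5):
    """Return mean seconds per epoch, excluding warm-up iterations."""
    for _ in range(warmup):      # warm-up: caches, allocator, JIT
        train_epoch()
    start = time.perf_counter()
    for _ in range(runs):
        train_epoch()
    return (time.perf_counter() - start) / runs

# Dummy workload standing in for a real GNN training epoch.
def dummy_epoch():
    sum(i * i for i in range(10000))

print(f"mean epoch time: {benchmark(dummy_epoch):.6f} s")
```

When timing GPU work, one would also synchronize the device before reading the clock (e.g., `torch.cuda.synchronize()` in a PyTorch-backed DGL setup), since kernel launches are asynchronous.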

@Dibbya40
Author

Thank you @Huyuwei

@Dibbya40
Author

Dibbya40 commented Aug 5, 2021

Hello. I had one more question. Since FeatGraph, which is based on TVM, is only partially integrated into DGL, are the GCN's forward and backward computations written in TVM?

Thank you.

@yzh119

yzh119 commented Aug 5, 2021

No, DGL still uses cuSPARSE.
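For context, the neighbor-aggregation step of a GCN layer reduces to a sparse-dense matrix multiply (roughly Â·X), which is the kind of kernel handed to cuSPARSE on GPU. Below is a minimal pure-Python CSR SpMM showing what that product computes (an illustration only, not DGL or cuSPARSE code):

```python
# Minimal CSR SpMM: Y = A @ X, where A is sparse (CSR) and X is dense.
# This is the core operation of GCN neighbor aggregation.

def csr_spmm(indptr, indices, data, X):
    """Compute Y = A @ X for a CSR matrix A (indptr, indices, data)."""
    n_rows = len(indptr) - 1
    feat_dim = len(X[0])
    Y = [[0.0] * feat_dim for _ in range(n_rows)]
    for r in range(n_rows):
        # Nonzeros of row r live in data[indptr[r]:indptr[r+1]].
        for e in range(indptr[r], indptr[r + 1]):
            c, v = indices[e], data[e]
            for f in range(feat_dim):
                Y[r][f] += v * X[c][f]
    return Y

# 2-node graph with self-loops; normalization folded into `data`.
indptr = [0, 2, 4]
indices = [0, 1, 0, 1]
data = [0.5, 0.5, 0.5, 0.5]
X = [[2.0, 4.0], [6.0, 8.0]]
print(csr_spmm(indptr, indices, data, X))  # → [[4.0, 6.0], [4.0, 6.0]]
```

The backward pass of this layer is another SpMM against the transposed adjacency, which is why a fast sparse library covers both directions of training.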

@Dibbya40
Author

Dibbya40 commented Aug 5, 2021

Thank you @yzh119

3 participants