Use end-to-end DGL scripts to run FeatGraph #13
Comments
You might check out this branch of DGL:
Thanks for your reply. I just clarified my question by re-editing the post above; could you take another look? Thank you.
I used the DGL test scripts to run GCN on the PubMed and Cora datasets with one extra line of code:
I don't think FeatGraph outperforms cuSPARSE for GCN on GPU (see Table IV in the paper). Since DGL uses cuSPARSE, it's normal that you don't observe any acceleration here.
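For context on why cuSPARSE is the relevant baseline: GCN's neighbor aggregation reduces to an SpMM (sparse adjacency matrix times dense feature matrix), which is exactly the operation cuSPARSE and FeatGraph-generated kernels compute on GPU. Below is a minimal pure-Python sketch of CSR SpMM for illustration only; real implementations run this on the GPU over blocked/parallelized loops.

```python
# Illustrative pure-Python CSR SpMM: out = A @ X, where A is sparse (CSR)
# and X is a dense feature matrix stored as a list of rows.
def spmm_csr(indptr, indices, data, dense, num_cols):
    out = []
    for row in range(len(indptr) - 1):
        acc = [0.0] * num_cols
        # accumulate the features of this row's neighbors
        for k in range(indptr[row], indptr[row + 1]):
            col, val = indices[k], data[k]
            for j in range(num_cols):
                acc[j] += val * dense[col][j]
        out.append(acc)
    return out

# 2x2 adjacency [[0,1],[1,0]] times features [[1,2],[3,4]]
indptr, indices, data = [0, 1, 2], [1, 0], [1.0, 1.0]
print(spmm_csr(indptr, indices, data, [[1.0, 2.0], [3.0, 4.0]], 2))
# [[3.0, 4.0], [1.0, 2.0]]
```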
Thank you very much for your response. I am closing this issue. |
Sorry, I just noticed which branch you were using. Only the branch I mentioned (https://github.com/kira-lin/dgl/tree/tvm_integration) contains the complete code that uses the FeatGraph backend. Regarding the question in #14: yes, GAT is also supported (it was mentioned in the paper), and we can use it by compiling the
If you are interested in native sparse support in TVM, our work is coming soon; please stay tuned.
Hi, thank you for the kind response. For the branch https://github.com/kira-lin/dgl/tree/tvm_integration, what specific Python code do I need to write to use the FeatGraph backend? For example, is writing the following enough? The README file in https://github.com/kira-lin/dgl/tree/tvm_integration/featgraph only shows how to run test.py to verify correctness; however, test.py only contains a single test-case kernel:
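For readers unfamiliar with what a kernel test like test.py checks: the usual pattern is to run the optimized (compiled) kernel and a straightforward reference implementation on the same input and compare their outputs. The sketch below shows only that pattern; the function names are illustrative and do not come from the actual test.py.

```python
# Hedged sketch of a kernel correctness check: compare an "optimized"
# kernel against a simple reference. Names here are illustrative stand-ins,
# not the real FeatGraph/TVM kernels.
def reference_kernel(xs):
    return [2.0 * x for x in xs]

def optimized_kernel(xs):
    # stand-in for a compiled (e.g. TVM-generated) kernel
    return [x + x for x in xs]

def allclose(a, b, tol=1e-6):
    return len(a) == len(b) and all(abs(x - y) <= tol for x, y in zip(a, b))

print(allclose(optimized_kernel([1.0, 2.5]), reference_kernel([1.0, 2.5])))
# True
```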
These are the steps we followed:
Thank you very much for your help. |
Oh sorry, what I meant is the tvm-kernel branch.
Hi, the tvm-kernel branch you mentioned does not include the 'featgraph' folder. Therefore, I am not sure how to compile it for FeatGraph, or how to verify whether FeatGraph is installed correctly. Could you provide more instructions? Thank you.
The tvm-kernel branch is fully Python-based, and the FeatGraph kernels are triggered when you set the corresponding environment variable; see https://github.com/kira-lin/dgl/blob/tvm-kernel/python/dgl/sparse.py#L13-L16
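A hedged sketch of what such an environment-variable gate typically looks like. The actual variable name and logic live in the sparse.py lines linked above; `USE_TVM` is an assumption here, inferred from the `use_tvm` flag discussed in this thread, and is not verified against the real source.

```python
import os

# Hedged sketch of an environment-variable gate like the one in
# python/dgl/sparse.py on the tvm-kernel branch. The variable name
# USE_TVM is an assumption, not taken from the actual source.
def use_tvm_kernels() -> bool:
    return os.getenv("USE_TVM", "0") in ("1", "true", "True")

os.environ["USE_TVM"] = "1"
print(use_tvm_kernels())  # True
```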
Btw, I don't think you should expect a speedup from FeatGraph over DGL 0.8, because most of the optimized kernels have already been merged into DGL.
Based on line 13, we made sure use_tvm is True; unfortunately, it crashes. When use_tvm is False, it does run, but I suspect it is calling the DGL kernels. We are still interested in running FeatGraph end-to-end, so please let us know if there are any other instructions.
Would you mind sharing the error message so that we can debug why it crashes?
Here is the error I got:
This is due to the TVM version; you should use TVM 0.7.
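Since the thread indicates the tvm-kernel branch requires TVM 0.7, a quick pre-flight version check can save a failed build. The sketch below compares version strings only, so it runs without TVM installed; it is an illustrative helper, not part of DGL or FeatGraph.

```python
# Hedged sketch: check that an installed TVM version string matches the
# required 0.7 series before building the tvm-kernel branch. Pure string
# comparison, no TVM import needed; in practice you would pass
# tvm.__version__ as `installed`.
def tvm_version_ok(installed: str, required_prefix: str = "0.7") -> bool:
    parts = installed.split(".")
    req = required_prefix.split(".")
    return parts[: len(req)] == req

print(tvm_version_ok("0.7.0"))     # True
print(tvm_version_ok("0.8.dev0"))  # False
```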
Hi, I want to run FeatGraph end-to-end.
I have already built DGL (with FeatGraph) and run the test.py file successfully, following the instructions at https://github.com/dmlc/dgl/tree/master/featgraph.