Dear Nervana team,
First and foremost, great work!
If someone would like to use ngraph for GPP, instead of solely for DL, how could they add additional graph functions? Is there a document / example that describes the steps?
Thanks,
Pedro N.
First, this is an area quite likely to change significantly.
Look at op_graph.py. For a tensor operation, you have two routes: you can define a function that just composes a few existing ops, or you can define a new op. If you define a new op, you also need to add a handler in the CPU transformer and the GPU transformer that tells each one how to execute the op (in the future, I expect you'll be able to do that without modifying existing functions/methods). For the GPU, you need to reshape your tensor dimensions from arbitrary to whatever the kernels you are using expect, while for NumPy you just need tensor striding that NumPy can work with.
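To make the first route concrete, here is a minimal sketch of a composite "op" that is just an ordinary Python function building the graph out of existing ops, so no transformer changes are needed. It assumes the public `ng.log` / `ng.exp` frontend functions; the name `softplus` is purely illustrative:

```python
import ngraph as ng

# Route 1: a "new op" that is just a Python function composing existing ops.
# Nothing has to change in the CPU or GPU transformers, because the graph
# this builds contains only ops they already know how to execute.
def softplus(x):
    """Elementwise log(1 + exp(x)), built entirely from existing ngraph ops."""
    return ng.log(ng.exp(x) + 1.)
```

Route 2 (a genuine new Op subclass) is what requires the per-transformer execution handlers described above; the autodiff sketch below shows what such an op class itself can look like.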
If you need autodiff to work for your new op, the docs have a partial explanation of how autodiff works, and there are plenty of examples among the existing ops. The docs on this will be more extensive in the future.
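As a rough sketch of how autodiff hooks into an op, the pattern in op_graph.py is that each op contributes its own adjoint via a `generate_adjoints` method. The example below assumes the `TensorOp` base class and the `generate_adjoints` / `generate_add_delta` hooks; the import path and the `Square` op are illustrative assumptions, not part of the library:

```python
import ngraph as ng
from ngraph.op_graph.op_graph import TensorOp  # import path is an assumption


class Square(TensorOp):
    """Hypothetical elementwise x**2 op, shown only to illustrate the autodiff hook."""

    def __init__(self, x, **kwargs):
        # An elementwise op's output has the same axes as its input.
        super(Square, self).__init__(args=(x,), axes=x.axes, **kwargs)

    def generate_adjoints(self, adjoints, delta, x):
        # d(x**2)/dx = 2x, so the gradient flowing back to x is delta * 2x.
        # generate_add_delta accumulates this contribution into x's adjoint.
        x.generate_add_delta(adjoints, delta * 2. * x)
```

With something like that in place, `ng.deriv(Square(x), x)` would assemble the backward graph the same way it does for built-in ops; the transformers would still need an execution handler for `Square` before it could actually run.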