Introduce Abstract types for sparse arrays #577
base: master
Conversation
Interesting!
FYI, CUDA.jl already has quite some native sparse operations implemented (including broadcast), so you may want to look there for inspiration, or to port additional functionality.
Yes, I'm already using some of them. I recently needed to implement sparse support for Metal.jl, so I decided to implement it very generally here. I will certainly take inspiration from the CUDA sparse implementation.
I find the AbstractGPUArray implementation very useful, as it allows writing very generic methods for any GPU array, from CUDA.jl to Metal.jl and others.
Here, I try to implement such a feature for sparse arrays. It is still a draft, but the basic methods already seem to work: I can perform matrix-vector multiplication between a `JLSparseMatrixCSC` and a `JLVector`. What I would like to implement in this PR:
I think this is already enough for a single PR.
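As a rough sketch of the intended usage, the working matrix-vector product described above might look like the following. This is an assumption based on the PR description, not the final API: the `JLSparseMatrixCSC` constructor taking a host `SparseMatrixCSC` is hypothetical, while `JLVector` is the reference CPU-backed array from JLArrays.jl used to test generic GPU code paths.

```julia
using SparseArrays, JLArrays

# Host-side data: a sparse matrix and a dense vector.
A = sprand(Float32, 100, 100, 0.05)
x = rand(Float32, 100)

# Move both to the JLArray reference backend. The JLSparseMatrixCSC
# constructor here is assumed from the PR, not a confirmed API.
dA = JLSparseMatrixCSC(A)
dx = JLVector(x)

# Generic sparse mat-vec, dispatched through the new abstract sparse types,
# so the same method would serve CUDA.jl, Metal.jl, and other backends.
dy = dA * dx

# The device result should match the CPU reference computation.
@assert Array(dy) ≈ A * x
```

The point of the abstract-type layer is that `dA * dx` hits one generic method rather than a per-backend implementation, mirroring how AbstractGPUArray already works for dense arrays.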