🚀 Feature
We would like Opacus DP-SGD to work in the case where the neural network input is a torch sparse COO tensor.
Motivation
Similar to issue #350, there are cases where the input of the neural network is a torch sparse tensor. In our case, the data is exactly a torch sparse COO tensor, and it is impossible to fit its dense version into GPU memory. It would be great if Opacus DP-SGD (the grad sampler, etc.) were compatible with the input of the neural network being a sparse tensor.
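For context, here is a minimal sketch of the kind of input we have in mind (the sizes and values are hypothetical, not from our project); a matrix this large is cheap to hold in COO form but far too large to densify on a GPU:

```python
import torch

# Hypothetical sizes: a 100k x 100k matrix with only a handful of nonzeros.
indices = torch.tensor([[0, 1, 2],      # row coordinates of the nonzeros
                        [2, 0, 1]])     # column coordinates of the nonzeros
values = torch.tensor([1.0, 2.0, 3.0])
x = torch.sparse_coo_tensor(indices, values, size=(100_000, 100_000))

# The COO tensor stores only the nonzeros; x.to_dense() would allocate
# ~40 GB of float32, which does not fit on typical GPUs.
```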
Pitch
We would like Opacus to be compatible with the case where a torch sparse COO tensor is the neural network input. Currently, even if I modify grad_sample_module.py L62 from `= grad_sample` to `+= grad_sample` to prevent errors, the results are still incorrect: with a fixed seed, the resulting gradients differ between dense and sparse input. The model cannot be trained well with the sparse input, while it can with the dense input. Any suggestion on solving this issue would be a great help.
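To make the report concrete, here is a minimal sketch of the kind of comparison we run (the toy nn.Linear model and sizes are illustrative, not our actual model; depending on the PyTorch version, the sparse forward itself may need adjustment). With a fixed seed, the two runs should produce identical per-sample gradients, but they do not:

```python
import torch
import torch.nn as nn
from opacus.grad_sample import GradSampleModule

torch.manual_seed(0)
dense_x = torch.randn(8, 16)
sparse_x = dense_x.to_sparse()       # same values, sparse COO layout
y = torch.randn(8, 1)

def per_sample_grads(x):
    torch.manual_seed(0)             # identical weight init for both runs
    model = GradSampleModule(nn.Linear(16, 1))
    nn.functional.mse_loss(model(x), y).backward()
    return [p.grad_sample.clone() for p in model.parameters()]

# With correct sparse support these should all print True; currently the
# sparse run errors out or produces different gradients, as reported above.
for gd, gs in zip(per_sample_grads(dense_x), per_sample_grads(sparse_x)):
    print(torch.allclose(gd, gs))
```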
Alternatives
None.
Additional context
None.
Looking forward to hearing back from you, thank you in advance!
Sorry for my late response. Currently we use this in our own project, and it may be hard to release the code before publication. Nevertheless, I'll try to write a minimal example that reproduces the problem (maybe as a Jupyter/Colab notebook) and get back to you, hopefully in a few days. I apologize that I'm pretty busy this week...
I managed to write the following minimal example Colab.
Note that I modified grad_sample_module.py L62 from `= grad_sample` to `+= grad_sample` to prevent errors, as I mentioned earlier. The versions of the packages should not matter, I think. Please let me know if you find any errors in my code.
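For reference, a hedged paraphrase of that one-line change (this is not the exact Opacus source; the surrounding hook code is simplified, and `param`/`grad_sample` stand in for the local names):

```python
# Inside the backward hook that stores per-sample gradients: instead of always
# assigning (which errors or overwrites when the same parameter receives
# several gradient contributions, as can happen when a sparse op decomposes
# into multiple autograd nodes), accumulate into any existing grad_sample.
if getattr(param, "grad_sample", None) is not None:
    param.grad_sample += grad_sample   # the `+=` variant described above
else:
    param.grad_sample = grad_sample    # the original `=` assignment
```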
I guess the issue might be in the prepare_module() function in my code? If not, then it seems that Opacus does not support sparse tensor input correctly. Sorry, I'm not very familiar with Opacus, so a trivial mistake may have slipped in...
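One sanity check that may help narrow it down (a sketch under the same toy-model assumptions as above, not code from the notebook): verify that plain PyTorch, with no Opacus involved, already gives matching gradients for sparse vs. dense input. If this prints True while the Opacus run diverges, the mismatch would point at the grad samplers rather than at prepare_module():

```python
import torch
import torch.nn as nn

def grads(x, y):
    torch.manual_seed(0)             # same weight init for both runs
    model = nn.Linear(8, 1)
    nn.functional.mse_loss(model(x), y).backward()
    return [p.grad.clone() for p in model.parameters()]

torch.manual_seed(0)
dense_x = torch.randn(4, 8)
y = torch.randn(4, 1)

# True here means vanilla autograd treats the two layouts identically,
# isolating any remaining divergence to the Opacus per-sample machinery.
print(all(torch.allclose(a, b)
          for a, b in zip(grads(dense_x, y), grads(dense_x.to_sparse(), y))))
```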