Does this also happen when using FP16 precision for inference? Or is the error occurring despite having enough VRAM?
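A minimal sketch of what FP16 inference could look like here, assuming a torch model. `model` and `inputs` are stand-ins, not the actual objects in vision_node.py; the point is that casting the model and inputs to half precision before inference roughly halves VRAM usage:

```python
import torch

# Placeholder model and batch; substitute the vision node's own model/input.
model = torch.nn.Linear(16, 4)
inputs = torch.randn(1, 16)

if torch.cuda.is_available():
    # FP16 is only worthwhile on the GPU; keep FP32 on CPU.
    model = model.half().to("cuda")
    inputs = inputs.half().to("cuda")

with torch.no_grad():
    out = model(inputs)
```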
Current Behavior
When I try to run the vision node's model on CUDA, I get a CUDA out-of-memory error.
Expected Behavior
It should be possible to use CUDA when it is available.
How to reproduce the issue
Currently the device is hard-coded to CPU, but it can be changed in vision_node.py.
Resolving this issue also includes updating the torch and CUDA versions.
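The requested fallback behavior could be sketched as follows; this is a hypothetical helper, not the actual vision_node.py code, and assumes a torch-based setup:

```python
import torch

def select_device() -> torch.device:
    """Return a CUDA device if one is usable, otherwise fall back to CPU."""
    if torch.cuda.is_available():
        return torch.device("cuda")
    return torch.device("cpu")

# The node would then move its model to the selected device once at startup.
device = select_device()
```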