Windows - RuntimeError: No available kernel. Aborting execution. #25
I successfully installed the correct versions of torch (with CUDA 12.4 enabled) and xformers 0.0.28.post1, and I still get this error.
Check issue #17
Changing line 824 in Allegro/allegro/models/transformers/block.py
Changing line 13 in single_inference.py
Are there any other possible ways we can get this down to a reasonable time on a 24GB consumer GPU?
Adding the
@SoftologyPro Seems to make sense. I tested on an H100 with enable-cpu-offload; a single 100-step video takes 1h10min. That's why I wrote that the inference time will increase significantly.
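For context on why offloading costs so much time: CPU offload keeps weights in system RAM and copies each submodule to the GPU only while it runs, so every denoising step pays for host-to-device transfers. A generic sketch of the idea using forward hooks (this is not Allegro's actual implementation; the toy model is illustrative, and the moves degrade to no-ops on a CPU-only machine):

```python
import torch
import torch.nn as nn

def enable_simple_cpu_offload(model: nn.Module, device: torch.device):
    """Keep each child on CPU; move it to `device` only for its own forward."""
    def pre(module, args):
        module.to(device)   # pull weights onto the device just in time

    def post(module, args, output):
        module.to("cpu")    # push weights back to free device memory

    for child in model.children():
        child.to("cpu")
        child.register_forward_pre_hook(pre)
        child.register_forward_hook(post)

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = nn.Sequential(nn.Linear(16, 16), nn.ReLU(), nn.Linear(16, 4))
enable_simple_cpu_offload(model, device)

x = torch.randn(2, 16).to(device)
y = model(x)
print(y.shape)  # torch.Size([2, 4])
```

Because the transfers repeat on every forward pass, a 100-step diffusion loop multiplies that copy cost 100 times, which matches the large slowdowns reported above.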
No, I only have a single 4090. This interest came from a request for me to support Allegro in Visions of Chaos. But if it takes 2 hours on the best consumer GPU, it is too slow for local Windows. If some speed breakthrough is made, I will be happy to include it.
@SoftologyPro Currently I have no idea. I would suggest distillation to reduce the number of inference steps, e.g. from 100 steps to 4 steps, but that harms quality severely.
Trying to get this working under Windows.
I clone the repository, create a new venv, and try to install requirements.txt. xformers fails with
If I try to install torch first, before the requirements, it still fails.
So, I remove xformers and let the rest of the requirements finish.
Once they are done, I install xformers and torch using...
Then when I run single_inference I get
What version of xformers and torch do I need to get this to work under Windows?
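The install commands themselves are elided above, but for reference, here is a hedged sketch of a Windows install that matches the versions reported as working earlier in this thread (a CUDA 12.4 build of torch plus xformers 0.0.28.post1); the package set and order are assumptions, so adjust them to whatever the repo's requirements actually pin:

```shell
# Inside the activated venv, install a CUDA 12.4 build of torch first
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu124

# Then the xformers version reported working in this thread
pip install xformers==0.0.28.post1

# Finally the remaining dependencies, with xformers removed from requirements.txt
pip install -r requirements.txt
```

Installing torch before xformers matters because the xformers wheel is compiled against a specific torch release; letting requirements.txt resolve them together is what typically triggers the build failure described above.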