
Quantized model #24

Open
cocktailpeanut opened this issue Oct 27, 2024 · 1 comment

Comments

@cocktailpeanut

It takes 30 minutes to generate a 6-second video on a 4090, and I've heard 60 minutes on a 3090.

I think the project would gain significantly more traction if there were an OFFICIAL effort from the authors to provide a quantized version AND code to run those quantized models.

@nitinmukesh

A quantized model will not help. I am using one and it is very, very slow.

It fits well in 8 GB of VRAM (see attached screenshot),

but inference is very slow (this is with 20 steps, and it needs 100 steps to generate a decent video):
15%|████████▊ | 3/20 [19:25<1:49:36, 386.84s/it]

With 100 steps it shows 11 hours for a 6-second video.
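That 11-hour figure is consistent with the per-step rate in the progress bar above. A quick sanity check of the arithmetic, using the reported 386.84 s/it:

```python
# Per-step time reported by the tqdm progress bar (386.84 s/it).
seconds_per_step = 386.84

# Extrapolate to a full 100-step run.
total_seconds = 100 * seconds_per_step
total_hours = total_seconds / 3600

print(f"{total_hours:.1f} hours")  # prints "10.7 hours", i.e. roughly 11 hours
```

So quantization alone reduces memory, but at this step rate the wall-clock time stays impractical.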
