REQ: GPU weights and diffusion in low bits (for low VRAM) + FLUX bnb #1247
guiteubeuh started this conversation in Ideas · 0 comments
Hi,
Could you add an option somewhere in the preferences for custom GPU weight allocation and low-bit diffusion, so we can use the FLUX model and LoRAs on GPUs with under 10 GB of VRAM?
This is already implemented in FORGE, and I tested it with a 3080 10 GB: FLUX takes about 30 seconds to generate a 10-step 1280x720 image (FLUX dev FP8), while KRITA takes ages because my VRAM gets saturated. FORGE explains the issue: if there isn't enough free VRAM, the process slows down by roughly 10x!
Support for FLUX BNB NF4 would be a great addition too!
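For reference, here is a minimal sketch of what NF4 loading can look like with diffusers + bitsandbytes. This is only an illustration of the kind of memory saving I mean, not how this project's backend actually works; the model ID is the public FLUX.1-dev repo, and the step count and resolution match my FORGE test above.

```python
# Rough sketch only: assumes diffusers (>= 0.31) with bitsandbytes installed,
# not this project's actual backend. Loads the FLUX transformer in 4-bit NF4
# and offloads idle components to system RAM so it fits in ~10 GB of VRAM.
import torch
from diffusers import BitsAndBytesConfig, FluxPipeline, FluxTransformer2DModel

model_id = "black-forest-labs/FLUX.1-dev"

# Quantize the transformer weights to NF4 via bitsandbytes.
quant_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
transformer = FluxTransformer2DModel.from_pretrained(
    model_id,
    subfolder="transformer",
    quantization_config=quant_config,
    torch_dtype=torch.bfloat16,
)

pipe = FluxPipeline.from_pretrained(
    model_id,
    transformer=transformer,
    torch_dtype=torch.bfloat16,
)
# Keep only the active component on the GPU; the rest stays in system RAM,
# similar in spirit to the "GPU weights" budget FORGE exposes.
pipe.enable_model_cpu_offload()

image = pipe(
    "test prompt",
    num_inference_steps=10,
    height=720,
    width=1280,
).images[0]
image.save("flux_nf4_test.png")
```

NF4 weights cut the transformer's footprint to roughly a quarter of FP16, and the CPU offload keeps everything else out of VRAM, which is, as I understand it, roughly what the FORGE setting achieves.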
For now, I’ve stopped using KRITA entirely because of this issue. Thank you for your time; it’s greatly appreciated.