Wrap CUDA specific methods in checks for CUDA being the actual device / default to device of comfy's model_manager #1
Comments
Hi, thank you very much for pointing this out. I will implement this check in the next version of the node. I confess that all of this is a whole new world to me, and I'm learning a lot from people like you who take the time to comment and give detailed explanations and directions. Once again, thank you so much.
I'm happy to provide PRs on this module, as I have an extreme interest in making DemoFusion work. Do you want a buddy?
Wow, that would be amazing!! As this is the first time I'm creating a ComfyUI node, I confess that I have so many things to learn. My main goal for this project is exactly what you described in your other comment: make it more ComfyUI-ish. So welcome, and thank you!!!
I'm glad to help any way that I can. I am also new to ComfyUI development, so hopefully the two of us can hobble along like a couple of rank idiots and make some progress on this thing!
Possibly related to this, I'm receiving an error while trying to run the node on an Apple M2 system. It would be nice if the node used ComfyUI's own device selection. I have 96GB of RAM, so I'm happy to test whether the node works despite being memory hungry.
It's in my plans to implement this. But since right now I only own a PC with an NVIDIA card, it would be a bit complicated to test whether it actually works :(
The node currently implements the diffusers pipeline version of DemoFusion rather than using ComfyUI's own diffusion code. We intend to implement the algorithm using only Comfy code, and once that is done, it will work on your Mac.
Comfy supports DirectML for AMD cards out of the box, so there are AMD users.
Granted, DirectML is terrible at dealing with memory management internally, so I doubt this would work on their consumer cards (18GB on CUDA is probably the rough equivalent of 36GB on DirectML, and it OOMs instead of overflowing to system memory); there's a slim chance it would work on the W7800 or W7900, though, assuming anyone sane bought one. The current approach also prevents Intel (they don't have a card with enough VRAM yet) and, I suppose, Mac users, if anyone cares, from working.
I'm using a 4090 now, so this isn't an issue for me, but it's something to consider, and it doesn't look too hard based on the code. Whether it will work without further optimizations is another question, but an OOM is better than a crash from a device that doesn't exist.
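For what it's worth, here is a minimal sketch of what that check could look like, assuming ComfyUI's comfy.model_management helpers (get_torch_device, soft_empty_cache); these names come from ComfyUI itself rather than this node's code, so treat it as an illustration, not the actual fix:

```python
import torch
import comfy.model_management as mm

def get_pipeline_device() -> torch.device:
    # Let ComfyUI pick the device (cuda, mps, xpu, directml or cpu)
    # instead of hardcoding torch.device("cuda").
    return mm.get_torch_device()

def free_device_memory(device: torch.device) -> None:
    # torch.cuda.empty_cache() fails on machines without CUDA,
    # so only call it when we are actually running on a CUDA device.
    if device.type == "cuda" and torch.cuda.is_available():
        torch.cuda.empty_cache()
    else:
        # ComfyUI's per-backend cache clearing as a fallback.
        mm.soft_empty_cache()

device = get_pipeline_device()
# pipe = pipe.to(device)   # instead of pipe.to("cuda")
```

That way a non-CUDA device at worst runs out of memory instead of crashing on a CUDA call.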
As for CUDA: running the VAE in bf16 when that's the option selected in Comfy (I've yet to have a problem with it) is probably doable even if fp16 doesn't work, and bf16 with bf16 outputs is roughly 2x faster than fp16 with an fp32 accumulator or straight fp32 on current CUDA cards.
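If that route is taken, a rough sketch of deferring the VAE dtype to Comfy could look like the following; it assumes comfy.model_management.vae_dtype() (a ComfyUI helper, not part of this node), and the pipe/latents names are placeholders from the diffusers pipeline rather than the node's actual variables:

```python
import comfy.model_management as mm

device = mm.get_torch_device()
# vae_dtype() reports the dtype Comfy would use for its own VAE on this
# setup (bf16 where enabled/supported, float32 otherwise).
vae_dtype = mm.vae_dtype()

# pipe.vae = pipe.vae.to(device=device, dtype=vae_dtype)
# Decode in that dtype, then cast back to float32 for downstream nodes:
# images = pipe.vae.decode(latents.to(vae_dtype)).sample.float()
```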