Wrap CUDA specific methods in checks for CUDA being the actual device / default to device of comfy's model_manager #1

Open
NeedsMoar opened this issue Dec 12, 2023 · 7 comments

@NeedsMoar

Comfy supports DirectML for AMD cards out of the box, so there are AMD users.

Granted, DirectML is terrible at dealing with memory management internally, so I doubt this would work on their consumer cards (18GB on CUDA is probably the rough equivalent of 36GB on DirectML, and it OOMs instead of overflowing to system memory); there's a rough chance it would work on the W7800 or W7900, though, assuming anyone sane bought one. The current approach also prevents Intel (they don't have a card with enough VRAM yet) and, I suppose, Mac if anyone cares, from working.

I'm using a 4090 now, so this isn't an issue for me, but it's something to consider and doesn't look too hard based on the code. Whether it will work without further optimizations is another question, but an OOM is better than a crash from a device that doesn't exist.

As for CUDA: running the VAE in bf16 if that's the option selected in Comfy (I've yet to have a problem with it) is probably doable even if fp16 doesn't work, and bf16 with bf16 outputs is 2x faster than fp16 with an fp32 accumulator, or straight fp32, on current CUDA cards.
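
A minimal sketch of what the check could look like, assuming the node currently hardcodes the CUDA device; `comfy.model_management.get_torch_device()` is ComfyUI's device helper, and the dtype fallback logic here is illustrative rather than taken from this repo:

```python
import torch
import comfy.model_management as mm

def pick_device_and_dtype():
    # Ask ComfyUI which device it is managing (CUDA, DirectML, MPS, XPU, CPU)
    # instead of hardcoding torch.device("cuda").
    device = mm.get_torch_device()

    # Only touch CUDA-specific APIs when we are actually on a CUDA device.
    if device.type == "cuda":
        dtype = torch.bfloat16 if torch.cuda.is_bf16_supported() else torch.float16
    else:
        # DirectML / MPS / CPU: stay in fp32 until the other paths are tested.
        dtype = torch.float32
    return device, dtype

# Usage (hypothetical): move the pipeline's models to whatever device Comfy manages.
device, dtype = pick_device_and_dtype()
model = model.to(device, dtype=dtype)
```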

@deroberon
Owner

Hi, thank you very much for pointing this out. I will implement this check in the next version of the node. I confess that all this is a whole new world to me, and I'm learning a lot from people like you who take the time to comment and give detailed explanations and directions!

Once again, thank you so much.

@deroberon added the bug (Something isn't working) and enhancement (New feature or request) labels Dec 12, 2023
@ttulttul
Collaborator

I'm happy to provide PRs on this module as I have an extreme interest in making DemoFusion work. Do you want a buddy?

@deroberon
Owner

Waaaw, it would be amazing!! As it's the first time I'm creating a ComfyUI node, I confess that I have sooo many things to learn. And my main goal for this project is exactly what you described in your other comment: make it more ComfyUI-ish. So welcome and thank you!!!

@ttulttul
Collaborator

I'm glad to help any way that I can. I am also new to ComfyUI development, so hopefully the two of us can hobble along like a couple of rank idiots and make some progress on this thing!

@alessandroperilli

Possibly related to this, I'm receiving the following error while trying to run the node on an Apple M2 system:
`AssertionError: Torch not compiled with CUDA enabled`

It would be nice if the node used the ComfyUI device so that DirectML and MPS would be supported along with CUDA.

I have 96GB of RAM, so I'm happy to test whether the node works despite it being memory hungry.
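
For what it's worth, that assertion is what PyTorch raises whenever something calls `.cuda()` or a `torch.cuda.*` function on a build without CUDA support. A rough sketch of how such calls could be guarded; the helper names here are hypothetical, not from this repo, and `comfy.model_management.soft_empty_cache()` is ComfyUI's backend-agnostic cache cleanup:

```python
import torch
import comfy.model_management as mm

def move_to_comfy_device(tensor: torch.Tensor) -> torch.Tensor:
    # Instead of tensor.cuda(), which asserts on MPS/DirectML-only builds,
    # send the tensor to whatever device ComfyUI is managing.
    return tensor.to(mm.get_torch_device())

def free_memory():
    # Guard CUDA-only cache calls; on other backends fall back to ComfyUI's
    # own cleanup, which branches per backend internally.
    if torch.cuda.is_available():
        torch.cuda.empty_cache()
    else:
        mm.soft_empty_cache()
```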

@deroberon
Owner

It's in my plans to implement it. But as, right now, I only own a PC with an NVIDIA card, it would be a little bit complicated to test whether it's working :(

@ttulttul
Collaborator

The node implements the diffusers pipeline version of DemoFusion rather than using ComfyUI's own diffusion code. We intend to implement the algorithm using only Comfy code, and once that is done, it will work on your Mac.
