
Is my AMD card? RX580 8GB #1

Open
KillyTheNetTerminal opened this issue Dec 18, 2023 · 4 comments

Comments

@KillyTheNetTerminal

FETCH DATA from: C:\Users\WarMachineV10SSD3\Music\ComfyPortable\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Manager\extension-node-map.json
got prompt
text_encoder\model.safetensors not found
Loading pipeline components...: 100%|████████████████████████████████████████████████████| 8/8 [00:17<00:00, 2.23s/it]
ERROR:root:!!! Exception during processing !!!
ERROR:root:Traceback (most recent call last):
File "C:\Users\WarMachineV10SSD3\Music\ComfyPortable\ComfyUI_windows_portable\ComfyUI\execution.py", line 153, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
File "C:\Users\WarMachineV10SSD3\Music\ComfyPortable\ComfyUI_windows_portable\ComfyUI\execution.py", line 83, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
File "C:\Users\WarMachineV10SSD3\Music\ComfyPortable\ComfyUI_windows_portable\ComfyUI\execution.py", line 76, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
File "C:\Users\WarMachineV10SSD3\Music\ComfyPortable\ComfyUI_windows_portable\ComfyUI\custom_nodes\StableZero123-comfyui\stablezero123.py", line 103, in execute
pipeline.to('cuda:0')
File "C:\Users\WarMachineV10SSD3\Music\ComfyPortable\ComfyUI_windows_portable\ComfyUI\venv\lib\site-packages\diffusers\pipelines\pipeline_utils.py", line 864, in to
module.to(device, dtype)
File "C:\Users\WarMachineV10SSD3\Music\ComfyPortable\ComfyUI_windows_portable\ComfyUI\venv\lib\site-packages\torch\nn\modules\module.py", line 1145, in to
return self._apply(convert)
File "C:\Users\WarMachineV10SSD3\Music\ComfyPortable\ComfyUI_windows_portable\ComfyUI\venv\lib\site-packages\torch\nn\modules\module.py", line 797, in _apply
module._apply(fn)
File "C:\Users\WarMachineV10SSD3\Music\ComfyPortable\ComfyUI_windows_portable\ComfyUI\venv\lib\site-packages\torch\nn\modules\module.py", line 797, in _apply
module._apply(fn)
File "C:\Users\WarMachineV10SSD3\Music\ComfyPortable\ComfyUI_windows_portable\ComfyUI\venv\lib\site-packages\torch\nn\modules\module.py", line 820, in _apply
param_applied = fn(param)
File "C:\Users\WarMachineV10SSD3\Music\ComfyPortable\ComfyUI_windows_portable\ComfyUI\venv\lib\site-packages\torch\nn\modules\module.py", line 1143, in convert
return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None, non_blocking)
File "C:\Users\WarMachineV10SSD3\Music\ComfyPortable\ComfyUI_windows_portable\ComfyUI\venv\lib\site-packages\torch\cuda\__init__.py", line 239, in _lazy_init
raise AssertionError("Torch not compiled with CUDA enabled")
AssertionError: Torch not compiled with CUDA enabled

Prompt executed in 18.99 seconds
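The AssertionError above means the PyTorch wheel installed in the venv was built without CUDA support. A quick way to confirm which build is installed is to inspect `torch.version.cuda` and `torch.version.hip` from inside the same venv (a sketch; `describe_build` is an illustrative helper, not part of ComfyUI or PyTorch):

```python
def describe_build(cuda_version, hip_version):
    """Classify a PyTorch build from torch.version.cuda / torch.version.hip."""
    if cuda_version:
        return f"CUDA build ({cuda_version})"
    if hip_version:
        return f"ROCm build ({hip_version})"
    return "CPU-only build"

try:
    import torch  # may be absent outside the ComfyUI venv
    print(describe_build(torch.version.cuda, torch.version.hip))
    print("torch.cuda.is_available():", torch.cuda.is_available())
except ImportError:
    print("torch is not installed in this environment")
```

A CPU-only or DirectML setup will report "CPU-only build" and `torch.cuda.is_available(): False`, which is exactly the condition that makes `pipeline.to('cuda:0')` raise.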

@deroberon
Owner

Exactly. For now the implementation supports only CUDA, and AMD cards use ROCm, if I'm not wrong.

@KillyTheNetTerminal
Author

Ohh... okay, thanks for the information. Is there any chance it could work with DirectML in the future?

@deroberon
Owner

deroberon commented Dec 18, 2023

In fact... I don't have an AMD card to test with, but I was reading about PyTorch and ROCm, and it seems that nowadays ROCm is fully compatible with CUDA implementations. That makes me wonder whether the problem you are facing is simply that the PyTorch in your Python env was not compiled with CUDA support, as the error message says.

How exactly did you install ComfyUI, and on which OS are you running it? If you are not using the portable ComfyUI version, how did you install PyTorch in your Python env? And if you are using the portable version, how are you starting ComfyUI: with run_nvidia_gpu or run_cpu?
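The traceback points at a hard-coded `pipeline.to('cuda:0')` in stablezero123.py. A device-fallback sketch avoids the crash on builds without CUDA (the `pick_device` helper is illustrative, not part of the actual node code):

```python
def pick_device(cuda_available: bool) -> str:
    """Return a torch device string, falling back to CPU when CUDA is absent."""
    return "cuda:0" if cuda_available else "cpu"

try:
    import torch  # may be absent outside the ComfyUI venv
    device = pick_device(torch.cuda.is_available())
except ImportError:
    device = "cpu"

print("would move pipeline to:", device)
# pipeline.to(device)  # instead of the unconditional pipeline.to('cuda:0')
```

On a CPU-only torch build this selects "cpu" and the pipeline would run (slowly) instead of raising the AssertionError.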

@NeedsMoar

AMD on Windows still uses DirectML. It's incompatible with CUDA / torch builds targeting CUDA and is missing a lot of basic functionality (and speed). I'm not sure AMD has even released all of the libraries needed to make a custom build of PyTorch ROCm on Windows yet, and when they do, it'll probably be months before there's an official torch build.

DirectML is fundamentally useless in its current state when used from Python, anyway. It can run ONNX models quickly, but everything else has zero memory management and near-unusable speeds at image sizes above ~1024x1024.
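For completeness, a backend-selection sketch under the assumption that the torch-directml package is installed (`pip install torch-directml`) and exposes `torch_directml.device_count()`; whether a given pipeline actually runs acceptably on DirectML is a separate question, per the limitations noted above. The `choose_backend` helper is illustrative only:

```python
def choose_backend(has_cuda: bool, has_directml: bool) -> str:
    """Prefer CUDA, then DirectML, then plain CPU."""
    if has_cuda:
        return "cuda"
    if has_directml:
        return "directml"
    return "cpu"

try:
    import torch_directml  # assumption: optional package, Windows-only
    has_dml = torch_directml.device_count() > 0
except ImportError:
    has_dml = False

print("backend:", choose_backend(False, has_dml))
```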
