Replies: 1 comment
-
Tried everything to get ZLUDA working. You have to disable cuDNN; that's possible with torch, but not with ONNX Runtime. So no ZLUDA for this app :(

These are my DirectML benchmarks with different torch versions. The biggest problem is that onnxruntime-directml doesn't handle more than one thread, so this will have to do:

- python: 3.10.6 • torch: 2.3.1+cu118 • gradio: 4.44.0 • onnxruntime-directml 1.20.1 • torch-directml 0.2.5.dev240914
- python: 3.10.6 • torch: 2.4.1+cpu • gradio: 4.44.0 • onnxruntime-directml 1.20.1 • torch-directml 0.2.5.dev240914
- python: 3.10.6 • torch: 2.3.1+cu118 • gradio: 4.44.0 • onnxruntime-directml 1.20.1
- python: 3.10.6 • torch: 2.1.2+cu118 • gradio: 4.44.0 • onnxruntime-directml 1.15.1

Everything newer got me memory leaks and crashes.
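For reference, disabling cuDNN in PyTorch is a one-line flag (`torch.backends.cudnn.enabled`, a real PyTorch setting); ONNX Runtime's CUDA execution provider exposes no equivalent global switch, which matches the limitation described above. A minimal sketch, with the import guarded so it degrades gracefully where torch isn't installed:

```python
# Sketch: disable cuDNN so PyTorch falls back to its own CUDA kernels
# (the usual workaround when ZLUDA can't translate cuDNN calls).
# torch.backends.cudnn.enabled is a documented PyTorch flag.
try:
    import torch

    torch.backends.cudnn.enabled = False  # must be set before running models
    cudnn_disabled = not torch.backends.cudnn.enabled
except ImportError:
    cudnn_disabled = None  # torch not available in this environment
```

This only covers the torch side of a pipeline; any ONNX Runtime session in the same app will still go through its own provider and cannot be told to skip cuDNN this way.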
-
Anyone had any success story using either ROCm or ZLUDA with AMD?
Can't seem to get any decent performance out of DirectML on AMD.