
8007000E Not enough memory resources are available to complete this operation. #10

Open
muhademan opened this issue Oct 10, 2022 · 3 comments
Labels
bug Something isn't working

Comments

@muhademan

Describe the bug

When I run dml_onnx.py in the (amd_venv) environment from C:\amd-stable-diffusion\difusers-dml\examples\inference, I get an error like this:
```
(amd_venv) C:\amd-stable-diffusion\difusers-dml\examples\inference>dml_onnx.py
Fetching 19 files: 100%|███████████████████████████████████████████| 19/19 [00:00<00:00, 1966.19it/s]
2022-10-10 10:05:28.0893026 [E:onnxruntime:, inference_session.cc:1484 onnxruntime::InferenceSession::Initialize::<lambda_70debc81dc7538bfc077b449cf61fe32>::operator()] Exception during initialization: D:\a_work\1\s\onnxruntime\core\providers\dml\DmlExecutionProvider\src\BucketizedBufferAllocator.cpp(122)\onnxruntime_pybind11_state.pyd!00007FFA82D145C8: (caller: 00007FFA8336D326) Exception(1) tid(4790) 8007000E Not enough memory resources are available to complete this operation.

Traceback (most recent call last):
  File "C:\amd-stable-diffusion\difusers-dml\examples\inference\dml_onnx.py", line 220, in <module>
    image = pipe(prompt, height=512, width=768, num_inference_steps=50, guidance_scale=7.5, eta=0.0, execution_provider="DmlExecutionProvider")["sample"][0]
  File "C:\amd-stable-diffusion\amd_venv\lib\site-packages\torch\autograd\grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
  File "C:\amd-stable-diffusion\difusers-dml\examples\inference\dml_onnx.py", line 73, in __call__
    unet_sess = ort.InferenceSession("onnx/unet.onnx", so, providers=[ep])
  File "C:\amd-stable-diffusion\amd_venv\lib\site-packages\onnxruntime\capi\onnxruntime_inference_collection.py", line 347, in __init__
    self._create_inference_session(providers, provider_options, disabled_optimizers)
  File "C:\amd-stable-diffusion\amd_venv\lib\site-packages\onnxruntime\capi\onnxruntime_inference_collection.py", line 395, in _create_inference_session
    sess.initialize_session(providers, provider_options, disabled_optimizers)
onnxruntime.capi.onnxruntime_pybind11_state.RuntimeException: [ONNXRuntimeError] : 6 : RUNTIME_EXCEPTION : Exception during initialization: D:\a_work\1\s\onnxruntime\core\providers\dml\DmlExecutionProvider\src\BucketizedBufferAllocator.cpp(122)\onnxruntime_pybind11_state.pyd!00007FFA82D145C8: (caller: 00007FFA8336D326) Exception(1) tid(4790) 8007000E Not enough memory resources are available to complete this operation.
```
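For context, the traceback shows the failure happens while creating the UNet inference session. Reduced to a minimal sketch (with `so` and `ep` standing in for the session options and execution provider that dml_onnx.py sets up earlier):

```python
# Minimal sketch of the failing step, reduced from the traceback above.
# Assumes onnxruntime-directml is installed and that save_onnx.py has
# already produced onnx/unet.onnx.
import onnxruntime as ort

so = ort.SessionOptions()
ep = "DmlExecutionProvider"

# Session creation is where the DML allocator reserves buffers for the
# whole UNet graph; with a 512x768 pipeline this can exceed the available
# GPU/shared memory and fail with 8007000E.
unet_sess = ort.InferenceSession("onnx/unet.onnx", so, providers=[ep])
```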

How do I fix it?

Reproduction

No response

Logs

No response

System Info

Python 3.7.0, Windows 10 Pro 21H2
@muhademan muhademan added the bug Something isn't working label Oct 10, 2022
@Patometro06

Patometro06 commented Oct 13, 2022

This works for me:

Set the Windows page file to 32 GB.

Then edit the file examples\inference\dml_onnx.py and change the line
"image = pipe(prompt, height=512, width=512, num_inference_steps=50, guidance_scale=7.5, eta=0.0, execution_provider="DmlExecutionProvider")["sample"][0]"
to
"image = pipe(prompt, height=256, width=256, num_inference_steps=45, guidance_scale=8, eta=0.0, execution_provider="DmlExecutionProvider")["sample"][0]"

Also edit the file examples\inference\save_onnx.py and change the line
"convert_to_onnx(pipe.unet, pipe.vae.post_quant_conv, pipe.vae.decoder, text_encoder, height=512, width=512)"
to
"convert_to_onnx(pipe.unet, pipe.vae.post_quant_conv, pipe.vae.decoder, text_encoder, height=256, width=256)"

Run python save_onnx.py to rebuild the ONNX data.

Then try again: python dml_onnx.py

Don't forget to stay logged in with huggingface-cli.

In my case I ran it on an old A4-5050 quad-core with an RX 560 4 GB VRAM (OC), but it is not fully compatible.
[Screenshot: Dml_0001]
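If you'd rather not shrink the output, a hedged alternative sketch: ONNX Runtime's SessionOptions can trim peak memory at initialization. enable_mem_pattern and graph_optimization_level are real SessionOptions attributes (and the DirectML EP docs recommend disabling memory patterns), but whether this alone avoids 8007000E for this UNet is an assumption, not something verified in this thread.

```python
# Hedged sketch: lower ONNX Runtime's peak initialization memory via
# SessionOptions. Whether this avoids 8007000E for this model is an
# assumption, not verified in this thread.
import onnxruntime as ort

so = ort.SessionOptions()
so.enable_mem_pattern = False  # recommended off for the DirectML EP
so.graph_optimization_level = ort.GraphOptimizationLevel.ORT_ENABLE_BASIC

sess = ort.InferenceSession(
    "onnx/unet.onnx", so, providers=["DmlExecutionProvider"]
)
```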

@brentleywilson

@Patometro06 is there a way to do it without sacrificing image dimensions?

@Patometro06

> @Patometro06 is there a way to do it without sacrificing image dimensions?

I don't know; maybe with another version. But it would be better to use more than one GPU, to have more VRAM.
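For what it's worth, the DirectML EP does accept a device_id provider option to pick which adapter a session runs on; a single session still uses one device, so pooling VRAM across GPUs is speculative. A minimal sketch, assuming a second GPU at index 1:

```python
# Hedged sketch: select a specific GPU for the DirectML EP via the
# device_id provider option. This picks one adapter; it does not pool
# VRAM across multiple GPUs.
import onnxruntime as ort

providers = [("DmlExecutionProvider", {"device_id": 1})]  # assumed second GPU
sess = ort.InferenceSession("onnx/unet.onnx", providers=providers)
```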
