
[Bug]: unable to find a Visual Studio installation; try running Clang from a developer command prompt [-Wmsvc-not-found] #539

Open
2 of 6 tasks
F0xiiNat0r opened this issue Sep 24, 2024 · 10 comments

Checklist

  • The issue exists after disabling all extensions
  • The issue exists on a clean installation of webui
  • The issue is caused by an extension, but I believe it is caused by a bug in the webui
  • The issue exists in the current version of the webui
  • The issue has not been reported before recently
  • The issue has been reported before but has not been fixed yet

What happened?

I did a clean install just to try to get it working. I've attempted this several times with no difference: I ran "pip cache purge" between runs, deleted the venv folder, and ran the bat file again, but nothing changed.

On top of that, the sysinfo .json file only mentions something about pytorch_lightning.

Steps to reproduce the problem

  1. git clone this repo
  2. run webui-user.bat
  3. error eventually appears

What should have happened?

WebUI should've started fine

What browsers do you use to access the UI?

Other

Sysinfo

sysinfo-2024-09-24-00-24.json

Console logs

venv "D:\AI\stable-diffusion-webui-amdgpu\venv\Scripts\Python.exe"
WARNING: ZLUDA works best with SD.Next. Please consider migrating to SD.Next.
=============================================================================================================================
INCOMPATIBLE PYTHON VERSION

This program is tested with 3.10.6 Python, but you have 3.12.4.
If you encounter an error with "RuntimeError: Couldn't install torch." message,
or any other error regarding unsuccessful package (library) installation,
please downgrade (or upgrade) to the latest version of 3.10 Python
and delete current Python and "venv" folder in WebUI's directory.

You can download 3.10 Python from here: https://www.python.org/downloads/release/python-3106/

Alternatively, use a binary release of WebUI: https://github.com/AUTOMATIC1111/stable-diffusion-webui/releases/tag/v1.0.0-pre

Use --skip-python-version-check to suppress this warning.
=============================================================================================================================
Python 3.12.4 | packaged by Anaconda, Inc. | (main, Jun 18 2024, 15:03:56) [MSC v.1929 64 bit (AMD64)]
Version: v1.10.1-amd-9-g46397d07
Commit hash: 46397d078cff4547eb4bd87adc5c56283e2a8d20
Using ZLUDA in C:\ZLUDA
Installing requirements
Traceback (most recent call last):
  File "D:\AI\stable-diffusion-webui-amdgpu\launch.py", line 48, in <module>
    main()
  File "D:\AI\stable-diffusion-webui-amdgpu\launch.py", line 39, in main
    prepare_environment()
  File "D:\AI\stable-diffusion-webui-amdgpu\modules\launch_utils.py", line 618, in prepare_environment
    run_pip(f'install -r "{requirements_file}"', "requirements")
  File "D:\AI\stable-diffusion-webui-amdgpu\modules\launch_utils.py", line 148, in run_pip
    return run(
           ^^^^
  File "D:\AI\stable-diffusion-webui-amdgpu\modules\launch_utils.py", line 120, in run
    raise RuntimeError("\n".join(error_bits))
RuntimeError: Couldn't install requirements.
Command: "D:\AI\stable-diffusion-webui-amdgpu\venv\Scripts\python.exe" -m pip install -r "requirements_versions.txt" --prefer-binary
Error code: 1
stdout: Collecting setuptools==69.5.1 (from -r requirements_versions.txt (line 1))
  Using cached setuptools-69.5.1-py3-none-any.whl.metadata (6.2 kB)
Collecting GitPython==3.1.32 (from -r requirements_versions.txt (line 2))
  Using cached GitPython-3.1.32-py3-none-any.whl.metadata (10.0 kB)
Collecting Pillow==9.5.0 (from -r requirements_versions.txt (line 3))
  Using cached Pillow-9.5.0-cp312-cp312-win_amd64.whl.metadata (9.7 kB)
Collecting accelerate==0.21.0 (from -r requirements_versions.txt (line 4))
  Using cached accelerate-0.21.0-py3-none-any.whl.metadata (17 kB)
Collecting blendmodes==2022 (from -r requirements_versions.txt (line 5))
  Using cached blendmodes-2022-py3-none-any.whl.metadata (12 kB)
Collecting clean-fid==0.1.35 (from -r requirements_versions.txt (line 6))
  Using cached clean_fid-0.1.35-py3-none-any.whl.metadata (36 kB)
Collecting diffusers==0.30.2 (from -r requirements_versions.txt (line 7))
  Using cached diffusers-0.30.2-py3-none-any.whl.metadata (18 kB)
Collecting diskcache==5.6.3 (from -r requirements_versions.txt (line 8))
  Using cached diskcache-5.6.3-py3-none-any.whl.metadata (20 kB)
Collecting einops==0.4.1 (from -r requirements_versions.txt (line 9))
  Using cached einops-0.4.1-py3-none-any.whl.metadata (10 kB)
Collecting facexlib==0.3.0 (from -r requirements_versions.txt (line 10))
  Using cached facexlib-0.3.0-py3-none-any.whl.metadata (4.6 kB)
Collecting fastapi==0.94.0 (from -r requirements_versions.txt (line 11))
  Using cached fastapi-0.94.0-py3-none-any.whl.metadata (25 kB)
Collecting gradio==3.41.2 (from -r requirements_versions.txt (line 12))
  Using cached gradio-3.41.2-py3-none-any.whl.metadata (17 kB)
Collecting httpcore==0.15 (from -r requirements_versions.txt (line 13))
  Using cached httpcore-0.15.0-py3-none-any.whl.metadata (15 kB)
Collecting inflection==0.5.1 (from -r requirements_versions.txt (line 14))
  Using cached inflection-0.5.1-py2.py3-none-any.whl.metadata (1.7 kB)
Collecting jsonmerge==1.8.0 (from -r requirements_versions.txt (line 15))
  Using cached jsonmerge-1.8.0.tar.gz (26 kB)
  Installing build dependencies: started
  Installing build dependencies: finished with status 'done'
  Getting requirements to build wheel: started
  Getting requirements to build wheel: finished with status 'done'
  Preparing metadata (pyproject.toml): started
  Preparing metadata (pyproject.toml): finished with status 'done'
Collecting kornia==0.6.7 (from -r requirements_versions.txt (line 16))
  Using cached kornia-0.6.7-py2.py3-none-any.whl.metadata (12 kB)
Collecting lark==1.1.2 (from -r requirements_versions.txt (line 17))
  Using cached lark-1.1.2-py2.py3-none-any.whl.metadata (1.7 kB)
Collecting numpy==1.26.2 (from -r requirements_versions.txt (line 18))
  Using cached numpy-1.26.2-cp312-cp312-win_amd64.whl.metadata (61 kB)
Collecting omegaconf==2.2.3 (from -r requirements_versions.txt (line 19))
  Using cached omegaconf-2.2.3-py3-none-any.whl.metadata (3.9 kB)
Collecting open-clip-torch==2.20.0 (from -r requirements_versions.txt (line 20))
  Using cached open_clip_torch-2.20.0-py3-none-any.whl.metadata (46 kB)
Collecting onnx==1.16.2 (from -r requirements_versions.txt (line 21))
  Using cached onnx-1.16.2-cp312-cp312-win_amd64.whl.metadata (16 kB)
Collecting onnxruntime (from -r requirements_versions.txt (line 22))
  Using cached onnxruntime-1.19.2-cp312-cp312-win_amd64.whl.metadata (4.7 kB)
Collecting optimum (from -r requirements_versions.txt (line 23))
  Using cached optimum-1.22.0-py3-none-any.whl.metadata (20 kB)
Collecting olive-ai (from -r requirements_versions.txt (line 24))
  Using cached olive_ai-0.6.2-py3-none-any.whl.metadata (3.6 kB)
Collecting piexif==1.1.3 (from -r requirements_versions.txt (line 25))
  Using cached piexif-1.1.3-py2.py3-none-any.whl.metadata (3.7 kB)
Collecting protobuf==3.20.2 (from -r requirements_versions.txt (line 26))
  Using cached protobuf-3.20.2-py2.py3-none-any.whl.metadata (720 bytes)
Collecting psutil==5.9.5 (from -r requirements_versions.txt (line 27))
  Using cached psutil-5.9.5-cp36-abi3-win_amd64.whl.metadata (21 kB)
Collecting pytorch_lightning==1.9.4 (from -r requirements_versions.txt (line 28))
  Using cached pytorch_lightning-1.9.4-py3-none-any.whl.metadata (22 kB)
Collecting resize-right==0.0.2 (from -r requirements_versions.txt (line 29))
  Using cached resize_right-0.0.2-py3-none-any.whl.metadata (551 bytes)
Collecting safetensors==0.4.2 (from -r requirements_versions.txt (line 30))
  Using cached safetensors-0.4.2-cp312-none-win_amd64.whl.metadata (3.9 kB)
Collecting scikit-image==0.21.0 (from -r requirements_versions.txt (line 31))
  Using cached scikit_image-0.21.0.tar.gz (22.7 MB)
  Installing build dependencies: started
  Installing build dependencies: finished with status 'done'
  Getting requirements to build wheel: started
  Getting requirements to build wheel: finished with status 'done'
  Installing backend dependencies: started
  Installing backend dependencies: finished with status 'done'
  Preparing metadata (pyproject.toml): started
  Preparing metadata (pyproject.toml): finished with status 'error'

stderr:   error: subprocess-exited-with-error

  Preparing metadata (pyproject.toml) did not run successfully.
  exit code: 1

  [17 lines of output]
  + meson setup C:\Users\weiss\AppData\Local\Temp\pip-install-xh5o8o5n\scikit-image_8c48a5cc859245a98af30d842bf78bd5 C:\Users\weiss\AppData\Local\Temp\pip-install-xh5o8o5n\scikit-image_8c48a5cc859245a98af30d842bf78bd5\.mesonpy-jeo3m2v3 -Dbuildtype=release -Db_ndebug=if-release -Db_vscrt=md --native-file=C:\Users\weiss\AppData\Local\Temp\pip-install-xh5o8o5n\scikit-image_8c48a5cc859245a98af30d842bf78bd5\.mesonpy-jeo3m2v3\meson-python-native-file.ini
  The Meson build system
  Version: 1.5.2
  Source dir: C:\Users\weiss\AppData\Local\Temp\pip-install-xh5o8o5n\scikit-image_8c48a5cc859245a98af30d842bf78bd5
  Build dir: C:\Users\weiss\AppData\Local\Temp\pip-install-xh5o8o5n\scikit-image_8c48a5cc859245a98af30d842bf78bd5\.mesonpy-jeo3m2v3
  Build type: native build
  Project name: scikit-image
  Project version: 0.21.0

  ..\meson.build:1:0: ERROR: Unable to detect linker for compiler `clang -Wl,--version`
  stdout:
  stderr: clang: warning: unable to find a Visual Studio installation; try running Clang from a developer command prompt [-Wmsvc-not-found]
  clang: error: unable to execute command: program not executable
  clang: error: linker command failed with exit code 1 (use -v to see invocation)


  A full log can be found at C:\Users\weiss\AppData\Local\Temp\pip-install-xh5o8o5n\scikit-image_8c48a5cc859245a98af30d842bf78bd5\.mesonpy-jeo3m2v3\meson-logs\meson-log.txt
  [end of output]

  note: This error originates from a subprocess, and is likely not a problem with pip.
error: metadata-generation-failed

Encountered error while generating package metadata.

See above for output.

note: This is an issue with the package mentioned above, not pip.
hint: See above for details.

Drücken Sie eine beliebige Taste . . . (German: "Press any key . . .")

Additional information

Browser used: Opera GX, but that never played a role since I didn't get that far.

COMMANDLINE_ARGS=--skip-torch-cuda-test --medvram --use-zluda

@lshqqytiger
Owner

Downgrade to Python 3.11. There are some dependencies that do not provide wheels for Python 3.12.
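If you would rather not uninstall 3.12, the webui-user.bat launcher also lets you point at a specific interpreter via its PYTHON variable. A minimal sketch, assuming Python 3.11 lives at C:\Python311 (an assumed path; adjust to your actual install):

```bat
@echo off
rem webui-user.bat sketch: force the launcher to use Python 3.11.
rem C:\Python311\python.exe is an assumed path; check where your
rem 3.11 interpreter is actually installed.
set PYTHON=C:\Python311\python.exe
set COMMANDLINE_ARGS=--medvram --use-zluda

rem Delete the old 3.12-based venv first so it gets recreated:
rem     rmdir /s /q venv

call webui.bat
```

The venv must be recreated because it pins the interpreter it was originally built with.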

lshqqytiger self-assigned this Sep 24, 2024
@F0xiiNat0r
Author

F0xiiNat0r commented Sep 24, 2024

Downgrade to Python 3.11. There are some dependencies that do not provide wheels for Python 3.12.

OK, it worked, but now I get the same issue I've seen in other posts where it just can't load models and/or generate properly for some reason. Could it be trying to use my integrated graphics for models even though everything else on my PC is set up to use my dedicated GPU?

@lshqqytiger
Owner

need console logs.

@F0xiiNat0r
Author

need console logs.

OK, I booted everything up today and actually managed to get the model loaded, albeit with the "failed to create model quickly" message. It actually started generating, but ended with a CUDA out-of-memory error.
For reference: My GPU is a Gigabyte RX 7700 XT Gaming OC 12GB
Console log here:
https://pastebin.com/DpTSKxUM

@lshqqytiger
Owner

Try again after pulling the latest commits. It will show which device is being used in the console log.

@CS1o

CS1o commented Sep 25, 2024

My GPU is a Gigabyte RX 7700 XT Gaming OC 12GB Console log here: https://pastebin.com/DpTSKxUM

Remove --skip-torch-cuda-test from your webui-user.bat, then delete the venv folder and relaunch.
Also make sure you have the HIP SDK installed and the correct ROCm library files for your card.
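Spelled out as a webui-user.bat fragment (a sketch based on the args from the original report, minus the skipped test):

```bat
rem webui-user.bat: without --skip-torch-cuda-test, a GPU that torch
rem cannot actually reach fails loudly at startup instead of being
rem silently worked around.
set COMMANDLINE_ARGS=--medvram --use-zluda

rem Then, from the stable-diffusion-webui-amdgpu folder:
rem     rmdir /s /q venv
rem     webui-user.bat
```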

@F0xiiNat0r
Author

F0xiiNat0r commented Sep 26, 2024

Sorry for the wait; I was testing other options, like ComfyUI-Zluda (which, surprisingly, worked with fewer issues).

Remove --skip-torch-cuda-test from your webui-user.bat and then delete the venv folder and relaunch. ALso make sure you have HIP SDK installed and the correct ROCm Lib files for your card.

I did that now, and I have the HIP SDK with ROCm 6.1 (I temporarily switched to 5.7 for ComfyUI-Zluda).

https://pastebin.com/vj74pH3A
^ In this log I tried to load the same model three times. Twice it gave "RuntimeError: dictionary changed size during iteration"; on the third it worked, but when I then tried a quick test generation it gave "RuntimeError: invalid argument to reset_peak_memory_stats" almost immediately, then ended the generation with "RuntimeError: invalid argument to memory_allocated".

@F0xiiNat0r
Author

If I could still get some advice on what else to try, that would be nice; I'd still love to use this webui.

@CS1o

CS1o commented Oct 5, 2024

You have the wrong Python version installed.
You need to uninstall Python 3.11 and install Python 3.10.11 64-bit.
Then remove --skip-python-version-check from webui-user.bat and add --skip-ort --update-check.
Then delete the venv and .zluda folders from the stable-diffusion-webui-amdgpu folder.
Finally, relaunch webui-user.bat.

Let me know if it works, or what the cmd log shows if any errors appear.
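Put together, the suggested changes look roughly like this (the Python path is an assumption; use wherever 3.10.11 ends up installed):

```bat
rem webui-user.bat sketch after the steps above:
rem   - PYTHON points at a 3.10.11 install (assumed path)
rem   - --skip-python-version-check removed
rem   - --skip-ort and --update-check added
set PYTHON=C:\Python310\python.exe
set COMMANDLINE_ARGS=--medvram --use-zluda --skip-ort --update-check

rem Clean out the stale state before relaunching:
rem     rmdir /s /q venv
rem     rmdir /s /q .zluda
```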

@F0xiiNat0r
Author

You need to uninstall Python 3.11 and install Python 3.10.11 64-bit.
Then remove --skip-python-version-check from webui-user.bat and add --skip-ort --update-check.
Then delete the venv and .zluda folders from the stable-diffusion-webui-amdgpu folder.

Sorry I forgot to get back to you. It's working a lot better after doing these steps. I think my issue was that I tried following one of your guides before, where it said to make a C:\ZLUDA folder, but I must've messed up somewhere lol
Thanks again!
