Errors after new version release #195
I would like to add that I have now done a new install, twice, and I believe I have the exact same issue as pianogospel. Here is what the command window shows:

```
(venv) C:\AI\ai-toolkit>python run.py config/JL_lora_flux_24gb.yaml
#############################################
Running job: JL_flux_lora_v1
#############################################
Running 1 process
========================================
```
@Anothergazz Do you sport a 3090 by any chance? Just a guess from spotting quanto and fp8 in your error log. Ampere does not support float8, and qfloat8 is hardcoded in toolkit/stable_diffusion_model.py. For example, see the sketch below.
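A minimal sketch of the kind of call being referred to, assuming optimum-quanto's standard `quantize`/`freeze` API; the exact code in `toolkit/stable_diffusion_model.py` may differ, and the `use_fp8` flag is hypothetical, added here only to illustrate the workaround:

```python
# Sketch only: hardcoding qfloat8 breaks on GPUs without float8 support,
# so a guarded fallback to qint8 (shown here) is one possible workaround.
from optimum.quanto import freeze, qfloat8, qint8, quantize

def quantize_transformer(transformer, use_fp8: bool = True):
    # Per the comment above, float8 is not available on Ampere (e.g. a 3090),
    # so swapping in qint8 avoids the hardcoded qfloat8 path.
    weights = qfloat8 if use_fp8 else qint8
    quantize(transformer, weights=weights)
    freeze(transformer)
    return transformer
```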
Thanks for the suggestion, but I splashed out on a 4090 (in a moment of weakness) on Windows 11.
On Windows: I too can confirm that all builds published in the last few days crash after "Quantizing transformer". Request: I like the fact that this is very experimental, you guys are doing great, take your time, and thanks! My error:
I think I was running into the same issue, also with a 4090 and 32 GB of RAM. I was looking for something to correct, since I've been messing with CUDA toolkit and torch versions for a few hours, and ran into this comment: #169 (comment). I downgraded to optimum-quanto 0.2.4, the errors are gone, and the batch is running now. 10% in, but hoping it will be fine.
Same issue here on a 3090. Previous version of ai-toolkit still working fine.
SUCCESS! The latest build as of October 14th worked for me (RTX 4090 + 32 GB RAM) using:

You may need to adjust some directories here if the above installations have not done it automatically; ask an AI to explain. You also need Visual Studio 2022 (not just the build tools).

This is the list of all installed packages and their versions within the environment; no more than a couple of them may have been unnecessarily added/altered during troubleshooting. If you don't know what to do with this list, give it to an AI and ask it to help you compare it with yours: absl-py==2.1.0

IMPORTANT: As the amazing person above has advised us, the optimum-quanto downgrade to 0.2.4 applies here too.

I don't know if this next command enabled my build to work or not, but here it is. This was all done in Windows CMD in administrator mode to allow caching of all downloads, which is useful for reattempting the installation without downloading the same stuff.

Finally, at some point the build failed because of a "long path" type of error. Simply install this project close to your drive's root.

Thanks everyone!
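To compare your own environment against a reference package list like the one above, a minimal sketch (run inside the activated venv; `good_packages.txt` is a hypothetical file holding the reference list):

```
REM Capture this environment's package versions (Windows CMD).
pip freeze > my_packages.txt
REM Compare against the reference list (hypothetical file name).
fc my_packages.txt good_packages.txt
```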
One more question: how do I downgrade optimum-quanto from 0.2.5 to 0.2.4?
Either pin it directly with pip or pin it in requirements.txt and reinstall; a sketch of both is below.
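A minimal sketch of both options, run inside the project's activated venv:

```
REM Option 1: install the pinned version directly.
pip install optimum-quanto==0.2.4

REM Option 2: set the pin in requirements.txt (optimum-quanto==0.2.4, if it is
REM listed there), then reinstall from it.
pip install -r requirements.txt

REM Confirm which version is now installed.
pip show optimum-quanto
```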
Thanks inflamously, works flawlessly.
> Downgrade

Should all dependencies in requirements.txt be downgraded as well?
Hi @dene-, thank you!
I use pip to pin timm at or below 0.9.5, as shown below.
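The exact command, with the version specifier quoted so the shell does not treat `<` as redirection:

```
REM Pin timm to 0.9.5 or older.
pip install "timm<=0.9.5"
REM Optional: confirm the installed version.
pip show timm
```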
I need help. I tried to get help on Discord but nobody answered. I have been using ai-toolkit since August and it worked flawlessly. I tried to install the newer version on Windows (not an update, a fresh installation from the beginning) and I followed all the steps, but it simply doesn't work and shows multiple error messages. Can anyone tell me what the prerequisites are for the latest version of ai-toolkit? I have Python 3.10.11, NVIDIA CUDA toolkit cuda_11.8.r11.8, Visual Studio 2022, and cl.exe in an environment variable, but it doesn't work. Thanks for any help.
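For reference, these are the checks I run to see what the environment actually reports (a sketch; the versions listed above are simply what I have installed, not confirmed requirements for the latest ai-toolkit):

```
REM Windows CMD, with the venv activated.
python --version
nvcc --version
where cl
pip show torch optimum-quanto
```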