Problem about training #22
Comments
I wonder about it too.
Hi @Kangkang625
First, you should pre-train the inversion adapter, keeping all the other weights (including the unet) frozen. I hope this clarifies your doubts.
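The freezing scheme described above can be sketched in PyTorch. The two `nn.Linear` modules below are placeholders for the real pretrained unet and inversion adapter (which are large networks); only the freezing/optimizer pattern is the point:

```python
import torch
from torch import nn

# Placeholders for the real pretrained modules (hypothetical shapes).
unet = nn.Linear(8, 8)               # stands in for the Stable Diffusion unet
inversion_adapter = nn.Linear(4, 8)  # stands in for the inversion adapter

# Pre-training phase: freeze everything except the inversion adapter.
for p in unet.parameters():
    p.requires_grad = False
for p in inversion_adapter.parameters():
    p.requires_grad = True

# Only the adapter's parameters are handed to the optimizer, so the
# frozen unet never receives updates even if gradients flow through it.
optimizer = torch.optim.AdamW(inversion_adapter.parameters(), lr=1e-4)
```

With this setup the backward pass still propagates through the frozen unet, but no optimizer step ever changes its weights.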
Thanks for your answer @ABaldrati ! According to my understanding, the unet should be an extended version of the Stable Diffusion pipeline's unet. Thanks again for your great work and detailed answer!
When we pre-train the inversion adapter we use the standard Stable Diffusion inpainting model. In this phase we do not extend the unet.
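A minimal sketch of the input-channel bookkeeping behind this answer: the standard Stable Diffusion inpainting unet expects 9 input channels, and "extending" the unet in the later stage would mean widening this input with extra conditioning channels (the exact count of the extension is not stated here, so it is left hypothetical):

```python
# Channel counts for the standard Stable Diffusion inpainting unet input.
noisy_latent = 4    # VAE latent of the noisy image
masked_latent = 4   # VAE latent of the masked input image
mask = 1            # downsampled binary inpainting mask

standard_in_channels = noisy_latent + masked_latent + mask
print(standard_in_channels)  # 9: used unchanged during adapter pre-training
```

During inversion-adapter pre-training the stock 9-channel unet is used as-is; only the later stage widens the input.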
Hi, thank you for your great work!
I was trying to write training code and do some training, but I was confused by this passage from the paper: "We first train the EMASC modules, the textual-inversion adapter, and the warping component. Then, we freeze all the weights of all modules except for the textual inversion adapter and train the proposed enhanced Stable Diffusion pipeline." For the stage in Sec. 4.2, should I freeze the other weights (including the unet) and train only the textual-inversion adapter, or should I freeze the other weights and train the textual-inversion adapter and the unet together?