evaluate the fine-tuning textual inversion #2
Hello pribadihcr, once you have performed textual inversion using this script (https://github.com/brandontrabucco/da-fusion/blob/main/fine_tune.py), we have created a utility to check the generations by visual inspection: https://github.com/brandontrabucco/da-fusion/blob/main/generate_images.py. There are three arguments in that script that you will need to change:
Let me know if you have other questions I can help with! -Brandon
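(Aside: `generate_images.py` in the repo is the intended utility for this. Purely as an illustration of what the visual-inspection step can look like, here is a minimal sketch that tiles a folder of generated samples into one grid image for side-by-side review. The function name, tile size, and placeholder images are hypothetical, not taken from the repo.)

```python
import os
import tempfile

from PIL import Image


def make_grid(paths, cols=4, thumb=(128, 128)):
    """Tile generated images into a single grid for quick visual inspection."""
    thumbs = [Image.open(p).convert("RGB").resize(thumb) for p in paths]
    rows = (len(thumbs) + cols - 1) // cols
    grid = Image.new("RGB", (cols * thumb[0], rows * thumb[1]), "white")
    for i, im in enumerate(thumbs):
        grid.paste(im, ((i % cols) * thumb[0], (i // cols) * thumb[1]))
    return grid


# Demo with solid-color placeholder images standing in for real generations.
tmp = tempfile.mkdtemp()
paths = []
for i in range(6):
    p = os.path.join(tmp, f"gen_{i}.png")
    Image.new("RGB", (512, 512), (40 * i, 80, 120)).save(p)
    paths.append(p)

grid = make_grid(paths, cols=3)  # 6 images in 3 columns -> 2 rows
grid.save(os.path.join(tmp, "grid.png"))
```

Opening the saved grid makes it easy to spot failure modes (e.g. the concept not appearing, or the model collapsing to near-identical samples) at a glance.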
Hello @brandontrabucco,
Hello pribadihcr, looking at the script, we modified it to save only the learned placeholder-token embedding (see https://github.com/huggingface/diffusers/blob/main/examples/textual_inversion/textual_inversion.py). Once this extra state is saved, you can pass it back in via `--resume_from_checkpoint`. If you want to avoid re-training and use the current embeddings, you can also modify this line of code to load and use the fine-tuned embeddings instead of the embedding of the initializer token:

```python
if args.resume_from_checkpoint is not None:
    token_embeds[placeholder_token_id] = torch.load(
        args.resume_from_checkpoint)[args.placeholder_token]
else:
    token_embeds[placeholder_token_id] = token_embeds[initializer_token_id]
```

Let me know if you have additional questions! Best,
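For the snippet above to work, the checkpoint passed via `--resume_from_checkpoint` must be a dict mapping the placeholder token to its embedding tensor. A minimal sketch of that save/load round trip, assuming a 768-dimensional embedding (the CLIP ViT-L/14 text-encoder width used by Stable Diffusion v1); the token name and file name are placeholders, not taken from the repo:

```python
import os
import tempfile

import torch

# Assumed values: 768 matches the Stable Diffusion v1 text encoder;
# the placeholder token and file name here are hypothetical.
placeholder_token = "<my-concept>"
learned_embed = torch.randn(768)  # stands in for the trained embedding row

ckpt_path = os.path.join(tempfile.mkdtemp(), "learned_embeds.bin")

# Save the embedding keyed by the placeholder token, matching the
# torch.load(args.resume_from_checkpoint)[args.placeholder_token] lookup.
torch.save({placeholder_token: learned_embed}, ckpt_path)

# Restoring it reproduces the lookup used in the training script.
restored = torch.load(ckpt_path)[placeholder_token]
```

Any checkpoint with this `{token: tensor}` layout can be dropped into the embedding table without re-training.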
Hi,
how can I evaluate whether the fine-tuned textual inversion is good or not? Thanks