Same question as the title:
Are there erasure tokens for any other datasets (e.g. Flowers102, ImageNet, COCO, Caltech101)?
Or how can I obtain erasure-token weights for a custom dataset?
Thanks for your interest in our work! Yes, I'll upload and share these in a Google Drive link shortly.
There are two artifacts: (1) SD checkpoints fine-tuned to erase concepts, and (2) tokens specific to those SD checkpoints for DA-Fusion.
I will upload and share both for the missing datasets.
Thank you for the reply :)
If you'll excuse me, I have one more question.
I see that an embedding token is required when generating synthetic images with textual inversion. How can I generate an embedding .pt file for a specific dataset? Is fine_tune.py the right script? I want to create embedding tokens for the Flowers102 dataset.
Thank you for this impressive work.
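For reference, below is a minimal sketch of the standard textual inversion recipe (as implemented in diffusers) that produces such an embedding .pt file, here pointed at Flowers102 via torchvision. The base checkpoint, the placeholder token `<flower>`, the prompt template, and the hyperparameters are illustrative assumptions; the repo's fine_tune.py wraps a similar procedure but may expose different arguments, and a real DA-Fusion run learns one token per class rather than a single token for the whole dataset.

```python
# Minimal textual inversion sketch: learn one new token embedding on
# Flowers102 and save it as a .pt file. Hyperparameters are illustrative.
import torch
import torch.nn.functional as F
from torch.utils.data import DataLoader
from torchvision import datasets, transforms
from transformers import CLIPTextModel, CLIPTokenizer
from diffusers import AutoencoderKL, UNet2DConditionModel, DDPMScheduler

model_id = "CompVis/stable-diffusion-v1-4"  # assumed base checkpoint
tokenizer = CLIPTokenizer.from_pretrained(model_id, subfolder="tokenizer")
text_encoder = CLIPTextModel.from_pretrained(model_id, subfolder="text_encoder")
vae = AutoencoderKL.from_pretrained(model_id, subfolder="vae")
unet = UNet2DConditionModel.from_pretrained(model_id, subfolder="unet")
noise_scheduler = DDPMScheduler.from_pretrained(model_id, subfolder="scheduler")

# Register the placeholder token and grow the embedding table to fit it.
placeholder = "<flower>"  # hypothetical token name
tokenizer.add_tokens(placeholder)
placeholder_id = tokenizer.convert_tokens_to_ids(placeholder)
text_encoder.resize_token_embeddings(len(tokenizer))

# Initialize the new embedding from a semantically close existing token.
init_id = tokenizer.encode("flower", add_special_tokens=False)[0]
embeddings = text_encoder.get_input_embeddings()
with torch.no_grad():
    embeddings.weight[placeholder_id] = embeddings.weight[init_id].clone()

# Freeze everything except the token embedding table.
vae.requires_grad_(False)
unet.requires_grad_(False)
text_encoder.requires_grad_(False)
embeddings.weight.requires_grad_(True)

device = "cuda" if torch.cuda.is_available() else "cpu"
vae.to(device); unet.to(device); text_encoder.to(device)

transform = transforms.Compose([
    transforms.Resize(512), transforms.CenterCrop(512),
    transforms.ToTensor(), transforms.Normalize([0.5], [0.5]),
])
dataset = datasets.Flowers102(root="data", download=True, transform=transform)
loader = DataLoader(dataset, batch_size=1, shuffle=True)

optimizer = torch.optim.AdamW([embeddings.weight], lr=5e-4)
original_embeds = embeddings.weight.detach().clone()
keep = torch.ones(len(tokenizer), dtype=torch.bool, device=device)
keep[placeholder_id] = False  # rows to restore after each step

for step, (image, _) in enumerate(loader):
    if step >= 1000:  # illustrative number of optimization steps
        break
    # Encode the image into the latent space the UNet operates on.
    latents = vae.encode(image.to(device)).latent_dist.sample() * 0.18215
    noise = torch.randn_like(latents)
    timesteps = torch.randint(0, noise_scheduler.config.num_train_timesteps,
                              (latents.shape[0],), device=device)
    noisy_latents = noise_scheduler.add_noise(latents, noise, timesteps)

    # Condition the UNet on a prompt containing the placeholder token.
    ids = tokenizer(f"a photo of a {placeholder}", padding="max_length",
                    max_length=tokenizer.model_max_length, truncation=True,
                    return_tensors="pt").input_ids.to(device)
    hidden = text_encoder(ids)[0]
    pred = unet(noisy_latents, timesteps, encoder_hidden_states=hidden).sample

    loss = F.mse_loss(pred, noise)  # epsilon-prediction objective
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()

    # Only the placeholder's row may change; restore every other row.
    with torch.no_grad():
        embeddings.weight[keep] = original_embeds[keep]

# Save the learned vector in the {token: tensor} format diffusers expects.
learned = embeddings.weight[placeholder_id].detach().cpu()
torch.save({placeholder: learned}, "flowers102_learned_embeds.pt")
```

The saved file can then be loaded with StableDiffusionPipeline.load_textual_inversion("flowers102_learned_embeds.pt") before generating synthetic images.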