I see that the model is not very good at text-conditioned generation. How can I improve this? Maybe train the CLIP model itself, or just train the LDM for longer?
When I trained this on CelebA captions, I also found that the text-conditioned diffusion model performed very well on attributes that appear frequently in the captions (like hair color), but for less frequent words, the model wasn't honouring them at all.
I suspect training the LDM for longer (or gathering more images for the infrequent captions) should indeed improve the generation results for them.
You can certainly try training CLIP as well, but unless your captions contain very rare words (or words very different from what CLIP was trained on), training the LDM for longer should be more fruitful than training the CLIP model.
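To illustrate the suggested setup (train the LDM longer while leaving CLIP untouched), here is a minimal sketch of how the text encoder is typically frozen so that gradients only flow into the diffusion model. The modules below are hypothetical stand-ins, not the actual CLIP or UNet from this repo:

```python
import torch
import torch.nn as nn

# Stand-ins for the real components (hypothetical, for illustration only):
text_encoder = nn.Embedding(1000, 64)   # plays the role of the CLIP text encoder
unet = nn.Linear(64, 64)                # plays the role of the LDM denoiser

# Freeze the text encoder: its pretrained weights are kept fixed,
# and no gradients are computed for it during LDM training.
for p in text_encoder.parameters():
    p.requires_grad = False

# The optimizer only sees the diffusion model's parameters.
optimizer = torch.optim.AdamW(unet.parameters(), lr=1e-4)

tokens = torch.randint(0, 1000, (4, 8))      # dummy caption tokens
cond = text_encoder(tokens).mean(dim=1)      # pooled text conditioning
loss = unet(cond).pow(2).mean()              # placeholder for the diffusion loss
loss.backward()
optimizer.step()

# After backward(), only the UNet has gradients; CLIP stays pretrained.
print(text_encoder.weight.grad is None)  # True
print(unet.weight.grad is not None)      # True
```

Training longer then just means running more such steps over the dataset (ideally with the infrequent captions oversampled or augmented), rather than unfreezing `text_encoder`.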