Need help training a LoRA model #10652
CaledoniaProject asked this question in Q&A
I'm trying to train a LoRA on human faces and then generate photos with existing txt2img models.
Environment
Training data and resulting models
Training steps
I first used BLIP to caption the images, with `img, ` as the prefix / trigger word, and confirmed that the captioning succeeded:
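For context, the captioning step amounts to roughly this (a sketch, not my exact script; the BLIP checkpoint, the `*.jpg` glob, and the `.txt` caption extension are assumptions):

```python
from pathlib import Path
from PIL import Image
from transformers import BlipProcessor, BlipForConditionalGeneration

processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-base")
model = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-base")

for image_path in Path("/data/crop-test/img").rglob("*.jpg"):
    image = Image.open(image_path).convert("RGB")
    inputs = processor(images=image, return_tensors="pt")
    out = model.generate(**inputs, max_new_tokens=50)
    caption = processor.decode(out[0], skip_special_tokens=True)
    # Prepend the trigger word so every caption starts with "img, "
    image_path.with_suffix(".txt").write_text("img, " + caption)
```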
Then I started the training process with these settings (a rough equivalent command is sketched after the list):
- Base model: stabilityai/stable-diffusion-xl-base-1.0
- Model output folder: /data/crop-test/model
- Training image folder: /data/crop-test/img
- "No half VAE" enabled
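My understanding is that these settings boil down to a kohya-ss/sd-scripts call along these lines; this is only a sketch (the flag list is my reconstruction and omits the actual hyperparameters such as network dim, learning rate, and epochs):

```python
import subprocess

# Rough sketch of the sd-scripts invocation that (I assume) the GUI builds
# from the settings above; hyperparameter flags are intentionally omitted.
subprocess.run(
    [
        "accelerate", "launch", "sdxl_train_network.py",
        "--pretrained_model_name_or_path=stabilityai/stable-diffusion-xl-base-1.0",
        "--train_data_dir=/data/crop-test/img",
        "--output_dir=/data/crop-test/model",
        "--output_name=last",              # produces last.safetensors
        "--network_module=networks.lora",  # train a LoRA, not a full fine-tune
        "--save_model_as=safetensors",
        "--no_half_vae",                   # the "No half VAE" setting
    ],
    check=True,
)
```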
The training process completed successfully:
Generation steps
I first copied /data/crop-test/models/last.safetensors to /data. Then I created a pipeline script to generate outputs with the trigger word `img`.
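The pipeline script looks roughly like this (a sketch; the prompt text and the number of inference steps are placeholders, not my exact values):

```python
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
).to("cuda")

# Load the trained LoRA that was copied to /data
pipe.load_lora_weights("/data", weight_name="last.safetensors")

image = pipe(
    prompt="img, a photo of a person, detailed face",  # prompt starts with the trigger word
    num_inference_steps=30,
).images[0]
image.save("output.png")
```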
After a while the results are generated, but they look nothing like the inputs.
These are the inputs:
These are the outputs:
I have no idea what's wrong; I can't find anything useful online, and ChatGPT didn't help either.
Does anyone know what's wrong?