Onboarding Guide for Newcomers
Welcome to OneTrainer!
OneTrainer is your all-in-one solution for training diffusion models. Although the user interface (UI) looks simple, it can be deceptive. This guide provides a clear walkthrough to help you navigate the UI and configure your training settings as a beginner.
Please note that this is not a comprehensive training guide; it focuses mostly on training a LoRA.
In the top left, next to the "OneTrainer" logo, you'll find a blank dropdown list for 'configs' (presets). As a beginner, select the preset that matches what you want to train.
Below that, there's a tab bar with the active tab highlighted in blue. Click on the general tab.
Define the file paths for the Workspace Directory and the Cache Directory if they aren't already set. For now, don't change any other settings on this page. If you have an RTX 4090, consider increasing the dataloader threads to 8 (be cautious, as setting this too high can cause VRAM issues).
Navigate to the model tab and set the Base model with its full path, for example: C:\stable-diffusion-webui\models\Stable-diffusion\v1-5-pruned.safetensors.
Next, set the Model Output Destination. This will be the filename of your trained output, for example: C:\stable-diffusion-webui\models\Lora\Astronaut-riding-a-horse.safetensors.
The filename you set here will be used in A1111/Comfy/Forge/Invoke to activate the LoRA. For example, in A1111, you would enter <lora:Astronaut-riding-a-horse:1>.
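The activation tag is simply the output filename without its extension. As a rough illustration (not part of OneTrainer), here is a small Python sketch that derives the tag from the example output path above:

```python
from pathlib import PureWindowsPath

# Illustrative only: in A1111, a LoRA is activated by its filename (minus the
# extension), wrapped in <lora:name:weight>.
output = PureWindowsPath(r"C:\stable-diffusion-webui\models\Lora\Astronaut-riding-a-horse.safetensors")
print(f"<lora:{output.stem}:1>")  # -> <lora:Astronaut-riding-a-horse:1>
```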
Navigate to the data tab and ensure everything is toggled on (these should be on by default). As a beginner, you want all of these options enabled.
Prepare your dataset with images and captions, either as separate text files or in the image names. While captions are optional, they are recommended: 90% of the work is gathering high-quality, diverse images and writing good (and varied) captions.
You can also use the Tools tab to open your dataset and generate captions using auto captioners/taggers, but this is beyond the scope of this guide.
Once your dataset is ready, navigate to the concepts tab. Click on add concept, then click on the newly added item. This will open a new modal (window).
In Path, provide the path to your dataset. In Prompt Source, indicate how you captioned your images. As a beginner, you should use img-txt file pairs: set it to "From text file per sample" and create file pairs, e.g. 001.jpeg & 001.txt (see the sketch below).
For more information on concept options, check the dedicated Concept page.
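If you want to double-check your dataset before training, here is a minimal Python sketch (not part of OneTrainer; the folder path is hypothetical) that verifies every image has a matching caption file for the "From text file per sample" prompt source:

```python
from pathlib import Path

# Hypothetical dataset folder; point this at your own concept path.
dataset = Path(r"C:\datasets\astronaut")
image_exts = {".jpg", ".jpeg", ".png", ".webp"}

# For "From text file per sample", each image (e.g. 001.jpeg) needs a caption
# file with the same name (001.txt) in the same folder.
for image in sorted(dataset.iterdir()):
    if image.suffix.lower() in image_exts:
        caption = image.with_suffix(".txt")
        print(f"{image.name}: {'ok' if caption.exists() else 'MISSING caption'}")
```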
You may click on the training tab, but we recommend sticking with the default values for now. Check this page for more information.
Sampling is optional but useful. It generates images using the model as it is being trained, allowing you to visually observe its progress. As a beginner, you might not know what to look for yet.
For more information, check this page.
Next, click on the LoRA tab.
LoRA rank: Leave it at the default value of 16 for SD1.5; for SDXL, try 8 or 16. Bigger does not equal better, and larger ranks overtrain more easily.
Leave the LoRA alpha at the default value of 1.0; it only multiplies the weights of the LoRA. Whenever you modify it, you must also modify the Learning Rate (see the sketch below).
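For intuition, here is a minimal sketch of a LoRA-style linear layer, assuming the widely used scale = alpha / rank convention (implementations differ, so check OneTrainer's documentation for its exact behavior). It shows why raising alpha amplifies the learned update, and hence why the learning rate should be adjusted along with it:

```python
import torch

class LoRALinear(torch.nn.Module):
    """Sketch of a LoRA adapter wrapped around a frozen linear layer."""

    def __init__(self, base: torch.nn.Linear, rank: int = 16, alpha: float = 1.0):
        super().__init__()
        self.base = base.requires_grad_(False)  # pretrained weights stay frozen
        self.scale = alpha / rank               # alpha rescales the learned update
        self.down = torch.nn.Linear(base.in_features, rank, bias=False)
        self.up = torch.nn.Linear(rank, base.out_features, bias=False)
        torch.nn.init.zeros_(self.up.weight)    # adapter starts as a no-op

    def forward(self, x):
        # Doubling alpha doubles the effective update for the same learned
        # weights, which is why the learning rate must be retuned alongside it.
        return self.base(x) + self.scale * self.up(self.down(x))
```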
Hit the big Start Training button. You can see the training progress at the bottom left, monitor it via the CLI, or get a more in-depth view by clicking the big Tensorboard button.
Finally, test the LoRA with inference software. Does it perform as you expect? Congratulations! If not, welcome to the world of diffusion; it's an iterative process. While extensive testing is beyond the scope of this guide, here is a keyword to search for:
XYZ grid extension A1111