OneTrainer is a tool I’ve been working on for a few months now. It can be used to fine-tune models, or train LoRAs and Textual-Inversion embeddings. More things will come in the future.
First the boring part: Where? https://github.com/Nerogar/OneTrainer Discord: https://discord.gg/KwgcQd5scF
Why do we need another tool? Don’t we already have “insert-other-tool-here”? OneTrainer was built from the ground up with a few different goals in mind:
- Easy to use: Installation is done through a single install script. Everything can be done with a simple but powerful UI.
- Extensibility: Some of the other training tools out there are mostly just wrappers around a collection of scripts. OneTrainer is a complete rewrite to make it as simple as possible to add new functionality.
- Powerful options: Even though the UI is built to be easy to use, and you can start training from many different presets, you can also adjust a lot of settings if you know what you are doing.
But what are the actual features?
- Training on top of many different stable diffusion base models: v1.5, v2.0, v2.1, v1.5-inpainting and v2.0-inpainting, with limited SDXL support.
- Different model formats: you don’t need to convert models, just select a base model in diffusers, checkpoint or safetensors format.
- Masked training: optionally create masks for each image, to let the trainer know on which parts of the image it should focus, and which should be ignored.
- Presets: configure your own presets right in the UI, or use one of many built-in presets.
- Library management: easily configure where the training images are coming from.
- Data augmentation: you can add random changes to your training samples.
- Full fine-tuning, LoRA training, and textual inversion embedding training.
- Regular samples during training: Set up a list of prompts to sample from regularly.
- Tensorboard: easily track the training progress through a web interface.
- UI and CLI mode: you can export all your configuration from the UI to run the training session from CLI.
- Noise Scheduler Rescaling from the paper “Common Diffusion Noise Schedules and Sample Steps are Flawed” https://arxiv.org/abs/2305.08891
- EMA training: You can train your own EMA model. I also added an “EMA on CPU” mode that doesn’t require any additional VRAM, but still has the benefits of EMA training.
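For reference, the core idea of that paper’s rescaling (“zero terminal SNR”) can be sketched in a few lines of plain Python. This is an illustrative sketch, not OneTrainer’s actual code; the function and variable names are my own:

```python
import math

def rescale_zero_terminal_snr(alphas_cumprod):
    """Shift and scale sqrt(alpha_bar) so the final timestep has
    zero terminal SNR, per arXiv:2305.08891 (illustrative sketch)."""
    sqrt_ab = [math.sqrt(a) for a in alphas_cumprod]
    first, last = sqrt_ab[0], sqrt_ab[-1]
    # Shift so the final value becomes exactly zero...
    shifted = [s - last for s in sqrt_ab]
    # ...then scale so the first value is preserved.
    scale = first / (first - last)
    return [(s * scale) ** 2 for s in shifted]

# Toy 4-step schedule: the last entry becomes exactly 0.
rescaled = rescale_zero_terminal_snr([0.99, 0.8, 0.4, 0.1])
```

The point of the fix is that standard schedules never actually reach pure noise at the last step, which mismatches inference; rescaling removes that gap.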
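The “EMA on CPU” idea can be sketched roughly like this; the averaged copy lives in CPU memory, so it costs system RAM instead of VRAM. Names here are illustrative, not OneTrainer’s internals:

```python
def ema_update(ema_params, model_params, decay=0.999):
    """One EMA step: nudge the averaged weights a small fraction
    toward the current model weights (illustrative sketch)."""
    return [decay * e + (1.0 - decay) * p
            for e, p in zip(ema_params, model_params)]

# In a real trainer, ema_params would stay on the CPU and the GPU
# weights would be copied over once per step before this update.
ema = [0.0, 1.0]
ema = ema_update(ema, [1.0, 1.0], decay=0.9)
```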
Some technical details:
OneTrainer is built as a set of tools that can be used either from the built-in UI or from CLI scripts, but it could also be integrated into other existing applications like Colab notebooks. Every part of it is modularized to make this as easy as possible. Probably the biggest improvement over other applications is the data processing pipeline. I built MGDS (https://github.com/Nerogar/mgds) from the ground up around the idea of a graph-based pipeline. This is similar to the graph in ComfyUI, but with a focus on training instead of image generation. One big advantage of this approach is that the same code can be reused for many different training tasks without copying and pasting.
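To illustrate the graph-pipeline idea, here is a toy sketch (not MGDS’s actual API): each node wraps one transformation and pulls samples from its upstream node, so the same nodes can be rewired into different pipelines for different training tasks:

```python
class Node:
    """Toy pipeline node: one transformation plus an optional upstream node."""
    def __init__(self, fn, upstream=None):
        self.fn = fn
        self.upstream = upstream

    def get(self, sample):
        # Pull the sample through the upstream chain first,
        # then apply this node's own transformation.
        if self.upstream is not None:
            sample = self.upstream.get(sample)
        return self.fn(sample)

# Toy stages: load -> augment -> batch-format
load = Node(lambda path: {"image": f"pixels-of-{path}"})
augment = Node(lambda s: {**s, "flipped": True}, upstream=load)
pipeline = Node(lambda s: {**s, "ready": True}, upstream=augment)

sample = pipeline.get("cat.png")
```

Swapping the `augment` node for a masking or caption node would give a different training pipeline while reusing the loading code unchanged.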
Huge thanks to u/th3Raziel a.k.a. devilismyfriend for his work on StableTuner. Without his work, OneTrainer would not exist.
I am planning on cooking a LORA today - I’ll give this a go and report back.
Please do. I’m thinking of starting to make LoRAs as well, and the tool looks like it would make the process much easier. Let me know how it goes for you.
Interesting
That sounds awesome. I’ll have to give it a shot. I haven’t messed with creating a model since DreamBooth got all janky several months ago.
Weird, trying to install it. After installation and running it for the first time, it’s giving me a bunch of errors, starting with “a matching Triton is not available”.