UPDATED: SDXL Local LORA Training Guide: Unlimited AI Images of Yourself



This guide contains everything you need to train your own LoRA (Low-Rank Adaptation) model for Stable Diffusion XL (SDXL) on your home PC, so you can train SDXL to generate images of yourself, or anyone else for that matter.

Optimizer extra arguments: scale_parameter=False relative_step=False warmup_init=False
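These flags go into kohya_ss's "Optimizer extra arguments" field as one space-separated string. A minimal sketch (illustrative only, not kohya's actual parser) of how such a string becomes the keyword arguments handed to the optimizer constructor:

```python
# Illustrative sketch: turn the space-separated "key=value" flags from the
# Optimizer extra arguments field into a kwargs dict for the optimizer.
import ast

def parse_optimizer_args(raw: str) -> dict:
    """Turn 'a=False b=1e-4' into {'a': False, 'b': 0.0001}."""
    kwargs = {}
    for pair in raw.split():
        key, _, value = pair.partition("=")
        try:
            kwargs[key] = ast.literal_eval(value)  # bools, ints, floats
        except (ValueError, SyntaxError):
            kwargs[key] = value                    # fall back to raw string
    return kwargs

args = parse_optimizer_args("scale_parameter=False relative_step=False warmup_init=False")
print(args)
```

These three flags disable Adafactor's adaptive learning-rate behavior so the learning rate you set in the GUI is actually used.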

--- If you get the error: "No data found. Please verify arguments"

--- Click the "Prepare training data" button on the Dataset Preparation tab.
---------------------------------------------------------------
--- If you get the error: Creating venv...
Python was not found; run without arguments to install from the Microsoft Store, or disable this shortcut from Settings > Manage App Execution Aliases.

--- it is because the "Add Python to PATH" box was not checked when Python was installed. Reinstall Python with that box checked.
---------------------------------------------------------------------------
--- If you get the error: The following folders do not match the required pattern number_text:

--- Click the TOOLS tab and fill in the info there (the location of your dataset images, and reg images if you have any), enter a folder in the DESTINATION TRAINING DIRECTORY box, and click PREPARE TRAINING DATA. It will create the right structure and move all the files in there for you. Then click COPY INFO TO FOLDERS TAB to make sure the right info is in the right place.
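The "number_text" pattern refers to kohya's image-folder naming scheme: each image folder is named "<repeats>_<trigger> <class>". A sketch of the layout that PREPARE TRAINING DATA produces ("myname", "person", and the "training" root are example values):

```python
# Sketch of the kohya training directory layout; names are example values.
from pathlib import Path

root = Path("training")            # example destination training directory
repeats = 20                       # times each image is repeated per epoch

# kohya expects image folders named "<repeats>_<trigger> <class>"
img_dir = root / "img" / f"{repeats}_myname person"
reg_dir = root / "reg" / "1_person"   # regularization images (optional)

for d in (img_dir, reg_dir, root / "log", root / "model"):
    d.mkdir(parents=True, exist_ok=True)

print(img_dir.name)  # 20_myname person — matches the number_text pattern
```

If you build this structure by hand instead of using the button, the leading number is what sets the repeat count, so a typo there changes your total training steps.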
Comments

For those of you wondering why he has the Network Rank (Dimension) so high (256), I am fairly certain it is because of the thousands of reference images of men he is using in his training. If you aren't using that many pictures and are just sticking to your 15-50 reference pictures, you are probably fine leaving it at 32-64, unless you're training something less human-like. This will also cut down on your training time immensely! Also, yes: unless you want your training to stop early (around 3 epochs out of the 10), make sure to change your "Max train steps" to 0 (the default is 1600). Good video though, thank you!

undertalebob

What I would do differently is use WD14 captioning, since it captures more details of the picture. Also, my settings need only one hour of training on my 4060; I have to check the difference. Here is a nice trick: once training is done, it saves a settings file in the result folder. So if you need to train another model, you just load the settings file again, change the pictures, captions, and model name, and then hit start. :-)

metanulski

Great video. Clear, no hype and to the point. Thanks.

Artp

That's probably the best and most useful LoRA guide I've seen so far. Thank you very much, it helped me a lot!

OCGamingz

Thank you for producing this video, it has helped me tremendously in figuring out the training settings. I realized that the LoRAs I trained without any regularization images look better than those trained with them. Been having great fun rendering many iterations of my alter egos.

weeliano

Bro, this tutorial is so straightforward, and I really appreciate that you took the time to do an updated version. The AI world is evolving so fast that tutorials made 6 months ago are outdated; the interface for kohya changed a little bit, and your tutorial walked me through the new version step by step... I just clicked Start Training, and now I'm waiting for it to finish so I can run it and check how the LoRA comes out.

Thanks again!

maxdeniel

Exactly what I was looking for. Thank you.

Mranshumansinghr

I noticed your Optimizer was on Adafactor by default. Mine wasn't, so I changed it. You didn't mention the setting for LR Scheduler, but I see in the video yours is set to Constant; mine was set to Cosine. I changed it to match yours, but my LoRAs came out goofy, and I got 3 instead of 10 somehow. Could that have anything to do with it?

jasonlisenbee

Thank you so much, I'm so happy you updated this! However, I can't seem to find your low VRAM config file; the Patreon link only leads to the 3090 one along with the regularization files. I may have missed something (and it's not a big deal), but I thought I'd bring it up just in case. Thanks!

lilly

Very good tutorial! Everything worked on my end; I just had to create the “log”, “images” and “models” folders myself, as it didn't do it automatically.
My model works perfectly, thank you! 🙏

zei_lia

Thanks for the tutorial... one thing I did differently was use BLIP2 for captioning, which IMO produced a much more detailed caption of each image... at that point I didn't have a prefix, so I used ChatGPT to make me a Windows .bat file to add the prefix (trigger word) to all the txt files. Great tutorial, thanks again!

NateMac
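The .bat trick described above can be done in a few lines of Python too: prepend the trigger word to every caption file. "myname" and the "captions" folder are example names, and the setup lines just create a sample file so the sketch is self-contained:

```python
# Prepend a trigger word to every caption .txt file in a folder.
# "myname" and "captions/" are example names, not from the video.
from pathlib import Path

trigger = "myname"                        # your LoRA trigger word (example)
caption_dir = Path("captions")            # folder holding the .txt captions
caption_dir.mkdir(exist_ok=True)          # demo setup; remove if folder exists
(caption_dir / "img001.txt").write_text("a man standing in a park", encoding="utf-8")

for txt in caption_dir.glob("*.txt"):
    text = txt.read_text(encoding="utf-8")
    if not text.startswith(trigger):      # skip files already prefixed
        txt.write_text(f"{trigger}, {text}", encoding="utf-8")

print((caption_dir / "img001.txt").read_text(encoding="utf-8"))
```

The `startswith` guard makes the script safe to run twice without doubling the prefix.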

I think mine stopped. I checked it after maybe an hour and it said it was complete, but I only had 3 finished files, not 10, and they were named Final 1, 2, and 3, which is strange. I closed the command window, and they're all pretty bad. I've got 16 GB of VRAM and matched the Network Rank to the numbers shown in the video; I'm wondering if that was a mistake. I'm trying it again, but have lowered the Network Rank and Network Alpha to 101 and 13, and I'm going to bed to see what I come back to in the morning.

jasonlisenbee

Does anyone know how to make this work on Colab?

Fanaz

This video was really good, but I was wondering why you had the Network Rank at 256 while the Network Alpha was at 1, which is a really small value compared to the Network Rank. I've seen people use 64/32 (a 2:1 ratio) or just use the same number for both. I'd love to hear your explanation!

birbb
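Some context for the rank/alpha question above: in standard LoRA, the learned update is multiplied by alpha / rank before being added to the base weights, so alpha is best read relative to the rank. A quick illustration (the scaling formula is from the LoRA method itself, not specific to this video):

```python
# LoRA scales its learned update by alpha / rank, so the effective strength
# of the adaptation depends on the ratio, not on alpha alone.
def lora_scale(alpha: float, rank: int) -> float:
    return alpha / rank

print(lora_scale(1, 256))   # the video's settings: the update is heavily damped
print(lora_scale(32, 64))   # the common 2:1 ratio: half strength
print(lora_scale(64, 64))   # alpha equal to rank: full strength
```

A very low alpha like 1 at rank 256 damps each step's contribution, which is sometimes paired with a higher learning rate; equal alpha and rank leaves the update unscaled.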

I'm completely new to all of this. Will this work using sd3_medium as the pretrained model or should I stick with your template from your Patreon for SDXL base 1.0?

NickPanick

Thanks for the tutorial! Your last one gave me my best results, so I was excited to try this. Question: I only got 3 tensor files after 10 hrs, and they're all quite big (over 1 GB). Not sure where I went wrong? I have epochs set to 10 like you said. Thanks!

RayMark

lol, I was just getting confused by your older tutorial!
Thanks for the update.

jungbtc

EDIT: for some reason it started working just fine; I have no idea what I did to it. I think it's OK, but I have to do more testing with the optimizers. So far, the training is: 30 training pics / 20 reps / 5 epochs / Rank 32 / Alpha 16 = 3-4 hrs.

Thank you for the tutorial, I really hoped I could create LoRAs. I followed it to the letter and I get "RuntimeError: NaN detected in latents". I'm on a brand new 4070 and the resolution is 512, 512, so I should have enough VRAM for it.

mnedix
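A quick sanity check on the step count implied by the comment above (assuming batch size 1 and no regularization images, which kohya counts as doubling the steps):

```python
# Rough total-step count for 30 pics / 20 reps / 5 epochs, batch size 1,
# no regularization images (kohya doubles the count when reg images are used).
images, repeats, epochs, batch_size = 30, 20, 5, 1

steps_per_epoch = images * repeats // batch_size
total_steps = steps_per_epoch * epochs
print(total_steps)  # 3000
```

Note that 3000 steps is well past the 1600-step default "Max train steps" cap mentioned in the top comment, which is exactly why runs stop around epoch 3 of 10 unless that cap is set to 0.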

@allyourtechai can you do this in ComfyUI instead of kohya?

El_Rey_Diamante

Can you make a tutorial on how to train a LoRA slider, such as a LoRA detailer, detail tweaker, etc.?

satoshidarikotamasara