SDXL LoRA Training Without a PC: Google Colab and DreamBooth



This guide contains everything you need to train your own LoRA (Low-Rank Adaptation) model for Stable Diffusion XL (SDXL) using Google Colab. That's right, you can train it for free, without needing a high-end gaming PC. This guide will show you how to train SDXL to generate images of yourself, or anyone else for that matter.
Comments

Just a note: 99% of the time, a free GPU connection is not available on Google Colab. For that, the user must change the setting from fp16 to bf16.
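For context, that precision switch is a single entry in the Colab's training parameters. A minimal sketch of what it might look like, assuming the parameter is named mixed_precision (the exact key name varies between AutoTrain versions). Note that bf16 generally requires an Ampere-class GPU such as the A100, so this applies to the paid Colab tiers rather than the free T4:

training_params = {
    # ... the rest of the parameters from the Colab form ...
    "mixed_precision": "bf16",  # assumption: change this from "fp16" when running on a bf16-capable GPU
}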

bahethelmy

Wanted to say thank you for this video! I've been looking for a tutorial like this one, and most of what I found was total BS about making money with fake influencers. I appreciate your detailed explanation and scientific approach to the topic.

nadiaprivalikhina

I have tried following this guide step by step, but my LoRA doesn't do anything. I can download other LoRAs and add them, and they work perfectly, but not when I add my own.
I am running Fooocus 2.2 on a Google Colab machine. My model is stable-diffusion-xl-base-1.0 and the LoRA was trained on it.
I followed the guide exactly and used DreamBooth LoRA with 8 pictures of a celebrity, using that celebrity's name as the prompt. The training takes around 2 hours and completes correctly, but when the LoRA is used in my Fooocus, the output looks nothing like it. :( Can you help us?

qzcckoj

Your issue with style is forgetting to uncheck Fooocus V2, Enhance, and Sharp. They drive the model toward realism.

RdBubbleButNerfed

For anyone having the problem of Fooocus not loading your model:

The current version of the script should output two files. One of them ends in "kohya". That one will work with Fooocus; the other one is in the wrong format.
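If it isn't obvious from the names which file is which, you can tell the two formats apart by their keys. A minimal sketch using safetensors; the file names below are placeholders, so substitute whatever the script actually produced:

from safetensors.torch import load_file

for path in ["pytorch_lora_weights.safetensors", "pytorch_lora_weights_kohya.safetensors"]:
    first_key = next(iter(load_file(path)))  # load_file returns a dict of tensor names to tensors
    print(path, "->", first_key)
    # kohya-format keys start with "lora_unet_" / "lora_te1_" / "lora_te2_", which Fooocus and A1111 expect;
    # diffusers-format keys start with "unet." / "text_encoder." and may be ignored by those UIs.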

kronostitananthem

My LoRA is ignored when I generate. ☹ Please help! I did the same process before and it worked, but it is not working now. There are a few changes in the AutoTrain interface (e.g., there is now something in the training parameters section: "vae_model": ""). I don't know what that is!

Akashwillisonline

Coming from your awesome Fooocus Colab tutorial! When it finished the training steps, it kept repeating something along the lines of "Running jobs: []," followed by "GET /is_model_training HTTP/1.1" in the output for a few hours. Is it supposed to do that? My dataset contains around 50-100 images.

MindSweptAway

Thanks for the video. Concise and very clear. But I am facing an issue which, judging from the comments, many others are facing as well. I created a LoRA using the above instructions (not on Colab, but on a GCP VM), but when I tried to use it in Fooocus with sd_xl_base_1.0 as the base model, the LoRA does not get loaded. Other LoRAs downloaded from civitai get loaded and work perfectly.

While debugging, I found that Fooocus expects LoRA keys in the following format:
'lora_unet_time_embed_0', 'lora_unet_time_embed_2', 'lora_unet_label_emb_0_0', 'lora_unet_label_emb_0_2', 'lora_unet_input_blocks_0_0', 'lora_unet_input_blocks_1_0_in_layers_0', 'lora_unet_input_blocks_1_0_in_layers_2', 'lora_unet_input_blocks_1_0_emb_layers_1', 'lora_unet_input_blocks_1_0_out_layers_0', 'lora_unet_input_blocks_1_0_out_layers_3', 'lora_unet_input_blocks_2_0_in_layers_0', 'lora_unet_input_blocks_2_0_in_layers_2', 'lora_unet_input_blocks_2_0_emb_layers_1', 'lora_unet_input_blocks_2_0_out_layers_0', 'lora_unet_input_blocks_2_0_out_layers_3', 'lora_unet_input_blocks_3_0_op', 'lora_unet_input_blocks_4_0_in_layers_0', 'lora_unet_input_blocks_4_0_in_layers_2', 'lora_unet_input_blocks_4_0_emb_layers_1', 'lora_unet_input_blocks_4_0_out_layers_0', 'lora_unet_input_blocks_4_0_out_layers_3', 'lora_unet_input_blocks_4_0_skip_connection', 'lora_unet_input_blocks_4_1_norm', 'lora_unet_input_blocks_4_1_proj_in', 'lora_unet_input_blocks_4_1_transformer_blocks_0_attn1_to_q', 'lora_unet_input_blocks_4_1_transformer_blocks_0_attn1_to_k',


Whereas the actual keys in the LoRA are in a slightly different format:
'unet.down_blocks.1.attentions.0.transformer_blocks.0.attn1.to_k.lora.down.weight', 'unet.down_blocks.1.attentions.0.transformer_blocks.0.attn1.to_k.lora.up.weight', 'unet.down_blocks.1.attentions.0.transformer_blocks.0.attn1.to_out.0.lora.down.weight', 'unet.down_blocks.1.attentions.0.transformer_blocks.0.attn1.to_out.0.lora.up.weight', 'unet.down_blocks.1.attentions.0.transformer_blocks.0.attn1.to_q.lora.down.weight', 'unet.down_blocks.1.attentions.0.transformer_blocks.0.attn1.to_q.lora.up.weight', 'unet.down_blocks.1.attentions.0.transformer_blocks.0.attn1.to_v.lora.down.weight', 'unet.down_blocks.1.attentions.0.transformer_blocks.0.attn1.to_v.lora.up.weight', 'unet.down_blocks.1.attentions.0.transformer_blocks.0.attn2.to_k.lora.down.weight', 'unet.down_blocks.1.attentions.0.transformer_blocks.0.attn2.to_k.lora.up.weight', 'unet.down_blocks.1.attentions.0.transformer_blocks.0.attn2.to_out.0.lora.down.weight',

@allyourtechai do you know how to resolve this issue? Or can anyone else help resolve it? Thanks!
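One possible fix for the key mismatch described above is to convert the diffusers-format state dict to kohya format before loading it in Fooocus, which is the format the "_kohya" file mentioned earlier uses. A minimal sketch, assuming a recent diffusers release that ships the convert_all_state_dict_to_peft and convert_state_dict_to_kohya helpers (check your version); the file names are placeholders:

from safetensors.torch import load_file, save_file
from diffusers.utils import convert_all_state_dict_to_peft, convert_state_dict_to_kohya

# load the diffusers-format LoRA (keys like "unet.down_blocks....lora.down.weight")
diffusers_sd = load_file("pytorch_lora_weights.safetensors")

# rename to peft-style keys, then to kohya-style keys (prefixed "lora_unet_" / "lora_te..."),
# which is the naming Fooocus and A1111 expect
kohya_sd = convert_state_dict_to_kohya(convert_all_state_dict_to_peft(diffusers_sd))
save_file(kohya_sd, "pytorch_lora_weights_kohya.safetensors")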

apoorvmishra

Excellent tutorial! Sadly, Google Colab keeps shutting down in the middle of training, like at 64% (training on only 10 images). I tried this for several days. Any solution, anyone? Thanks in advance!

nicolas.c

Wouldn't it be better to save a checkpoint to Google Drive every so often? I know I will come back to find the session disconnected and the LoRA file gone.
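Periodic backups are easy to add in the same Colab notebook. A minimal sketch, assuming the trainer writes its files to /content/output (adjust both paths to your setup); start this cell before launching training so the copies keep running in the background:

from google.colab import drive
import os, shutil, threading, time

drive.mount('/content/drive')

SRC = '/content/output'                      # assumption: where the trainer writes its outputs
DST = '/content/drive/MyDrive/lora_backups'  # destination folder on your Drive
os.makedirs(DST, exist_ok=True)

def backup_loop():
    # copy any .safetensors outputs to Drive every 10 minutes
    while True:
        if os.path.isdir(SRC):
            for name in os.listdir(SRC):
                if name.endswith('.safetensors'):
                    shutil.copy(os.path.join(SRC, name), DST)
        time.sleep(600)

threading.Thread(target=backup_loop, daemon=True).start()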

olvaddeepfake

Hi, thanks for the tutorial.
I tried generating a LoRA with the same method using 24 images, but when I tested it in Fooocus it didn't work.
It doesn't generate the subject it was trained on at all.

palashkumbalwar

My LoRA isn't working. I trained it with 11 images, and I tried using both a celebrity name and my own token in the prompt, but it still doesn't work as intended. I use A1111 and the base SDXL 1.0 model, but the results look nothing like me (each generation is a completely different man; it goes from an old white guy, to an Asian kid, to a muscular black man). I don't know what I'm doing wrong, any suggestions?
I also tried other LoRAs (not trained by me) and they all work beautifully.

culoacido

Awesome tutorial, thank you so much for sharing this video. It's going to help a lot of people like me with crappy GPUs.

monstamash

My Google Colab is stuck on this after loading the 4th of 7 pipeline components:

INFO: - "GET /is_model_training HTTP/1.1" 200 OK
INFO: - "GET /accelerators HTTP/1.1" 200 OK

It repeats this every few seconds. Help please?

ShashankBhardwaj

What can you do if the trained LoRA model is not visible in Stable Diffusion Automatic1111? Other XL LoRAs are visible.

nobody_dude

Great tutorial! Clear and to the point. Does anyone know if you can input .txt files with captions instead of the <enter your prompt here> field? Cheers

edmoartist

My results didn't come out that well, any troubleshooting tips? I got images of what looked like other people (they didn't look like the person or celebrity I put in).

ryxifluction

@allyourtechai Hey man, in the process you showed in this video, do DreamBooth and LoRA both work together?

CodeMania-ye

Thanks! I love this kind of video! I love Automatic1111.

TitohPereyra

Man, you're the best! I have a question: if the training got interrupted or stopped by accident, do I need to start everything all over again?

ChrisChan