Stable Diffusion XL (SDXL) DreamBooth: Easy, Fast & Free | Beginner Friendly

In this video, I'll show you how to train amazing DreamBooth models with the newly released SDXL 1.0! In addition, we will learn how to generate images with the SDXL base model and how to use the refiner to enhance the quality of the generated images.

The commands used for training in this video are as follows:

!pip install -U autotrain-advanced

!autotrain setup --update-torch

!autotrain dreambooth \
--model stabilityai/stable-diffusion-xl-base-1.0 \
--output output/ \
--image-path images/ \
--prompt "photo of sks dog" \
--resolution 1024 \
--batch-size 1 \
--num-steps 500 \
--fp16 \
--gradient-accumulation 4 \
--lr 1e-4

On Google Colab, you can add the --use-8bit-adam parameter and change the resolution to 512 if you are on the free tier.
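Put together, the free-tier Colab variant of the same training command looks like this:

!autotrain dreambooth \
--model stabilityai/stable-diffusion-xl-base-1.0 \
--output output/ \
--image-path images/ \
--prompt "photo of sks dog" \
--resolution 512 \
--batch-size 1 \
--num-steps 500 \
--fp16 \
--gradient-accumulation 4 \
--lr 1e-4 \
--use-8bit-adam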

Inference code:
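A minimal sketch of that inference with the diffusers library; the output/ path and the "sks" token come from the training command above, while the step counts and file name are illustrative assumptions rather than the exact code from the video:

import torch
from diffusers import StableDiffusionXLPipeline, StableDiffusionXLImg2ImgPipeline

# Load the SDXL base model and attach the DreamBooth LoRA weights
# that autotrain wrote to output/ during training.
base = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
base.load_lora_weights("output/")

# The refiner runs an img2img pass over the base output to enhance quality.
refiner = StableDiffusionXLImg2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-refiner-1.0", torch_dtype=torch.float16
).to("cuda")

prompt = "photo of sks dog"
image = base(prompt=prompt, num_inference_steps=30).images[0]
image = refiner(prompt=prompt, image=image, num_inference_steps=15).images[0]
image.save("sks_dog.png")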

Please subscribe and like the video to help keep me motivated to make awesome videos like this one. :)

Follow me on:
Comments

Please subscribe and like the video to help keep me motivated to make awesome videos like this one. :)
The training part is in the video description!

abhishekkrthakur

Doesn't work. Memory error in the last step. Tried on Colab as well as on a GCP office machine.

magicmushroom

A beginner's question, maybe: how can I train the model on different things, like this dog example you shared? How can I build on that and add more concepts to the same model?

ahmedtremo
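One common approach, sketched below as an assumption rather than something shown in the video, is to train a separate LoRA per concept, each on its own image folder and with its own rare token, and then load whichever LoRA you need at inference. The cat-images/, output-cat/, and "zwx" names here are hypothetical:

!autotrain dreambooth \
--model stabilityai/stable-diffusion-xl-base-1.0 \
--output output-cat/ \
--image-path cat-images/ \
--prompt "photo of zwx cat" \
--resolution 1024 \
--batch-size 1 \
--num-steps 500 \
--fp16 \
--gradient-accumulation 4 \
--lr 1e-4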

Thanks for getting this up so quickly!!

A couple of notes:
1. It's !autotrain setup --update-torch (not upgrade)
2. I'm getting the error RuntimeError: "LayerNormKernelImpl" not implemented for 'Half' when trying to run DreamBooth training, specifically after '> INFO Computing text embeddings for prompt: subject wearing sks' is printed

Any ideas?

the_hero_shep
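For anyone hitting the same error: PyTorch raises "LayerNormKernelImpl" not implemented for 'Half' when an fp16 op runs on the CPU, which has no half-precision LayerNorm kernel. A quick sanity check, not from the video:

import torch

# False means the run fell back to the CPU, where --fp16 cannot work.
print(torch.cuda.is_available())

# If it prints False, fix the GPU runtime (or CUDA install); if you really
# must train on CPU, drop the --fp16 flag so everything stays in fp32.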

I can't make it work on Kaggle!

The --output parameter doesn't exist, and the --project-name parameter is required.

I have solved those two parameter problems, but now I get this error:

RuntimeError: "LayerNormKernelImpl" not implemented for 'Half'

jva_youtube

Very helpful tutorials on the latest updates, Abhishek, thanks!!

rushirajparmar

Great tutorial! So if I'm following the idea here, you are training an object (the dog in the pictures) so that it can be prompted into different styles. But what if you want to train a style that you can apply to any object? How does the training differ if you want the model to output a specific style across different objects?

JoniRuotsalainen

I don't see the option to select DreamBooth after creating a new Space with Docker and AutoTrain. It's running on an A10G, but the backend options are totally different and there's no option to select the project type. The only tabs available to me are LLM, text, and tabular.

eughbrother

Can you help me?


I'm trying to use the Hugging Face "AutoTrain_Dreambooth" notebook on Kaggle, with two T4 GPUs, to train faces on SDXL.
It worked, but it's only using one GPU.
I tried to launch with accelerate, changing !autotrain dreambooth \
to !accelerate launch autotrain dreambooth \
but this error message appears:
"can't open file 'kaggle/working/autotrain': [Errno 2] No such file or directory"

How can I fix that?

Or is there another way to force the notebook to use two NVIDIA T4s?

Sorry to bother you.

ai_and_gaming
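On the "No such file or directory" part: accelerate launch expects the path to a Python script, while autotrain is a console entry point on the PATH, so a bare "autotrain" is looked up as a file in the working directory. One possible workaround, untested here, is to resolve the entry point's real path first:

!accelerate launch --multi_gpu --num_processes 2 $(which autotrain) dreambooth \
--model stabilityai/stable-diffusion-xl-base-1.0 \
--output output/ \
--image-path images/ \
--prompt "photo of sks dog" \
--resolution 1024 \
--batch-size 1 \
--num-steps 500 \
--fp16 \
--gradient-accumulation 4 \
--lr 1e-4

Whether autotrain's DreamBooth trainer then actually shards the work across both T4s depends on its internals, so treat this purely as a fix for the launch syntax.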

How can we train a DreamBooth model with multiple "characters"? Please guide me.

VikashKumar-wvb

Hi everyone! I have a newbie question.
AutoTrain advanced CLI: error: unrecognized arguments: --output model/
How do I fix this error and specify the output model?

apomazkov

FYI: the model training took around 20 minutes with 16 GB of GPU memory.

abhishekkrthakur

Thank you for a great video. I ran into a problem: when training ends and it's time to save the LoRA weights and checkpoint, I get CUDA out of memory. Do you have any idea how to fix it?

Solution: use --checkpointing-steps to save only the LoRA weights.

alexalex-lzsg

Amazing work! I am going to try this on a local machine. I am buying a new GPU for this, and I will also need it for local LLaMA; which GPU do you think will work best? I am also going to try this on an FPGA next, just to see how it performs.

osama

Holy shit. Can you do another one in A1111 or something?

sitr

Thanks for this nice tutorial and the autotrain-advanced tool. I am able to reproduce everything you do. One slight question, though; I'm new to ML and SD, so I don't understand everything in detail. What we train here is just a LoRA weight, right? Not a full checkpoint (since the size is also very small). I've copied this file into the "loras" directory in ComfyUI, set the base model to SDXL, and loaded this one as a LoRA; however, it doesn't seem to do anything when I prompt "sks man". I just get a random image and hundreds of "lora key not loaded" errors. Is this not supposed to work as a standalone LoRA?

MuhammetCan

Is a 4070 Ti with 12 GB enough? I've seen that the recommended setup is 16 GB of VRAM. I don't care if it takes longer; what I don't want is to run into too many OOM errors.

pedroj.martinez

How can I add different objects to the same model, like a cat, dog, car, etc.?

arsalanarsalan

In the last step, it shows me an error on the first line: "diffusers" not installed. What should I do? I am new to this.

PingPongReview

I see that there is a --resume-from-checkpoint parameter. Can we continue training a model that we started in another Colab session? I trained one model for 1300 steps, but I think it needs more training. If I can resume from this checkpoint, I will not spend hours reprocessing the data. I tried adding this parameter in Colab and locally, but I always get an error.

vokbr
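For reference, a sketch of what resuming might look like, assuming the first run saved checkpoints via --checkpointing-steps and the output directory survived the session; the checkpoint-1300 directory name follows the usual accelerate/diffusers convention and is an assumption:

!autotrain dreambooth \
--model stabilityai/stable-diffusion-xl-base-1.0 \
--output output/ \
--image-path images/ \
--prompt "photo of sks dog" \
--resolution 1024 \
--batch-size 1 \
--num-steps 2000 \
--fp16 \
--gradient-accumulation 4 \
--lr 1e-4 \
--checkpointing-steps 500 \
--resume-from-checkpoint output/checkpoint-1300

On Colab the checkpoint directory has to live somewhere persistent (for example, mounted Google Drive), since the VM's local disk is wiped between sessions.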