DREAMBOOTH: Easiest Way to Train an AI Model for Stable Diffusion

Update:
If you want to use a non-standard model for 1.5 training, you can grab the name from Hugging Face, such as XpucT/Deliberate, and use the word "main" for the branch.
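
For anyone loading such a model outside the notebook, here is a minimal sketch (not from the video) of how that repo name and branch map onto the diffusers library in Python; the half-precision dtype and CUDA device are assumptions for illustration.

# Minimal sketch: load a non-standard SD 1.5 model from Hugging Face
# by repo name and branch (illustrative only, not the notebook's code).
from diffusers import StableDiffusionPipeline
import torch

pipe = StableDiffusionPipeline.from_pretrained(
    "XpucT/Deliberate",   # repo name copied from Hugging Face (user/model)
    revision="main",      # the branch name
    torch_dtype=torch.float16,  # assumed half precision to save GPU memory
)
pipe = pipe.to("cuda")    # assumes a CUDA GPU is available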

Instead of putting the trigger word in for the class, you can use something like "photo of a man" or "photo of a person" instead; I've had better results with that.

I still recommend 1600 steps.
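
If you would rather train outside the Colab notebook, the same ideas map roughly onto the diffusers DreamBooth example script as sketched below. This is an assumption-laden illustration, not the notebook's exact code: the base model id, directory paths, and batch size are placeholders, and rkkgr stands in for whatever unique trigger word you pick.

# Rough sketch, assuming diffusers' examples/dreambooth/train_dreambooth.py:
# the trigger word goes in the instance prompt, the generic class description
# ("photo of a man") goes in the class prompt, and 1600 is the step count above.
import subprocess

subprocess.run([
    "accelerate", "launch", "train_dreambooth.py",
    "--pretrained_model_name_or_path", "runwayml/stable-diffusion-v1-5",  # placeholder 1.5 base
    "--instance_data_dir", "./training_images",   # your own photos of the subject
    "--instance_prompt", "photo of rkkgr man",    # unique trigger word + subject
    "--class_data_dir", "./class_images",
    "--class_prompt", "photo of a man",           # generic class, no trigger word
    "--with_prior_preservation",
    "--resolution", "512",
    "--train_batch_size", "1",
    "--max_train_steps", "1600",
    "--output_dir", "./dreambooth-model",
], check=True)

At generation time you then include that same trigger word in your prompt (e.g. "photo of rkkgr man on a beach") so the model draws the subject it was trained on.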

This is the easiest way to train an AI model with or without a GUI for Stable Diffusion.

I wanted to try a similar thumbnail to Aitrepreneur as I really love the way they do thumbnails.

Other Great Resources:
Comments

This is the only process that gave me results, thx so much, ur da goat!

norbzys

Hey! Great tutorial. I wanted to ask in-depth about what I need to do with AI training and see if you can give me a hand. I've been generating 3D models of some characters and also making scenes with them. For example, one running. I've been looking for a way to create these scenes without having to 3D render each one. So, I've tried putting images of these characters in AI to make scenarios using them as a base, but I haven't been successful. What would you say is the best approach to solve this problem? Is it even possible to achieve what I'm asking with AI? Thanks a lot for your response.

sdkjasdnap

Any idea why I get the "MessageError: RangeError: Maximum call stack size exceeded." error when uploading images for training?
Edit: the issue was coming from Safari; can't upload images with Safari... great.

jonathaningram

Thx for the vid Russell. At 05:50 I understand how to use a trigger word in prompting (I'm using Auto1111 locally), but when training my LoRAs, I don't understand where to _set_ the trigger word. I'm confused by what you're saying here, that you went back and "used the trigger word rkkgr". Where did you do that? Where / how did you set it? Is the trigger word the Instance Prompt? I can see how you later _used_ that trigger, but not where you actually set it.

salacious

Hey! What a great video, Russell! Thank you!
I have a question: why is Colab better than just running Stable Diffusion locally? Maybe I just didn't understand something in the code and so on, but the interfaces look similar...

BucharaETH

5:55 — where did you use the trigger word, and what is the word exactly? It is hard to understand. Thanks

jonhylow

How do I train with a different model?

For example, what if I want to train with the Chilloutmix or Deliberate model?

Is there a way to do that? 😃

Thank you

nicoasmr

Does this technique only work for creating a person? Can I use this to create something like an architectural design? Or maybe something like a normal map for a skin texture?

DrysimpleTon

Is it possible to run this without having a GPU, or on a virtual machine with just a CPU?
I have images which mostly look similar; is it better to have variety in the dataset, or does it also work with similar-looking data?

SwathiK-cvwq

How do I fix "404 Client Error: Not Found for url (name of the model git)"? Only the standard Stable Diffusion model works fine for me.

Juninholara

So when you train your own images does it go into their data set?

brandonharper

Is it possible to use a model from Civitai or some other external site? Hugging Face doesn't have the best models.

mdohdco

Could it be that you are getting album covers because your class_prompt isn't saying that it is a person?

ClareDx

Great tutorial! My example images have been coming out looking nothing like the pics I used. I used 23 pictures and tried 800, 1600, and 2300 steps, and none have produced results that look like the pictures.

jerryjack

What do you mean by using this to train as a starting base? Do you train it further on something else after this?

olvaddeepfake

Hi, does anyone know how to fix this error?

ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
torchdata 0.6.1 requires torch==2.0.1, but you have torch 2.0.0+cu118 which is incompatible.
torchtext 0.15.2 requires torch==2.0.1, but you have torch 2.0.0+cu118 which is incompatible.
Successfully installed torch-2.0.0+cu118 torchaudio-2.0.1+cu118 torchvision-0.15.1+cu118
WARNING: The following packages were previously imported in this runtime:
[nvfuser, torch]
You must restart the runtime in order to use newly installed versions.

chiaowork

Hi sir, I am from India and I have been searching for this type of tutorial for a long time. Thank God I finally found your channel... Do we have to pay for DreamBooth?

prathameshmoree

Hi, I have a problem. When I click the play button, it says that I have a FetchError. What do I do?

bernadettpapis

There is something missing here imho. Where did the tags come from? Is SD adding these images into its premade models then? Sorry for the wrong terminology here; I'm still trying to figure out the architecture behind SD.

blackkspot

People always use faces to demonstrate this process, but it'd work for anything right? Power Rangers, cactus plants, fish, buildings, etc?

MarkArandjus