Dreambooth BROKEN?! || Quick-Fix with These Parameters (Stable Diffusion/RunDiffusion)

In this episode, I will discuss in detail the Dreambooth quick fix with these parameters (Stable Diffusion/RunDiffusion).
Dreambooth has been broken for some time! Change these parameters and it works again - guaranteed to work for Dreambooth training in Stable Diffusion. An easy-to-follow, step-by-step tutorial on how to train your own model with Dreambooth in Stable Diffusion.

This video was made with the Wacom Cintiq Pro 32, MacBook Pro M1 Max, RunDiffusion, Stable Diffusion, Atem Mini Pro, Adobe Premiere Pro, After Effects, Adobe Illustrator, and Screenflow.
-------------------------------
SUBSCRIBE
▶ You can subscribe to our channel here:
▶ RunDiffusion Promo Code: levendestreg15
-------------------------------
My setup
-------------------------------

Watch Our Top Videos:

✅ Midjourney 5.1 - MAGIC RECIPE for the best prompts!

✅ Midjourney - This will change how you write prompts! (SECRET RECIPE)

✅ Midjourney - comic characters + the SECRET of fixing hands and eyes!

✅ THIS IS CRAZY!!! The perfect poses made easy! Multi ControlNet, PoseX, ......

✅ This changes everything! Control colors, people, and poses - Multi ControlNet ......

✅ ChatGPT 4 - You're doing it WRONG. Unleash the power of AI and Midjourney 5!

00:44 - Upcoming video on how to train with Leonardo AI.
01:16 - So first I am going to spin up a server on RunDiffusion.
01:44 - Step two: your dataset - your input images.
02:00 - In RunDiffusion, after your server has booted, the first thing you do is enter the username and password to log in.
02:24 - Create a new folder, name it DBFiles, and inside it create an "Input" folder where you put your images (see the folder sketch after the timestamps).
02:44 - In the Dreambooth tab, click the "Create a model" button.
03:20 - Step four: set the parameters in the Dreambooth settings.
03:28 - I have 20 images, so I'm setting it to 100 epochs. You might want to try up to 150 epochs.
03:39 - Set the preview and save frequencies to zero to speed up the training process. Right now I am using 25 and 10, so every 10th epoch it shows me an image of what it is training.
04:00 - Mixed precision, FP16 floating point, and xformers. But recently my code has been breaking with those settings, so I am setting them to none and default instead (summarized in the settings sketch after the timestamps).
04:16 - Go to the Concept tab and enter the path to where your input images are located.
04:51 - I have zero class images (but you can set 10 there), and these are my settings.
04:56 - In the Saving tab, name the model.
05:07 - I want to keep the file when training is canceled and when the training completes, of course.
05:17 - Step five: train the model and wait for the iterations to finish.
05:39 - Sometimes the status bar stops updating in Dreambooth. If that happens, just check the log file in the log folder to see whether the training is still proceeding (see the log-check sketch after the timestamps).
06:21 - You can tell that it's finished by checking the Stable Diffusion logs.
06:33 - Now step six: you need to test your model.
07:06 - Choose your new model in the dropdown menu (an API alternative is sketched after the timestamps).
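
For the folder setup at 02:24, here is a minimal Python sketch of the same DBFiles/Input structure, in case you prefer a terminal or notebook over the RunDiffusion file browser. The base path and the "my_photos" source folder are placeholders; adjust them to wherever your session keeps your files.

```python
from pathlib import Path
import shutil

# Placeholder base path: adjust to wherever your RunDiffusion session
# (or local webui install) keeps your user files.
base = Path.home() / "DBFiles"
input_dir = base / "Input"

# Create DBFiles/Input, the same structure built in the video's file browser.
input_dir.mkdir(parents=True, exist_ok=True)

# Copy the training images in; "my_photos" is a placeholder source folder.
source = Path("my_photos")
if source.is_dir():
    for img in source.glob("*.jpg"):
        shutil.copy(img, input_dir / img.name)

print(f"{len(list(input_dir.glob('*.jpg')))} images in {input_dir}")
```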
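
The settings discussed between 03:28 and 04:51 can be summarized in plain Python. This is only a summary of the values mentioned in the video, not the Dreambooth extension's real config fields (the key names below are descriptive labels, not actual parameters); the step arithmetic assumes a batch size of 1.

```python
# Values as described in the video (03:28 - 04:51).
# Key names are descriptive labels only, not the extension's actual config fields.
dreambooth_settings = {
    "training_images": 20,
    "epochs": 100,             # with 20 images; up to 150 is worth trying
    "preview_frequency": 25,   # set to 0 to speed up training
    "save_frequency": 10,      # every 10th epoch shows a preview image
    "mixed_precision": "none", # was fp16, which recently broke training
    "attention": "default",    # was xformers, which also broke training
    "class_images": 0,         # 0 in the video, though 10 is also reasonable
}

# One epoch is one pass over every training image, so at batch size 1 the
# total number of optimization steps is roughly images * epochs.
total_steps = dreambooth_settings["training_images"] * dreambooth_settings["epochs"]
print(f"~{total_steps} training steps")  # ~2000 for 20 images x 100 epochs
```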
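
For the stalled progress bar at 05:39, here is a small sketch that checks whether the newest log file is still growing. The "logs" directory is a placeholder, since the exact log location depends on your webui and Dreambooth install.

```python
import time
from pathlib import Path

# Placeholder: point this at the log folder mentioned in the video; the exact
# location depends on your webui / Dreambooth extension install.
log_dir = Path("logs")

# Pick the most recently modified file in the log folder.
log_file = max(log_dir.glob("*"), key=lambda p: p.stat().st_mtime)

# If the file keeps growing, training is still proceeding even though the
# progress bar in the Dreambooth tab has stopped updating.
size_before = log_file.stat().st_size
time.sleep(60)
size_after = log_file.stat().st_size

if size_after > size_before:
    print(f"{log_file.name} grew by {size_after - size_before} bytes: still training.")
else:
    print(f"{log_file.name} did not change in 60 s: check the Stable Diffusion logs.")
```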
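
For step six, if your webui instance exposes the AUTOMATIC1111 API (it must be started with the --api flag; whether RunDiffusion exposes it depends on your plan), you can also switch to the new checkpoint and render a test image over HTTP instead of using the dropdown. The URL, model name, and prompt below are assumptions; the checkpoint value should match the title shown in the dropdown.

```python
import base64
import requests

URL = "http://127.0.0.1:7860"    # assumption: a webui instance started with --api
MODEL = "my_dreambooth_model"    # hypothetical: the name given in the Saving tab

# Switch the active checkpoint -- the API equivalent of the dropdown at 07:06.
requests.post(f"{URL}/sdapi/v1/options",
              json={"sd_model_checkpoint": MODEL}).raise_for_status()

# Generate a test image; replace the prompt with your own instance token.
payload = {
    "prompt": "photo of my subject, detailed, studio lighting",
    "steps": 25,
    "width": 512,
    "height": 512,
}
resp = requests.post(f"{URL}/sdapi/v1/txt2img", json=payload)
resp.raise_for_status()

# The webui API returns images as base64 strings.
with open("test.png", "wb") as f:
    f.write(base64.b64decode(resp.json()["images"][0]))
print("Wrote test.png")
```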

#DreamboothBROKEN #LevendeStreg #Quickfixwiththeseparameters #StableDiffusion #RunDiffusion #Dreambooth #WacomCintiqPro32 #MacbookProM1Max #AtemMiniPro #AdobePremierePro #AfterEffects #AdobeIllustrator #Screenflow
Comments

Thanks for making such videos. But it is interesting that, regarding the concept settings, there is a lack of more detailed information on how to handle dataset descriptions and set them up properly. Yes, I understand the basics, but I have been experimenting with training for a while now, and every time I thought I had gotten a step further in refining my training method, I have had a setback that questions my whole approach.
What I mean specifically is how to train certain details, like a pose, or details like hands, to improve the model's capability. I had some success, but I am still uncertain which approach is good for that sort of effort. The training with class images also confuses me, even though I searched Google for hours and tried to understand how they actually influence training and what they should look like. I get better and much faster results not using them at all, but the downside is obvious: it is very easy to overtrain and butcher the model.

If you have more info about that aspect, please let me know. Or maybe a source I have not found yet?
Again, thanks for your video and have a nice day.
Cheers

madrooky

Hi! Could you please help? I am not able to figure out how to download the outputs to my PC locally.

randomprocess

Great video! Once you have created your trained model in Dreambooth, are you able to download it off the RunDiffusion platform to use locally?

charltonho

Hi, thanks for the video and the great content! But what about training an artist's style? I often see people training a person; I'm just wondering how many files in the dataset, and how much variety, would be needed.

ShiftCtrlNas

Are those .txt files seen in the source folder at all important for your model? Since they aren't mentioned further, I assume not?

TheElBudo

Would you happen to know how to train for free locally in Dreambooth on SDXL in Windows? If so, a tutorial would be appreciated.

jessecool

Yeah, it's curious how LoRAs interact with different models. I also wanted to ask about the source photos you choose; we try to keep it to around 12 photos and avoid body shots. I would love to see how you choose them.
Thanks for your amazing work ✊

camilovallejo

What's your take on Dreambooth vs LoRAs? We've worked with both and have had roughly the same results; LoRAs just seem more convenient 🙂

camilovallejo

I've been struggling with Dreambooth for months; interesting to see how well your model turned out. Wish I had enough VRAM to turn off xformers. Might follow your guide there. I would also be interested to know more about your dataset.

alexgilseg

I find it very difficult to create my realistic photos. It went wrong.

MrLaura

Congrats to you! For me, I have 12 GB of VRAM and always get bad results (different models used, such as Protogen 3.4, SD 1.5, etc.): datasets of 10 or 20 images, fp16 or bf16, constant or constant-with-warmup or linear schedulers, 0.1 or 0.2 rate, 1 or 0.5... nothing gives me good results, very strange!!

hatuey