The Truth About Consistent Characters In Stable Diffusion

The Truth About Consistent Characters In Stable Diffusion... It's not 100% possible without training LoRAs or Dreambooth models, or without a convoluted process. However, with ControlNet reference we can get very close, and the Roop extension lets us use real photos to expand on the method. In today's video I'll show you how to get to that point without any training, and in a part 2 to come we'll look at improving hands and faces, plus some post-production techniques, to get closer to that consistent character goal!
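For anyone scripting this workflow instead of clicking through the web UI, the ControlNet-reference step can be sketched as a request body for the AUTOMATIC1111 txt2img API. This is an assumption-laden sketch: the unit fields ("module", "image", "weight") follow the ControlNet extension's API and may differ between extension versions.

```python
import base64
from pathlib import Path

def reference_payload(prompt: str, ref_image_path: str, seed: int = -1) -> dict:
    """Build a txt2img request body with one ControlNet reference_only unit.

    Key names follow the AUTOMATIC1111 web UI API with the ControlNet
    extension installed; exact field names can vary between versions,
    so treat this as a sketch rather than a guaranteed schema.
    """
    ref_b64 = base64.b64encode(Path(ref_image_path).read_bytes()).decode()
    return {
        "prompt": prompt,
        "seed": seed,          # fix the seed to keep results repeatable
        "steps": 25,
        "alwayson_scripts": {
            "controlnet": {
                "args": [{
                    "enabled": True,
                    "module": "reference_only",  # preprocessor; no model file needed
                    "image": ref_b64,            # your clean reference render
                    "weight": 1.0,
                }]
            }
        },
    }
```

The dict would be POSTed to the web UI's txt2img endpoint with the API enabled; nothing here sends a request, it only assembles the body.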

Random name generators
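The random-name trick (an invented name acts as a stable token that pulls the model toward one face) can be sketched in a few lines. The syllable pools below are arbitrary examples, not from the video:

```python
import random

# Example name pools; any invented-sounding names work.
FIRST = ["Mara", "Elior", "Tessa", "Joren", "Livia", "Calder"]
LAST = ["Vexley", "Armond", "Quillen", "Hastor", "Brennick", "Solvay"]

def random_character_name(rng=None):
    """Return an invented 'First Last' name to reuse in every prompt."""
    rng = rng or random.Random()
    return f"{rng.choice(FIRST)} {rng.choice(LAST)}"

# Generate once, then paste the same name into every prompt.
print(random_character_name(random.Random(42)))
```

Generate the name once and keep it fixed; regenerating per prompt defeats the purpose.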

⏲Time Stamps
0:00 The truth about consistent characters in stable diffusion
0:13 Start with a good model and consistent faces
1:13 Create images and develop your look
1:58 Use ControlNet Reference
3:35 Same character different background
4:25 Using real photos and Roop extension
6:17 Experiment and create!

**Disclaimer Affiliate Links Below**

📸 Gear I use

🎵 Epidemic Sound

🔦 Find us on:
Comments

Just to clarify, my goal is not to have to train LoRAs or Dreambooth models to achieve consistency; I'm well aware of those options. The problem is that they're not accessible to everyone and are difficult for most people to do. Do you have any tips for consistent characters?

MonzonMedia

Just finished the video. I'll be experimenting with ControlNet before long, so I'm bookmarking this one for when I do. Glad you touched on what few people do, which is that 100% perfection with Stable Diffusion really isn't possible. The whole point is getting as close as possible to the original, which you touched on right at the beginning! Well done!

DasLooney

Haha, love how you point out not to notice the hands in your first gen, and yet they're nearly perfect, something I pretty much never get my first time around.

hehe-k

Very nice way of explaining, simple yet detailed. I just started with AI generation, and for face consistency I use After Detailer with a lot of success, but usually only on the face. I'll add ControlNet to the workflow for hopefully more consistency in the clothing. The last challenge would be a consistent environment: if I describe a location, it still gives me a variety of backgrounds that don't really match.

emileklos

Love it. It's often saving me from having to train a LoRA.

Shabazza

Great video! This helps a lot. Also, I hadn't tried Roop before. That's returning some pretty good results for me. Thanks again!

thedanielblack

In the equation for consistent characters, I use variables like age and body type; that helps a lot.

jdesanti
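The tip above about treating age and body type as fixed variables can be sketched as a reusable prompt template. The character attributes below are made-up examples, not from the video:

```python
# Pin the character's variables once and reuse them in every prompt.
CHARACTER = {
    "name": "Mara Vexley",        # invented name, reused every time
    "age": "28 year old",
    "body": "slim athletic build",
    "hair": "short red hair",
    "eyes": "green eyes",
}

def character_prompt(scene, character=CHARACTER):
    """Compose a prompt that always describes the same character."""
    traits = ", ".join(character[k] for k in ("age", "body", "hair", "eyes"))
    return f"photo of {character['name']}, {traits}, {scene}"

print(character_prompt("walking through a rainy city street"))
```

Only the scene changes between generations; the identity block stays word-for-word identical.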

This is a good place to start; it saved me a lot of time.

BrettArt-Channel

Hey Man... Great Tutorial.. I learned some new techniques.. 😎✌🏻
Thanks 💪🏻

Onur.Koeroglu

Wow, another great tutorial. Who would have thought that using non-existent names would be so helpful?
One of the many errors I and many others got when installing Roop is that a component was deprecated, with a link to some technical info to read. Not useful to those of us who need the one-click installs you explained so well. Besides Roop, there are other projects that do the same thing (FaceSwapLab, sd-webui-roop, Gourieff/sd-webui-reactor, etc.).

DrDaab

Mixing many random names will give you the model's default average face. Every model has one. It's affected by race and age, but it's there. If you want a different face, I suggest mixing celebrities; two are usually enough, given a weight of 0.5 each. Or do an X/Y/Z plot sweep to find what you're looking for. Not only is the face consistent, you can also control facial features this way.

DrSid
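The mixing tip above maps onto the web UI's `(token:weight)` attention syntax. A minimal helper, with placeholder names rather than real suggestions:

```python
def mix_faces(names, weight=0.5):
    """Join names with A1111-style '(token:weight)' attention weighting."""
    return ", ".join(f"({n}:{weight})" for n in names)

# Two names at weight 0.5 each, as suggested in the comment.
prompt = "portrait of " + mix_faces(["celebrity one", "celebrity two"])
print(prompt)  # portrait of (celebrity one:0.5), (celebrity two:0.5)
```

Varying the two weights (while keeping their sum near 1.0) is a cheap way to slide between the two source faces.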

Great video! You'll have to give Augie a try sometime :)

augiestudio

First off, great video. I love your pace and the explanation of your process. I've found great consistency in my models and images. However, I'm seeing a great deal of degradation in the quality of the images I produce: the initial reference image is clean and sharp, but the images derived from ControlNet come out less than great. Is there something I'm missing? I've double-checked my settings and even paused your video to compare. I'm using ControlNet v1.1.411 and SD 1.6 for my workflow.

WetPuppyDog

A hypothetical name just directs the seed.
It doesn't direct the seed any more than any other descriptive word would, and therefore it's fairly meaningless to include a name, IMO. Maybe I'm missing something, or there's something I'm not fully understanding.
What you could do instead is save some very KEY descriptive words in a document and make sure to always use those 3-10 descriptive words along with your seed. The character should look the same every time unless you change the LoRAs you're using. LoRAs cause your seed to be interpreted differently.

ExplorewithZac
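The keep-your-key-words-in-a-document idea above can be made concrete by persisting the keywords and the fixed seed together; the file format and keywords here are illustrative assumptions:

```python
import json
from pathlib import Path

def save_character(path, keywords, seed):
    """Store the key descriptive words and the seed they were tuned with."""
    Path(path).write_text(json.dumps({"keywords": keywords, "seed": seed}))

def load_prompt(path, scene):
    """Rebuild the same prompt/seed pair for every new render."""
    data = json.loads(Path(path).read_text())
    return ", ".join(data["keywords"]) + ", " + scene, data["seed"]
```

Reusing both halves together (the exact keyword string and the exact seed) is what keeps the look stable across sessions.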

Can you do a similar video on achieving great consistency, including clothes, but using Fooocus instead? What should I do in that case?

zoezerbrasilio

Can this also be done with Fooocus? If so, what are the best base model, refiner, and LoRA to use?

GhettoDragon_

Thank you as always. I succeeded in changing the face with Roop; is there a way to change the outfit and hairstyle naturally as well?

타오바오-hl

Hi MM! Could you please teach us a similar technique for when we have two characters, in order to keep consistency for both?

GeorgeLitvine

Hold on, this is like the ones you sent on the messenger group, right?

falsettones

Hi! The hands in your pictures were normal. How did you do that? Is it down to the pre-trained model? I've used other models and always get weird fingers.

syu