Stable diffusion tutorial - How to use Two or Three LoRA models in one image without in-paint

#stablediffusion #stablediffusiontutorial #stablediffusionai

🚨 Attention! 🚨
The background color for the mask is wrong in the video. 🎭🎥
Kindly use black as the background color for the mask. ⚫️

See this post:

📷 Pnginfo used in the video 🔍

☕️ Please consider supporting me on Patreon 🍻

👩‍🦰 LoRA model 👇🏻

🌐 sd-webui-additional-networks ✨

Using two or three LoRA models simultaneously in a single image, without inpainting, can be a daunting task for anyone interested in Stable Diffusion and AI art.

But fear not! If you're passionate about Stable Diffusion and advanced LoRA training in the realm of AI art, we have the solution you've been looking for.

In this YouTube video, we'll reveal the secrets of effectively integrating two or three LoRA models into one image, without relying on inpainting, to unlock the true potential of your AI art.

So, if you're eager to take your Stable Diffusion skills and AI artistry to the next level with multiple LoRA models, stay tuned and let's embark on this exciting journey together!
Comments
Author

🚨 Attention! 🚨

The background color for the mask is wrong in the video. 🎭🎥

Kindly use black as the background color for the mask. ⚫ #VideoCorrection #TutorialUpdate #MaskingMatters 🎨🔧

See this post:

life-is-boring-so-programming

Thanks, I kind of gave up on Latent Couple and Composable LoRA; they were just underperforming. This method has ZERO mask bleeding even if the subjects are touching, works wonders!

Drone

It took me 6 MONTHS to finally find you! You are an angel sent from GOD HIMSELF (new subscriber)

butonphillie

Thanks for giving us this method guide.

I've tried two different methods (yours and regional prompts), which both support 2 or more different LoRAs in one pic.

When using regional prompts, the generation speed is slow, and I need to keep experimenting to achieve the best results. However, the LoRAs don't consistently adhere to the specified regions for depicting characters. For example, if I use a 1:1:1 ratio to generate an image with an object + LoRA character A + LoRA character B, the result I obtain could be object + LoRA character A + LoRA character A, or it might be object + LoRA character B + LoRA character B. The composition ratios might not even match the intended 1:1:1 ratio.

Here's my testing prompt using regional prompts:
realistic, photo, 2 persons walking on the street, side by side, ADDCOMM
street stalls, ADDCOL
model_suesy, pink suit, <lora:model_suesy:0.5:1>, ADDCOL
model_dennis, short hair, white suit, <lora:model_dennis:0.5:1>

It seems that the TE weight isn't recognized by the regional prompt (only the UNet weight works well, and the results are not consistent).

On the other hand, with this method, most manual operations don't appear to have a waiting issue. And that's amazing.

However, I'm unsure why this method usually generates multiple heads and extra figures in the masked area when using my self-trained LoRAs, although it works fine with regional prompts, achieving about 70% accuracy. My LoRA consistently tends to fill the mask and alter the background, even when I use a fixed seed. Each time I modify the parameters, the image changes a bit regardless of its location. (Holding the seed will not help in this case and might even lead to misshapen limbs or fingers in some instances.)

And here is the test prompt (I'm using trigger words for my LoRA characters) using this method:

photo, masterpiece, model_suesy, model_dennis, sitting, smile, looking at viewer, european, canteen,

I tested for a whole afternoon and found the accuracy to be nearly 10% using the same self-trained LoRAs.

It "kinda" works, but I think regional prompts will do better for me.

jnsryzf

That tutorial was mind-blowing! Amazing. Please keep doing more!

shlomitgueta

Usually I play with the weights when I use LoRAs, no more than two. But I like this new method. This will help me a lot :). Now I will be able to use more than two LoRAs. Thanks!

Ultimum

Thanks, it works, and I learned a lot.
Now I need more time to make the result look good.

MrXXX

Really liked the video. I will definitely try this in the near future when I start creating the storyboard for our story 😊

tomimartikainen

You are straight to the point, thank you so much

butonphillie

Thanks for sharing this knowledge, subscribed. 🤗

NirdeshakRao

I can’t wait to try this. Have this video saved for watch later. I can’t wait!

SantoValentino

I have to say this is great. Loving it.

luozhan

This is what I want. Thank you master 🙏

ariftagunawan

Wow, that is pretty awesome, and you explained everything so clearly with good examples. Thank you.

hardstyleminded

You can make a symbolic link to use the existing LoRA models instead of duplicating them in the extension's folder.
Example, in a command prompt:
mklink /J
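
For reference, here's a sketch of what that junction could look like, assuming a default webui install under C:\stable-diffusion-webui and the extension's default models\lora folder (both paths are assumptions; adjust them to your own setup):

rem hypothetical default-install paths - adjust to your own setup
mklink /J "C:\stable-diffusion-webui\extensions\sd-webui-additional-networks\models\lora" "C:\stable-diffusion-webui\models\Lora"

The /J switch creates a directory junction, so the extension sees the webui's existing LoRA files without copying them; note that the junction path (the first argument) must not already exist, so remove or rename the extension's empty lora folder first.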

frx

In my case, the Additional Networks tab does not send the model to the txt2img window's Additional Networks section.

misterenigma

Thanks a lot for this upload... was wondering if you could make a tutorial on sd-cn-animation as well? ❤

ai.ai.captain

I decided to try out Latent Couple and Composable LoRA and with a bit of tweaking it works well, much simpler than this mess. So I guess it's just a skill issue/not learning how to actually use the extensions.

sevret

The result wasn't great, but it can definitely be improved with inpainting. Thanks!

RikkTheGaijin

That AI-generated voice is still not there yet...

advaitbhore