ControlNet Reference-Preprocessor Comparison, Settings, Troubleshooting, and Style Transfer Workflow

In today’s video, I will teach you everything you need to know about using the ControlNet Reference Preprocessors for Stable Diffusion in the Automatic1111 Web GUI.

Starting simple, I show very basic usage of the tool, then quickly move on to a comparison of the three Reference Preprocessors to identify what each does and doesn't do.

Next, I get technical and identify which settings, such as Style Fidelity, are key to using ControlNet Reference. To round things out, I provide solutions to common problems with ControlNet Reference so you can maintain image quality while pushing it to the limit.
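As a rough mental model of the Style Fidelity slider (an assumption about how the extension behaves, not a quote from its source): in Balanced control mode it acts like an interpolation weight between the reference-influenced result and the plain, un-referenced result. A hypothetical sketch:

```python
def blend_style_fidelity(with_reference: float, without_reference: float,
                         style_fidelity: float) -> float:
    """Hypothetical linear blend: 1.0 = full reference influence,
    0.0 = ignore the reference entirely. The real extension applies
    this kind of mix inside the attention layers, not on final pixels."""
    return style_fidelity * with_reference + (1.0 - style_fidelity) * without_reference
```

This picture explains why high Style Fidelity values copy the reference more literally, but can also import its artifacts along with its style.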

Finally, I show example workflows for changing style and for creating variations of an initial image.

Intro - 00:00
Preprocessor Explanation and Comparison - 01:45
ADAIN - 02:00
Reference Only and ADAIN+ATTN - 04:20
Preprocessor Review and Pros and Cons - 05:46
Settings for ControlNet Reference - 06:25
Ending Control Step - 07:15
Style Fidelity - 07:40
Weight - 09:02
Troubleshooting Blurriness - 09:50
Troubleshooting Excessive Brightness or Darkness - 10:41
Workflows, the law of equivalent exchange - 12:49
Workflows, model selection - 13:39
Workflow, Style Change - 14:19
Workflow, Image Variation - 16:39
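For context on the comparison above: the ADAIN preprocessor is built on Adaptive Instance Normalization, which re-scales feature statistics to match the reference. Below is a minimal NumPy sketch of the core operation, illustrative only; the actual extension applies it to feature maps inside the U-Net, not to pixels:

```python
import numpy as np

def adain(content: np.ndarray, style: np.ndarray, eps: float = 1e-5) -> np.ndarray:
    """Adaptive Instance Normalization: shift and scale the content
    features so their per-channel mean and std match those of the
    style (reference) features. Arrays are shaped (C, H, W)."""
    c_mean = content.mean(axis=(1, 2), keepdims=True)
    c_std = content.std(axis=(1, 2), keepdims=True) + eps
    s_mean = style.mean(axis=(1, 2), keepdims=True)
    s_std = style.std(axis=(1, 2), keepdims=True) + eps
    # Normalize content to zero mean / unit std, then re-apply style stats.
    return (content - c_mean) / c_std * s_std + s_mean
```

Because it only transfers per-channel statistics, this kind of operation carries broad color and tone but no spatial structure, which matches the behavior difference between reference_adain and the attention-based variants discussed in the video.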
Comments

Finally a channel that is actually teaching things intermediate users need to know. I'm so sick of every video showing how to install SD/Controlnet at the start for padding and then not much else of substance.

CheckTheWiki

As always, thanks for providing this knowledge for free to the small "deep divers" audience.
Your work saves us precious time on questions we didn't dare to research.
I hope you keep on experimenting and stay motivated.

lefourbe

Yay! A channel providing high quality SD content. Subscribing!

msampsond

Your style reminds me a lot of the "Battle(non)sense" YouTube channel, which is great: comprehensive and to the point. I love your content.

jamarti

The community misses you bro, your vids are still in a league of their own!

BabylonBaller

I love this channel. I use ComfyUI, but everything here I can translate over to it. Thank you for these very informative videos.

patricklearn

An incredible amount of work, I should say! Pure madness to pull off in only two weeks!

lefourbe

Thank you so much for all the content you have provided! I never thought about using hires fix without upscaling!

RieslinZ

Thank you! I've been struggling with this tool and needed an in-depth overview.

lioncrud

Just found your channel - Great stuff!

mrti

you are the boss! (I'd love to see your take on outpainting:1.5)

DanailDichev

Best episode yet, very useful info, thanks! Darksushi is a borked model that's overtrained on anime; you can't do photographs with it, which is probably the reason for the overbaking.

krystiankrysti

Re: your section on excessive brightness or darkness. My experience is that this tends to happen when ControlNet is "fighting" against what the model/prompt would usually want to produce, i.e. both sides are pulling in different directions.

I've tended to solve this by improving the prompt so that the model's "standard" output more closely resembles the controlnet image, or by simplifying the prompt to give more flexibility and remove the conflict of model vs controlnet. Your solution also works of course; weakening the "strength" of either side of the fight lets one of the sides win more easily.

Aertuun

I use reference to light my images. I found that using images with the best dynamic range gives the best results.

danishraza

Thanks for the tutorial. Short, but every second of the video is knowledge. I took a lot of notes from this 1 video.

I would like to ask why you think hires fix is necessary for txt2img. I haven't used SD enough to make a call, but from a professional designer's point of view, and trying it on my not-too-cheap PC, adding this step significantly increases render time. So I'm inclined to think that the fine-tuning can be moved to img2img. But again, I'm just a baby in SD; what do I know. Please explain a bit if possible. Thank you, liked and subbed.

The XYZ plot is amazing: one shot and we get to see all of them. Perhaps this is why you suggest using hires fix every time. The only issue is, I can totally feel the difference in terms of the wait.

ekot

10:00 For this part we can fix things by adjusting the prompt like you said, and maybe by adjusting other things like the VAE, or a LoRA to darken or lighten the image without destroying the composition, plus some CFG rescaling like the "Neutral prompt" extension. There is a similar setting in the WebUI as well, but the dev said that one is a bit better (I forget where it is in the settings tab. xd)

18:47 Maybe we can use ControlNet segmentation here, idk.

Great content as always. Ty and have a nice day. (I'm done rewatching this, and I'll probably watch it again whenever I forget something, as will most likely happen. xd)

MrSongib

I've been trying to do Tiled Diffusion with the MultiDiffusion upscaler extension, and all I ever get is multiple images forced together ungracefully. A video on that extension and the tile ControlNet model would be awesome. And you're awesome too, thanks for the videos 😊

philiproberts

Do you know if there's a way to use reference-only without using a prompt? I have an image and I want it replicated, but I don't want to prompt it.

aneilatbimi

Cool content but hard to follow; the editing jumps around as always, and it's kind of hard to focus on what you're talking about versus what I'm actually seeing at the moment. Idk how to explain it. I think I need to rewatch this later. Ty and have a nice day.

MrSongib

Joke's on you - I _want_ human-animal abominations.

Lishtenbird