NEW! ControlNet 1.1 No Prompt Inpainting.

Let's look at the smart features of ControlNet 1.1 inpainting in Stable Diffusion.

FREE Prompt styles here:

Comments

Fantastic, ControlNet really has made Stable Diffusion glorious! Thanks for the video, Seb!

RuneWlf

As far as I understand, the point of the inpainting ControlNet is that it works with non-inpainting models. You can take any model and inpaint more efficiently without having to retrain or do a hacky inpaint model merge.

nio
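The point above can be sketched in code. This is a minimal NumPy-only illustration, with my own function name, of the control-image preparation the inpaint ControlNet relies on: masked pixels are flagged with -1.0, a value outside the normal 0–1 range, so the network can tell "fill this region in" apart from real image content without the base checkpoint needing an inpainting variant.

```python
import numpy as np

# Hypothetical helper (not the extension's actual code): build the
# conditioning image for an inpaint ControlNet. Unmasked pixels keep
# their normalized values; masked pixels are set to -1.0 as a sentinel.
def make_inpaint_condition(image: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """image: HxWx3 uint8, mask: HxW uint8 where 255 means 'repaint'."""
    cond = image.astype(np.float32) / 255.0
    cond[mask > 127] = -1.0  # masked region: "no content here, generate"
    return cond

img = np.full((4, 4, 3), 200, dtype=np.uint8)  # plain grey test image
msk = np.zeros((4, 4), dtype=np.uint8)
msk[:2] = 255                                  # repaint the top half
cond = make_inpaint_condition(img, msk)
```

Because the base model itself is untouched, the same checkpoint serves both normal generation and inpainting, which is exactly why no retraining or model merge is needed.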

The portraits were bad examples. If all you have is a blurry background, you'll get more blurry background. It needs something to work with. Your last example is the best because the masked area actually had some content to work with and the opposite side of the image also had something for the AI to utilize instead of just more blurry nothingness that the portraits had.

Repopper

I'm sure there's a place for this, but I honestly don't see this producing results better than what you'd get with normal inpainting when used with a good prompt. Moreover, the addition of the Photopea extension now means we have precise masking control as well as photo bashing when doing inpainting, which is such an amazing combination.

SteveWarner

It's not the most polished, as some people said in the comments, but I'm glad they're working on something like this. The less we need Adobe the better, and it could become a great feature in the future.

Thank you for bringing us news from the AI world 😊

Also, I know it's not AI-related, but I'd like to know your opinion on Affinity: is their stuff good, and do you prefer them over Adobe? 😁

kpwkpw

Thank you! At 1:49 your mask excluded the reflection in the water, which is what caused the oddness in the generated images.

CBikeLondon

Seems useful for fixing small details with surrounding context.

vdinh

I had assumed inpainting did this already but now can see why I struggled with it sometimes.

jibcot

Wow, I'll try to use this. Thank you for the example 👍

Andommard

Nice update -- how would you approach outpainting with this?

sazarod

The results were... OK-ish? I didn't see anything that justifies the claim that ControlNet "Inpainting Just Got Better!", as the title says... mmmm

zvit

This is cool stuff. Your videos are always a firehose of information. Others make vids that are long and drawn out and don't get to the point. With yours, I pause and enlarge the screen to see what's going on and follow along.

emmettbrown

Well, I don't see any improvement in ControlNet inpainting versus regular inpainting. What's the point of that? And by the way, the Deliberate model also has an inpainting version, which is very good.

ivansmirnov

Came for your guides and updates about stable diffusion, stayed for your Dad jokes/puns 😂

candyboi

8:35 Here's a woman with 6 fingers. Should we inpaint the hand? Nah, that's too hard. Let's remove this tree....

nathanbanks

Very good results with the Protogen models.

hatuey

Hello, thanks for the interesting video. I have a couple of questions about it:
1 Did you compare the ControlNet inpaint model and the Deliberate inpaint model? If you did, which one is preferable?
2 Do I understand correctly that the ControlNet inpaint model was created for models that don't have their own inpaint version?

Artazar

Thank you, very helpful. Is there a tutorial on uploading a mask image instead of drawing it?

boraturkoglu

Thank you so much for your tutorials, but I can't find anything online that helps with batch img2img with inpainting for video. How do I get a dynamic mask for a PNG sequence? That would be infinitely helpful. Thanks in advance.

scrydedoria

There's another cool feature you forgot to show off: by playing around with the resolution, you can also outpaint with this!

theresalwaysanotherway