ComfyUI simple Inpainting image to image #comfyui #stablediffusion #inpainting

ComfyUI simple Inpainting workflow
using a latent noise mask to change specific areas of the image

#comfyui #stablediffusion #inpainting #img2img
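For readers who want to rebuild the graph by hand, the latent-noise-mask setup the description refers to can be sketched in ComfyUI's API (prompt) JSON format. This is a minimal sketch under assumptions, not the video's exact workflow: the node class names are the stock ComfyUI ones, and "model.safetensors" / "input.png" are placeholder filenames.

```python
# Sketch of a latent-noise-mask inpainting graph in ComfyUI API format.
# Each wire is [source_node_id, output_index]; filenames are placeholders.
workflow = {
    "1": {"class_type": "CheckpointLoaderSimple",      # outputs: MODEL, CLIP, VAE
          "inputs": {"ckpt_name": "model.safetensors"}},
    "2": {"class_type": "LoadImage",                   # outputs: IMAGE, MASK
          "inputs": {"image": "input.png"}},
    "3": {"class_type": "VAEEncode",
          "inputs": {"pixels": ["2", 0], "vae": ["1", 2]}},
    "4": {"class_type": "SetLatentNoiseMask",          # mask = MASK output of LoadImage
          "inputs": {"samples": ["3", 0], "mask": ["2", 1]}},
    "5": {"class_type": "CLIPTextEncode",              # positive prompt
          "inputs": {"text": "a smiling face", "clip": ["1", 1]}},
    "6": {"class_type": "CLIPTextEncode",              # negative prompt
          "inputs": {"text": "", "clip": ["1", 1]}},
    "7": {"class_type": "KSampler",
          "inputs": {"model": ["1", 0], "positive": ["5", 0],
                     "negative": ["6", 0], "latent_image": ["4", 0],
                     "seed": 0, "steps": 20, "cfg": 7.0,
                     "sampler_name": "euler", "scheduler": "normal",
                     "denoise": 0.6}},
    "8": {"class_type": "VAEDecode",
          "inputs": {"samples": ["7", 0], "vae": ["1", 2]}},
    "9": {"class_type": "SaveImage",
          "inputs": {"images": ["8", 0], "filename_prefix": "inpaint"}},
}
```

With SetLatentNoiseMask, only the masked latent region is re-noised, so a denoise below 1.0 keeps the rest of the image untouched.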

workflow:
Comments

Thanks for providing the workflow! Massive Like

claudioestevez

Having experimented with various ways of using inpainting in ComfyUI, I have to admit this tutorial was the best one, with great results. I'm new to this platform and do struggle with custom nodes, but I'm glad to have found this channel.

robbiepacheco

Wow, finally someone willing to share their knowledge. Very rare. Thank you, sir, you've earned a new subscriber.

astafzciba

At 1:00, maybe change "VAE Encode" to "VAE Encode (for inpainting)" and ALSO attach the mask output from "Load Image" to the "VAE Encode (for inpainting)" node.
You will now have two wires coming out of the Load Image "MASK" output.
Otherwise it may make no changes, or almost none, as in my case.
The problem with this, though, is that if I set denoise below 1.00 it sees the mask as gray and I get a gray mouth, or gray whatever.

FusionDeveloper
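FusionDeveloper's suggestion above maps onto ComfyUI's built-in VAE Encode (for inpainting) node. A hedged sketch in API format, assuming stock node names; the node ids and wire sources are illustrative, not taken from the video's workflow.

```python
# Hedged sketch of the "VAE Encode (for inpainting)" variant.
# Wires are [source_node_id, output_index]; here ["1", 2] is assumed to be a
# checkpoint loader's VAE output, and ["2", 0] / ["2", 1] the IMAGE and MASK
# outputs of a Load Image node.
inpaint_encode = {
    "3": {"class_type": "VAEEncodeForInpaint",
          "inputs": {"pixels": ["2", 0],
                     "vae": ["1", 2],
                     "mask": ["2", 1],    # the same MASK output can also feed other nodes
                     "grow_mask_by": 6}},
}
```

As the comment observes, this node blanks out the masked region before encoding, so the KSampler's denoise is usually left at 1.00; lower values blend toward the gray fill.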

Thanks for the tutorial! I had no idea I could inpaint in ComfyUI! Does this work with high-res pics / Pony XL? I just wanted to adjust a character's face, but I keep getting something that looks low-res... It kind of reminds me of what happens when you inpaint in A1111 without selecting "only masked". I dunno...

AmandaFessler

This is a very easy-to-understand guide, thank you. But to make it really easy and repeatable, you should have specified the VAE and checkpoint models, because the first run of your workflow shows a lot of errors. And for some reason the first run produced some plate out of nowhere, which really confused me.

jbnrusnya_should_be_punished

Great video. @pixeleasel Is it possible for the Load Image node in the second workflow to automatically pick up the first image generated/changed, without copying and pasting via clipspace as in your video?

I want the OUTPUT of the FIRST flow to become the INPUT of the SECOND flow.

(Excuse the caps; they're there to make it easier to read/understand.) :)

pneydny

This isn't working for me. :( I think I've set everything up exactly as you have, but I cannot find "pytorch_lora_weights_SD.safetensors", I can only find Does that even matter? I'm not getting an error message; it's just not changing the image at all. :( I've also set the denoise to a higher value, and still nothing. :(

RuinDweller

Is it possible to run the second KSampler node only for the img2img part and not the first one? This workflow is not practical if the first KSampler node generates multiple images. I just recently started with ComfyUI and have been using the A1111 web UI.

ghostsupreme

Can you add a LoRA to the bottom workflow, or does it interfere with the inpainting?

amorgan

What if I just want to use a previously generated image? Just load it and inpaint directly; I don't want to generate a new image.

Gabey

0:54 How did you load the output into the "Load Image" node so fast?

backhandsmack

Please make a video about how to install ComfyUI with a checkpoint & LoRA.

xfwrmle

How do I get rid of this error?
Error occurred when executing VAEEncode:
Could not allocate tensor with 2147483648 bytes. There is not enough GPU video memory available!

jbnrusnya_should_be_punished

How do I download the checkpoint you load?

damned

I got bad results; maybe update the guide.

-dimar-

Really good video. It's unfortunate that the AI voice is so bad.

ThoughtFission

I'm getting a "- Value not in list: lora_name: not in []" error. Is there a link to acquire this? I'm currently using the picx model.

-Kurokawa-