L4: Img2Img Painting in ComfyUI - Comfy Academy

Using a very basic painting as an image input can be extremely effective for getting amazing results. In this lesson of the Comfy Academy we will look at one of my favorite tricks to get much better AI images. This gives you control over the color, the composition, and the artful expressiveness of your AI art.
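Under the hood, img2img works by VAE-encoding the input painting into a latent, adding noise in proportion to the sampler's denoise setting, and then denoising toward the prompt. The snippet below is a minimal numpy sketch of that partial-noising idea only; the function name `partial_noise` is illustrative and is not part of ComfyUI.

```python
import numpy as np

def partial_noise(latent: np.ndarray, denoise: float, rng=None) -> np.ndarray:
    """Mix an image latent with Gaussian noise.

    denoise=0.0 keeps the painting unchanged; denoise=1.0 discards it
    entirely, which behaves like plain text-to-image generation.
    """
    if rng is None:
        rng = np.random.default_rng(0)
    noise = rng.standard_normal(latent.shape)
    # Variance-preserving interpolation between the latent and pure noise.
    return np.sqrt(1.0 - denoise) * latent + np.sqrt(denoise) * noise
```

Denoise values around 0.5 to 0.7 usually keep the composition and colors of the painting while letting the model repaint the details, which is the effect this lesson relies on.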

Comments

I cannot tell you how much I appreciate your instruction, Olivio.

peterplantec

Absolutely loved this part on latent space. Thank you so much!

GiPa

I have no words to describe my gratitude. Definitely going to buy you some coffee.

p_p

Outstanding tutorials! Easy to understand and follow. :-) I hope to see more of them :-)

JBereza

I want the next lesson already!! thanks, Olivio!!

IsJonBP

Thank you, this has been so helpful. ComfyUI is very user friendly once you get past the learning curve (which you have shown me), and you have absolutely made that curve so much shorter. On to L5!!!

XuRZaL

Thanks for these videos, you explain them well. I'm fairly new to Comfy (coming from A1111) and primarily want to run my own digital art through AI, so these tutorials are helpful. :)

joywritr

Catching up slowly hehe.
Amazing lesson and great instructions.
Let's go!!!

joskun

Thank you so much for the detailed video! It helps me a lot! :)

bunnymeng

I see why people prefer ComfyUI: you have much more control.

vpakarinen

Thanks, this is still very useful! It works with HiDream.

rifz

I love you and your tutorials, so much man! 💛👏🏻

구원자님

This is pretty much what pushed me to switch to ComfyUI, thank you! Trying to get latent couple & composable LoRA masking to work correctly in A1111 or Forge was driving me nuts 😂

tzgaming

What would be the best way to create a sketch image like the one you are using? Perhaps even directly within ComfyUI? Thanks for your video!

DaBonQ

👍 Following these. Wanting to get into img2vid, and most use Comfy... So I gotta learn that part.

Question: how does she get the toga on over the wings??

;)

unlistedvector

Thank you so much Olivio! It is the first time I understand how img2img works.
Should loaded images have the size that we want to generate (e.g. 512x768)?

aliyilmaz

I use a LoRA trained on my own face. It seems that the prompt has to be almost the same as the example prompt provided by the LoRA creator for the face to stay recognizable.

When I use ControlNet, the face becomes different and doesn't look like me anymore.

Is there a solution for this?

FuZZbaLLbee

@OlivioSarikas After looking through all the nodes, I was not able to find LatentBlend. I do see that you have it in your example workflow. Would it be under Post Processing?

whodat

Cool. I found that if you drop your output image back into the image loader, adjust the denoise, and render again, you can get some really great results; repeating that process over and over produces some very interesting variations. :O)

SumNumber
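The iterative trick described in the comment above can be sketched as a simple loop. The names `iterate_img2img` and `step_fn` are hypothetical illustrations, not ComfyUI code; in ComfyUI each pass would be a full Load Image, VAE Encode, KSampler, VAE Decode chain, with the output saved and loaded back in.

```python
def iterate_img2img(image, step_fn, rounds=3, denoise=0.5, decay=0.8):
    """Feed each output back in as the next input.

    Lowering denoise each round makes later passes refine the image
    rather than repaint it. step_fn stands in for one full img2img
    pass and can be any callable taking (image, denoise).
    """
    for _ in range(rounds):
        image = step_fn(image, denoise)
        denoise *= decay  # each pass stays closer to the previous one
    return image
```

Keeping a decay factor below 1.0 is one way to make the loop converge instead of drifting further from the original on every pass.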

Could you put 2 images, 1 for the character and 1 for the background, then blend them to make the final image?

claraalmeida
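Conceptually, blending two latents (for example one encoded from a character image and one from a background image) is a weighted average taken before sampling. This is a simplified numpy sketch of that idea, not ComfyUI's actual LatentBlend implementation; the name `latent_blend` is illustrative.

```python
import numpy as np

def latent_blend(latent_a: np.ndarray, latent_b: np.ndarray, blend: float = 0.5) -> np.ndarray:
    """Weighted average of two same-shaped latents.

    blend=1.0 keeps only latent_a, blend=0.0 only latent_b; the result
    is then denoised by the sampler like any other img2img latent.
    """
    if latent_a.shape != latent_b.shape:
        raise ValueError("latents must have the same shape")
    return blend * latent_a + (1.0 - blend) * latent_b
```

The blended latent still needs a reasonably high denoise pass afterwards, since a raw average of two unrelated latents is usually incoherent on its own.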