L6: Model Switch and Masking in ComfyUI - Comfy Academy

Model Switching is one of my favorite tricks with AI. We render an AI image first in one model and then render it again with Image-to-Image in a different model. This allows us to use the colors, composition, and expressiveness of the first model but apply the style of the second model to our image.
Then we also explore Image Masking for inpainting in ComfyUI, a hidden gem that is very effective.
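The model-switch trick described above is built as a node graph in ComfyUI, but the core idea (the second KSampler starts from a partially re-noised copy of the first model's latent, so a mid-range denoise keeps the first model's colors and composition while the second model restyles it) can be sketched in plain NumPy. This is a simplified illustration, not ComfyUI's actual sampler code; real samplers noise according to the scheduler's sigma at the chosen step, not a straight linear blend.

```python
import numpy as np

rng = np.random.default_rng(42)

def img2img_start(src_latent: np.ndarray, denoise: float) -> np.ndarray:
    """Blend a source latent with fresh Gaussian noise.

    denoise=1.0 discards the source entirely (a plain txt2img start);
    denoise=0.0 keeps the source untouched. Values around 0.5 preserve
    the first model's large-scale structure while giving the second
    model room to impose its style.
    """
    noise = rng.standard_normal(src_latent.shape)
    return (1.0 - denoise) * src_latent + denoise * noise

# Stand-in for the latent produced by the first checkpoint:
latent_a = np.ones((4, 8, 8))

# The second KSampler would start denoising from this blend:
start = img2img_start(latent_a, denoise=0.5)
```

With `denoise=0.0` the function returns the source latent unchanged, which is why a very low denoise in the second sampler barely alters the image.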

Other Lesson Videos:

#### Join and Support me ####
Comments

Thanks Olivio! You are appreciated by the AI Art Community!

imagineArtsLab

Amazing stuff, Olivio! Former A1111 dabbler going to ComfyUI! So thankful for your tutorials.

joeyc

Thank you, this has been so helpful. ComfyUI is very user-friendly once you get past the learning curve (which you have shown me), and you have absolutely made that curve so much shorter. On to L7!!!

XuRZaL

Thank you so much for taking the time to make this series. It was an excellent intro to the ComfyUI node system.

BingsBuddery

Hey Olivio, today I decided to do all the workflows from your ComfyUI Academy. I know ComfyUI well, but there were some gaps in my knowledge. So now the further courses are waiting for me. :D Thanks a lot!

Onur.Koeroglu

Thank you for your videos. They're very helpful and everything is explained quite well. Easy to follow along.

Pixelmound

Woah, masking is so cool!!! Still need to practice model switching more. I feel confident that your academy has everything I need to understand the fundamentals of AI.

CoreyMcKinneyJr

Why did we have to use the clip skip layer? -2 is one from the end; just wondering why we don't include the last layer for the initial Rev Animated image?

madhudson

Wonderful lesson as usual. Hey, I heard recently that to get the best results we must use the exact same VAE used to create whatever base model we use. Question: Does the VAE "baked" into the checkpoint model provide the same functionality as the separate VAE models I see you loading here and in your other videos? I'm guessing it doesn't, or you wouldn't be doing it that way, but I'm interested to know the benefit of separate VAEs. I just started working with AI a couple weeks ago and am so glad I found your tutorials! Thanks so much Olivio.

bobhann

Thank you, it was indeed beginner-friendly! I recently did some comparison of tools accessible via Hugging Face.

marcinkrupinski

Awesome tutorial, especially for newbies! I can't thank you enough. 🤩 But I don't understand why you need to convert the positive text to an input and make a CR Prompt Text? I created a separate workflow without the CR Prompt Text, just using the same positive CLIP Text Encode linked to 2 different KSamplers, and I am able to achieve the same result. I'd appreciate it if anyone here could enlighten me 🙏

ethanhorizon

Awesome tutorial, thanks! How would you do this if you just wanted to use an input image instead of creating the original Rev Animated image? Is that basically just the same latent image workflow from the previous tutorial?

bennyboo

This was really helpful! I have two questions regarding this: 1) Why does the first model need a clip skip of -2? What is that doing? 2) Would it be possible to start with an SD1.5 model (e.g. Rev Animated) and then model switch to an SDXL model? If I missed the explanation to either question in a previous video, my apologies!

CalenCoates

Thanks for your lesson, it helps a lot!

majinvegeta

I don't quite understand why you would use CR Prompt Text instead of a normal CLIP Text Encode for the positive nodes.

WeekendDive

What is the purpose of the clip skip? Is that only for certain models?
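Clip skip comes up in several comments here, so a brief sketch of the mechanism may help: the CLIP text encoder produces one hidden state per transformer layer, and a clip skip of -2 feeds the second-to-last layer's output to the diffusion model instead of the final one. Many anime-style SD1.5 checkpoints are commonly used with clip skip 2 (check the model card); this toy example only illustrates the layer selection, with `layers` standing in for a real encoder's per-layer outputs.

```python
def select_clip_layer(hidden_states, clip_skip: int):
    """Pick which CLIP text-encoder layer conditions the model.

    clip_skip=-1 -> final layer (the usual default);
    clip_skip=-2 -> second-to-last layer, often recommended for
    anime-style SD1.5 checkpoints. Illustrative only.
    """
    return hidden_states[clip_skip]

# Toy stand-in: 12 "layers", each just a label here.
layers = [f"layer_{i}" for i in range(12)]

select_clip_layer(layers, -1)  # -> 'layer_11' (last layer)
select_clip_layer(layers, -2)  # -> 'layer_10' (second-to-last)
```

So clip skip works with any model; it simply matters most for checkpoints that were fine-tuned with the earlier layer's output as conditioning.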

scottmahony

I fell in love with the cover art. I'm wondering if there's a possibility that you could maybe share the image. The elf girl looks so cute 🧝🏻‍♀💚

Heraplayswith

Can this also be done with ControlNet?

ovideotube

About this way of using Set Latent Noise Mask: what's the difference from the VAEEncodeForInpaint node?
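As a rough aside on how Set Latent Noise Mask behaves: it attaches a mask to an existing latent so the sampler only changes the masked region, resetting everything outside it to the original, whereas VAEEncodeForInpaint erases the masked area before encoding so it is regenerated from scratch (typically with denoise at 1.0). The per-step blend can be sketched in NumPy; this is a simplification, since real samplers reset to a re-noised original at the current step's noise level.

```python
import numpy as np

def masked_step(denoised: np.ndarray, original: np.ndarray,
                mask: np.ndarray) -> np.ndarray:
    """One sampling step under a latent noise mask (simplified).

    Where mask==1 the newly denoised latent is kept; where mask==0
    the latent is reset to the original, so only the masked region
    ever changes.
    """
    return mask * denoised + (1.0 - mask) * original

original = np.zeros((1, 4, 4))   # stand-in for the source latent
denoised = np.ones((1, 4, 4))    # stand-in for the sampler's output
mask = np.zeros((1, 4, 4))
mask[:, :2, :] = 1.0             # inpaint only the top half

out = masked_step(denoised, original, mask)
```

This is why Set Latent Noise Mask works well with a denoise below 1.0 (it can still see the original content under the mask), while VAEEncodeForInpaint starts the masked region from nothing.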

hongtian

Is this workflow useful for a base model + refiner setup?

georgeneverland