ComfyUI - 1 Image LoRA?! Check out this IP Adapter Tutorial for Stable Diffusion

An amazing new AI art tool for ComfyUI! This node lets you use a single image like a LoRA, without any training! In this Comfy tutorial we will use it to combine multiple images, as well as use ControlNet to manage the results. It can merge in the contents of an image, or even multiple images, and combine them with your prompts. IP-Adapter is a very powerful node suite for image-to-image conditioning: given a single reference image, you can generate variations augmented by text prompts, ControlNets, and masks.

This is a groundbreaking new Stable Diffusion technique that I plan to use quite a bit, as it can add missing elements to a scene or adjust the overall colors of an image.
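If you prefer scripting to node graphs, the same idea is also exposed outside ComfyUI in the Hugging Face diffusers library. A minimal sketch, assuming diffusers >= 0.22, the h94/IP-Adapter weights, and a hypothetical reference.png:

    import torch
    from diffusers import AutoPipelineForText2Image
    from diffusers.utils import load_image

    pipe = AutoPipelineForText2Image.from_pretrained(
        "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
    ).to("cuda")

    # Load the SD 1.5 IP-Adapter weights and set how strongly the
    # reference image steers generation (0.0 = prompt only).
    pipe.load_ip_adapter("h94/IP-Adapter", subfolder="models",
                         weight_name="ip-adapter_sd15.bin")
    pipe.set_ip_adapter_scale(0.6)

    ref = load_image("reference.png")  # hypothetical reference image
    image = pipe(
        prompt="on a beach at sunset",
        ip_adapter_image=ref,  # the single image acting like a LoRA
        num_inference_steps=30,
    ).images[0]
    image.save("variation.png")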
#stablediffusion #comfy #comfyui #aiart #ipadapter

You can read all about the methods behind this node here:

Download all of the code and models here:

Interested in the finished graph and in supporting the channel? Come on over to the dark side! :-)
Comments

Thanks for showing the actual process of setting all of this up, rather than just popping in the workflow! 😊

swannschilling

Wow, you explained everything in an easy-to-follow way. Thank you for sharing.

yadav-r

This series has been great. Finally all caught up! Can't wait for more. Thanks!

DJBFilmz

You really nail it when it comes to explaining stuff – way better than others out there! Plus, your workflow node order makes it super easy to follow along and get a clear picture of how things go down step by step. Keep up the awesome work, and yes, I have the same OCD.

hakandurgut

I’m currently making the transition from InvokeAI because I wanted more control, and these videos have been very informative! Ty!

Jubie

Thanks as always, your content is 10/10.

HistoryIsAbsurd

Wow, you really explain just like the pros. Well, you are a pro.

Xavi-Tenis

I kind of got stuck here... I managed to install the IP-Adapter models, but it wasn't very straightforward (and I may have overwritten the 1.5 image encoder folder with the XL image encoder one)... but then I got completely lost trying to get the CLIP Vision stuff. I know it can be tedious for the more advanced users, but I wish you had briefly touched on installing these.

Renzsu

Great tutorial, choom. Helped me a lot.

sweatington

So glad you put this workflow together; I was trying for something similar myself, but this is more efficient. Getting great results, and the experimentation has just begun.
One small thing for newbs: the prerequisites are daunting, I think; simple things you take for granted catch us out, like saying you need a CLIP model and expecting us to know where to find one and then where to put it. ☺ A link and a sentence like "find it here and place it there" would polish the information. Cheers

ukdcom

One cool way to use it is to control your LoRA results better: in essence, you help your LoRA adapt to exactly what you really want to see instead of rolling the dice, especially considering you can control it with both ControlNet AND prompting, and you can even control how much it will consider either.

twindenis
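The combination the comment above describes (reference image + ControlNet + prompt, each with its own weight) can also be sketched in diffusers; the model IDs and file names below are assumptions for illustration, not the video's exact setup:

    import torch
    from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
    from diffusers.utils import load_image

    controlnet = ControlNetModel.from_pretrained(
        "lllyasviel/control_v11p_sd15_openpose", torch_dtype=torch.float16
    )
    pipe = StableDiffusionControlNetPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", controlnet=controlnet,
        torch_dtype=torch.float16
    ).to("cuda")
    pipe.load_ip_adapter("h94/IP-Adapter", subfolder="models",
                         weight_name="ip-adapter_sd15.bin")

    # Independent dials: how much the reference image counts vs. the pose.
    pipe.set_ip_adapter_scale(0.7)

    style_ref = load_image("reference_look.png")  # hypothetical appearance reference
    pose_map = load_image("pose_map.png")         # hypothetical preprocessed pose image

    image = pipe(
        prompt="a portrait photo",
        image=pose_map,                      # ControlNet structure
        ip_adapter_image=style_ref,          # IP-Adapter appearance
        controlnet_conditioning_scale=0.8,   # how much the pose counts
        num_inference_steps=30,
    ).images[0]
    image.save("combined.png")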

Fascinating and inspiring! Thank you! In your videos on upscaling, you didn't get into the tiling parameters. I'd love to see a follow-up video that covers them. Thanks again!

kevinmack

Oh dear, I am getting an error about size when I run this: Error occurred when executing IPAdapterApply:

Error(s) in loading state_dict for Resampler:
size mismatch for proj_in.weight: copying a param with shape torch.Size([1280, 1280]) from checkpoint, the shape in current model is torch.Size([1280, 1664]).

Any idea what I have done wrong? Thanks

MaryAnnEad

Hi, I'd love to get the IPAdapter Apply node but it doesn't show up in the search. Has it been deprecated, and if so, what can I use instead please? Thank you... great video

SteAtkins

I love this episode. I have already started using the image-combining techniques learned here for a number of different purposes. It is very powerful because you can use real or generated people with real or generated backgrounds. It pretty much allows you to place anyone anywhere.

henrygrantham

Thanks for the video, Scott. If you could provide an update on how to get the models for both the "Load IP Adapter Model" node and the "Load CLIP Vision" node, that would be perfect 🙂

michaelbayes
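Several comments ask where the "Load IP Adapter Model" and "Load CLIP Vision" files come from. A minimal download sketch using huggingface_hub; the target folders (models/ipadapter and models/clip_vision under the ComfyUI root) are assumptions based on the IP-Adapter custom node's conventions, and files may need moving or renaming to match what the loader nodes scan:

    from huggingface_hub import hf_hub_download

    # Adapter weights for SD 1.5 (the h94/IP-Adapter repo also hosts SDXL variants).
    hf_hub_download(
        repo_id="h94/IP-Adapter",
        filename="models/ip-adapter_sd15.bin",
        local_dir="ComfyUI/models/ipadapter",  # assumed loader folder
    )

    # Matching ViT-H image encoder for the "Load CLIP Vision" node.
    hf_hub_download(
        repo_id="h94/IP-Adapter",
        filename="models/image_encoder/model.safetensors",
        local_dir="ComfyUI/models/clip_vision",  # assumed loader folder
    )

    # Note: local_dir preserves the repo's subfolders, so the files land in
    # nested "models/..." subdirectories and may need to be moved up one level.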

That is a fantastic introduction to IP Adapter! Thank you for the ideas!

ahtoshkaa

Thanks for the great videos, I really like your bottom-up approach to explaining things and keeping it simple. I have a question on this technique:

Can the following be done with this approach (I just want to know it's possible before I dive into it):
Generate an image of a living room with a sofa -> mask the sofa -> apply a ControlNet to the sofa (to get its shape) -> take an external image of another sofa (perhaps from another angle) -> use IP-Adapter together with the ControlNet to inpaint the new sofa, from the other angle, into the living room.

Or have I misunderstood anything?

alexlindgren
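The sofa workflow in the question above maps onto an IP-Adapter inpainting pass. A minimal sketch in diffusers, with all file names hypothetical; for the shape-control step, a ControlNet inpaint pipeline could be layered in the same way as in the earlier ControlNet sketch:

    import torch
    from diffusers import AutoPipelineForInpainting
    from diffusers.utils import load_image

    pipe = AutoPipelineForInpainting.from_pretrained(
        "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16
    ).to("cuda")
    pipe.load_ip_adapter("h94/IP-Adapter", subfolder="models",
                         weight_name="ip-adapter_sd15.bin")
    pipe.set_ip_adapter_scale(0.8)

    room = load_image("living_room.png")      # hypothetical generated room
    mask = load_image("sofa_mask.png")        # hypothetical white mask over the sofa
    new_sofa = load_image("other_sofa.png")   # hypothetical external sofa photo

    image = pipe(
        prompt="a modern sofa in a living room",
        image=room,
        mask_image=mask,
        ip_adapter_image=new_sofa,  # appearance of the replacement sofa
        num_inference_steps=30,
    ).images[0]
    image.save("room_with_new_sofa.png")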

Helpfully thorough as usual. Many thanks!

ronnykhalil

You shared a really good tip in this. Thank you! Can’t wait to try it.

dulow