New IP Adapter Model for Image Composition in Stable Diffusion!

The new IP Composition Adapter model is a great companion to any Stable Diffusion workflow. Just provide a single image, and the power of artificial intelligence will analyse its very composition, ready for you to use!

Check out some of the things you can do with it :)

Want to support the channel?

Links:

== More Stable Diffusion Stuff! ==
Comments

Thanks for sharing! I've been messing with Ip adapter all week, it's so much fun!

ClownCar

Have you managed to use this composition adapter with existing images (e.g. your Nerdy Rodent), to give them more compositional depth?

SejalDatta-lu

I wish I had found your channel earlier😢🤯❤❤🔥

farsi_vibes_edit

Hey Nerdy Rodent, thanks for the tutorial. Do you know if this can be applied together with a pose ControlNet? I want to design a character from different views (front, back, profile), and maybe transfer a style or use a LoRA character for consistency. Any tips?

Niffelheim

Nerdy's content is amazing. Are you a mind reader? 😁

godpunisher

I like the thumbnail for this video.
I wonder if you can create an AI for generating similar images, compositing text (with effects) like that.

dudufridak

Negative Prompt: "Bad Stuff Such as Evil Kittens" ROFL!

BabylonBaller

What I really want to see is a working Lucid Sonic Dreams update, or something similar that's user-friendly. Any idea of anything in the works like that, or how to achieve a similar effect using something else?

holysabre

I saw the rodent in the sky!!!! I have the witnesses!
🤘😉

kariannecrysler

Are there any 1.5 models this doesn't work with? I keep getting a 'header too large' error, which usually happens with a model mismatch, but I'm using the 1.5 adapter. ?

Jcs-rryt

Has anyone managed to get this working with a Pony checkpoint? It works with other models derived from SDXL, like Animagine and Jugg/RealVis, but not Pony for some reason. Curious if it's just me.

DemShion

Can you make a video on all your favorite AI tools and ComfyUI workflows?

Like Google's Film Interpolation, Stable Diffusion, RVC WebUI, MusicGen, etc.

MarcSpctr

What preprocessor do you use when using this with Automatic1111?

KDawg

I installed it in Forge and it ruined my installation. Now it generates only deformed, random images. I tried everything and couldn't fix it; I'll have to reinstall.

ramn_

that tiger needs help and I think we should act on it.

wakegary

Mate, can you show how it's done in Automatic1111 / Forge, please?

Hooooodad

I can't seem to get it to work in Auto1111. It runs, but the image comes out very painted/pastel/distorted. The same thing happened to me in ComfyUI, until I downloaded the two encoders, including CLIP-ViT-bigG-14-laion2B-39B-b160k.safetensors, and added them to the \ComfyUI\models\clip_vision folder; then it worked. So I thought maybe that's the issue with the Auto1111 version? However, I can't find where to put these two encoder files for Auto1111; I tried a folder but that didn't work.

I've also had issues just getting the IP Composition model to appear in the dropdown in the GUI. When I click on IP-Adapter in the ControlNet dropdown, it shows ip-adapter-plus etc. but no composition one, unless I click the refresh button next to the model dropdown; then I can select ALL the models (even the ones not for IP-Adapter) and load it. But like I said, it's all foggy/blurry when I make the image. I have ControlNet v1.1.441, and my Auto1111 is v1.6.0. I'm not sure what else to do. EDIT: I just updated my Auto1111 to v1.8.0, still having issues.
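For reference, the ComfyUI fix described in this comment boils down to placing both CLIP vision encoders in a single folder. A minimal sketch of that layout, assuming a default ComfyUI install; the ViT-H file name is an assumption (it is the encoder SD 1.5 IP-Adapters commonly expect, alongside the bigG one named above for SDXL):

```shell
# Sketch only: create the folder the IP-Adapter nodes read CLIP vision
# encoders from, assuming a default ComfyUI directory layout.
mkdir -p ComfyUI/models/clip_vision

# Expected contents after downloading both encoders:
#   CLIP-ViT-H-14-laion2B-s32B-b79K.safetensors     (assumed; SD 1.5 IP-Adapters)
#   CLIP-ViT-bigG-14-laion2B-39B-b160k.safetensors  (named above; SDXL IP-Adapters)
ls ComfyUI/models/clip_vision
```

Both files live in the same clip_vision folder; the node picks whichever encoder matches the adapter being loaded.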

ForeverNot-wvsz