AnimateDiff CLI prompt travel: IPAdapters, LoRAs, and Embeddings

This video is a quick overview of adding IPAdapters and LoRAs into your CLI workflow. Although the tutorial is for Windows, I have tested it on Linux and it works just fine; just be sure to adjust any paths and commands to their Linux equivalents.
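For reference, here is a minimal sketch of what a prompt-travel config with IP-Adapter, LoRA, and embedding sections can look like. The field names follow the sample configs shipped with the animatediff-cli-prompt-travel repo, but exact keys may differ between versions, and every path and filename below is a placeholder:

```json
{
  "name": "sample",
  "path": "models/sd/epicrealism_naturalSinRC1VAE.safetensors",
  "motion_module": "models/motion-module/mm_sd_v15_v2.ckpt",
  "seed": [-1],
  "scheduler": "k_dpmpp_sde",
  "steps": 20,
  "guidance_scale": 7.5,
  "prompt_map": {
    "0": "1girl walking on a beach, masterpiece, best quality",
    "32": "1girl walking through a forest, masterpiece, best quality"
  },
  "n_prompt": ["easynegative, worst quality, low quality"],
  "lora_map": {
    "share/Lora/example_style.safetensors": 0.8
  },
  "ip_adapter_map": {
    "enable": true,
    "input_image_dir": "ip_adapter_image/sample",
    "scale": 0.5
  }
}
```

Embeddings such as easynegative are referenced by name in the prompt text once the file is in the embeddings folder, while `lora_map` takes a path-to-weight mapping and `ip_adapter_map` points at a directory of reference images.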
The repo is located here:

The prompt template file can be copied from here:

The embeddings used for this video:

The motion_modules can be found on the main AnimateDiff repo where you will be offered different sources to download them from:

The model used in this video was downloaded from CivitAI:

The specific one used was the epicrealism_naturalSinRC1VAE.safetensors model.
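With the config pointing at that model, a run can be launched roughly as follows. The flags follow the repo's README (-W/-H resolution, -L total frame count, -C context length), but the config path is a placeholder:

```sh
# Sketch of a generate call; adjust the config path to your own file
animatediff generate -c config/prompts/my_prompt.json -W 512 -H 512 -L 128 -C 16
```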
Comments

This tutorial is gold... thanks for putting up such a detailed workflow. Such good content is hard to come by.

kenrock

Dude! Your content is so freaking good! What do you do for a living? You know more than anybody I've watched!

amorgan

Thank you so much. I really needed to figure out how to add embeddings for the easy negatives, and here you are. Please keep doing what you do.

simonzapata

Thank you for your videos! They're greatly appreciated!

gpr

Yo, you are really good at this. Keep them coming.

BusyAi

Hey, thank you very much for these tutorials. I've been using SD for animations since its inception, and I knew this day would come for animation natively inside SD.

a.dejavu

Super cool, thank you! In upcoming tutorials, if you touch on this subject again, I would love to see a comparison with IP-Adapter on vs. off, same prompts otherwise, to get a better understanding of what it actually does.

never_ever_never_land

You're killing these tuts, man! Great work, thank you sincerely. 🔥 I'm finding that LoRAs trained on people aren't working that well, but it's still early days :)

pixeladdikt

I have a quick question. We could give a video as input in the Automatic1111 webui, and as you show in the video, we can use IP-Adapter to add images as input. So can we use IP-Adapter for video inputs, or is there another way to do this?

ataberkseker

Instead of Euler a, can we use a different sampling method? DPM++ 2M SDE Karras preferred. Edit: nvm, found my answer, but I don't know if there's a list of supported schedulers.

anon--inii

Man, another amazing tutorial, thank you. In my animation there are two women instead of one. I rendered it in landscape format, so that might be the culprit. Do you know of any (negative) prompt that forces the latent space to generate just one person instead of multiple? Cheers!

starhaven

Thanks for this. I'm able to output an mp4, but only for 6 seconds; there are only 48 generated frames. What seems to be the issue here?
Edit: Solved it. Parameters were set to -L 48 instead of -L 128, lol. Do you have a guide for making a prompt_map for scenery rather than a character, or does AnimateDiff do sceneries?

a.baluya

Hi, is it possible to use video for ControlNet using this method (not ComfyUI or 1111)?

Tyrell_ai

I see that it is looking for a directory and you just put one image in there. What happens if I put in multiple different ones? Also, I'm not sure how you can tell it is actually using the image and not just the prompts, like in the last video, because tbh the result is mostly the same?

ywueeee

I'm pretty new to SD. I copied the exact model, embeddings, and prompt from the video for the first reference image generation, but I keep getting very bad faces 90% of the time. Is there something I might be doing wrong?

johanrr

Very nice tutorial. How can we use a video as a movement reference in AnimateDiff? Thanks in advance.

baseerfarooqui

Can you explain a bit more how the ip_adapter works?

christopherzou

Sick work!
But is there anything that can give the same consistent and smooth results for img2img?
ControlNet still gives flickering.

NoxyCoxy

When I try to run your JSON file, it gets stuck right after "Will save outputs to". Anything I am missing?

digitalflick

Why are you not using the webUI extension?

impacman