Easy Image to Video with AnimateDiff (in ComfyUI) #stablediffusion #comfyui #animatediff

Easily add some life to pictures and images with this tutorial. The magic trio: AnimateDiff, IP Adapter and ControlNet. Explore the use of ControlNet Tile and SparseCtrl Scribble, using AnimateLCM for fast generation.
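If you prefer to drive the finished workflow from a script instead of the browser UI, ComfyUI's local server can queue a graph exported with "Save (API Format)". A minimal sketch, assuming a default local install on port 8188 and a hypothetical export named img2video_workflow_api.json:

```python
# Minimal sketch: queue an exported AnimateDiff image-to-video workflow through
# ComfyUI's HTTP API. The file name below is a placeholder for your own export.
import json
import urllib.request

SERVER = "http://127.0.0.1:8188"  # default ComfyUI address and port

with open("img2video_workflow_api.json") as f:  # hypothetical "Save (API Format)" export
    workflow = json.load(f)

payload = json.dumps({"prompt": workflow}).encode("utf-8")
request = urllib.request.Request(
    f"{SERVER}/prompt",
    data=payload,
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(request) as response:
    print(response.read().decode())  # on success, returns the queued prompt_id as JSON
```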

#animatediff #comfyui #stablediffusion
============================================================

☁️ Starting in ComfyUI? Run it in the cloud without installation, it's very easy! ☁️

============================================================
CREDITS
===========================================================

🎵Music

✂️ Edited with Canva and ClipChamp. I record the screen content in PowerPoint.
========================================================
© 2024 Koala Nation
#comfyui #animatediff #stablediffusion
Comments

Error occurred when executing KSampler:

'NoneType' object has no attribute 'size'
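This message typically means the sampler received an empty input: some upstream loader (checkpoint, image, latent or motion model) produced None instead of a tensor, often because a file is missing or an input was left unconnected. A plain-Python illustration of the failure mode, not ComfyUI code:

```python
# Plain-Python illustration: an upstream node that fails to load its model or image
# outputs None, and the sampler then tries to read .size from that None value.
missing_input = None            # stands in for an unconnected or failed upstream output
try:
    missing_input.size          # same failure as inside the KSampler node
except AttributeError as err:
    print(err)                  # 'NoneType' object has no attribute 'size'
```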

art-hub-adults

Thank you for your video, that's very helpful.

YING

Where do I save the AnimateLCM model to?
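For reference, a sketch of where the ComfyUI-AnimateDiff-Evolved nodes commonly pick up motion modules such as AnimateLCM; the paths below are assumptions based on a default ComfyUI layout, and the AnimateLCM LoRA, if you use one, goes in the regular loras folder:

```python
# Sketch: list the folders where AnimateDiff-Evolved usually looks for motion modules
# (e.g. the AnimateLCM checkpoint). Adjust COMFY_ROOT to your own install location.
from pathlib import Path

COMFY_ROOT = Path("ComfyUI")  # assumption: ComfyUI lives in ./ComfyUI
candidate_dirs = [
    COMFY_ROOT / "models" / "animatediff_models",
    COMFY_ROOT / "custom_nodes" / "ComfyUI-AnimateDiff-Evolved" / "models",
]
for folder in candidate_dirs:
    files = sorted(p.name for p in folder.glob("*")) if folder.exists() else []
    print(folder, "->", files or "missing or empty")
```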

kizentheslayer

Hello!
Does SparseCtrl work properly with AnimateDiff LCM, and not just with V3?

hamster_poodle

Great video! I'm new to all this and I'm wondering if there is a way to keep the details. I'm trying to use a city skyline for image-to-video, and there, for example, a lot of the windows get removed.

SiverStrkeO

I'm getting an error with IPAdapterUnifiedLoader; it says the ClipVision model is not found. I've downloaded a few versions and put them in my clip_vision folder but I'm still getting the error. Is there a specific one for this node?
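One hedged explanation: the IPAdapterUnifiedLoader node from ComfyUI_IPAdapter_plus looks CLIP Vision models up by specific file names, so a correctly downloaded but renamed file can still show up as "not found". The names below are the ones that project's README documents; verify them against your installed version:

```python
# Hedged check: confirm the CLIP Vision files exist under the exact names documented
# by ComfyUI_IPAdapter_plus (the path assumes a default ComfyUI folder layout).
from pathlib import Path

clip_vision_dir = Path("ComfyUI/models/clip_vision")
expected = [
    "CLIP-ViT-H-14-laion2B-s32B-b79K.safetensors",     # used with SD1.5 IP-Adapters
    "CLIP-ViT-bigG-14-laion2B-39B-b160k.safetensors",  # used with SDXL IP-Adapters
]
for name in expected:
    status = "OK" if (clip_vision_dir / name).exists() else "MISSING"
    print(f"{status}: {name}")
```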

MarcusBankz

Very easy tutorial, it only took me HOURS to do. I'm curious how to make people walk or move with ComfyUI.

boo

Thanks for the tutorial! Question: is it possible to feed Comfy a reference video so it animates the image using that video as a guide? Say I have an image of a character and I give Comfy a video of someone skateboarding; is there a method to get Comfy to animate the character skateboarding based on that video? Cheers and thanks in advance!

estebanmoraga

Hey, I'm getting this error: "Could not allocate tensor with 828375040 bytes. There is not enough GPU video memory available!" I have an AMD RX 6800 XT with 16 GB VRAM; any workaround or fix? Thanks.

vl

When I click Queue Prompt it says "TypeError: this is undefined" and nothing happens. I have all the required nodes/models, and ComfyUI is updated and restarted. Can you please help?

jaydenvincent

Can you set up the whole thing for us to use?

VanessaSmith-Vain

Love your videos so much! Can you make a tutorial video on FlexClip’s AI tools? Really looking forward to that!

Cyrine

It all worked and it animates the image, but every time it comes out very bright and faded. Any suggestions on how to fix it?
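Washed-out, over-bright frames with an LCM setup usually point to sampler settings rather than the models: AnimateLCM expects a very low CFG and only a few steps. A sketch of ballpark KSampler values to start from (the numbers are assumptions, not the exact settings used in the video):

```python
# Sketch: typical KSampler settings when sampling with AnimateLCM.
# An overly high CFG is the most common cause of bright, faded output with LCM.
ksampler_settings = {
    "steps": 8,                  # LCM needs only a handful of steps
    "cfg": 1.5,                  # keep CFG roughly in the 1.0-2.0 range
    "sampler_name": "lcm",       # LCM sampler shipped with ComfyUI
    "scheduler": "sgm_uniform",  # scheduler commonly paired with LCM
    "denoise": 1.0,
}
print(ksampler_settings)
```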

frankliamanass

I was wondering, how did you get the node numbers to show on the boxes?

elifmiami

Hey buddy, how did you copy the second KSampler with all of its connections duplicated, at timestamp 4:40?

joonienyc

I couldn't make it work :(
I get this error every time:
Error occurred when executing ADE_ApplyAnimateDiffModel:
'MotionModelPatcher' object has no attribute 'model_keys'

HOTCDRN

Geez, this takes long to run. Which GPU do you have?
Amazing tutorial!!!

bordignonjunior

Yeah, that was really easy, piece of cake 🤣

VanessaSmith-Vain

Great video, you skipped some steps but it's still detailed. Question: do we not need to change the text prompt for each randomized pic? Also, why did you use the Load Video Path node for an image?

cbjxogk