ComfyUI Video to Video Animation with AnimateDiff LCM LoRA & LCM Sampler

Learn how to apply the AnimateLCM LoRA process, along with a video-to-video technique using the LCM sampler in ComfyUI, to quickly and efficiently create visually pleasing animations from videos.
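For context on the sampler side of this workflow, these are the KSampler values commonly cited for AnimateLCM. They are typical community defaults, not settings confirmed by this video, so treat them as a starting point:

```
KSampler (typical AnimateLCM settings; assumed defaults, not from the video):
  sampler_name: lcm
  scheduler:    sgm_uniform
  steps:        4-8
  cfg:          1.0-2.0
```

Low CFG and few steps are what make LCM fast; raising CFG much above 2 tends to blow out the image with LCM-distilled models.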

#stablediffusion #comfyui #animatediff #controlnet #videotovideo #lcm
Comments

Thanks so much for the video. You have been a huge help as I transition from A1111 to ComfyUI. Keep up the great work, and I hope your channel blows up!

informationsociety

I'm so much looking forward to trying this out once I have some time! Thank you very much!!!

NERDDISCO

Great tutorial, Goshnii. Can't wait to try it.

bonsai-effect

Great video!
A few quick questions.
1. Can you show an instance of image-to-video using the LCM method? An image of a person copying the movement from a video (think DWPose, etc.).
2. How would you treat a situation where you have a person in a video clip, but when translated to DWPose, some of the movement is cut off screen?
3. Do you have an LCM video that you've upscaled to keep the quality and fix any deformed faces?

You've earned a loyal subscriber my friend!

pneydny

Thank you very much for your videos!!

kattarsisss

Your last video was great, thank you for the workflow help, since I don't know what I'm doing. I just started watching this vid, so hopefully it's fire too.

swoodc

Works very well! Thank you! Any method to get rid of the flicker/morphing?

ValleStutz

Great tutorial :) I was wondering, though: which parameter influences how closely the output resembles the original video? Is it the CFG?

SanderBos_art

Thank you very much for sharing.
I ran into a problem: "DepthAnythingPreprocessor" shows red, so I used the Manager's "Install Missing Custom Nodes", but it displays this error:

File "D:\AI\ComfyUI_Full\ComfyUI\custom_nodes\ComfyUI-Inference-Core-Nodes-main\__init__.py", line 1, in <module>
from inference_core_nodes import NODE_CLASS_MAPPINGS, NODE_DISPLAY_NAME_MAPPINGS
ModuleNotFoundError: No module named 'inference_core_nodes'
Cannot import module for custom nodes: No module named 'inference_core_nodes'

Excuse me, how can I solve this problem? 👧
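The traceback above means Python cannot find the `inference_core_nodes` package that the custom node imports, i.e. the node pack's own dependencies were never installed into the environment ComfyUI runs with. A minimal sketch of one common fix, assuming the repository ships a requirements.txt (the directory comes from the traceback; the portable-build interpreter path is an assumption and may differ on your install):

```
cd D:\AI\ComfyUI_Full\ComfyUI\custom_nodes\ComfyUI-Inference-Core-Nodes-main
python -m pip install -r requirements.txt

REM If you run the portable ComfyUI build, install into its bundled
REM interpreter instead of the system Python, e.g.:
REM ..\..\..\python_embeded\python.exe -m pip install -r requirements.txt
```

Restart ComfyUI afterwards so the custom node is re-imported.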

linashu

Nice... it's much lighter and faster, and it works perfectly. How can I make details more consistent and less randomly changing? For example, the character's hair color and clothes keep changing.

omarzaghloul

That's awesome! Could you tell me what your CPU, GPU, and RAM are?

MisterCozyMelodies

Greetings from Russia. I loaded a 3-second video, but the video loader only shows 1 second of it. How can I increase it from 1 second to 3 seconds? (I'm writing via Google Translate!)

edrsphu

Is LCM AnimateDiff possible with SDXL models?

phisn