LCM + AnimateDiff High Definition (ComfyUI) - Turbo generation with high quality

In this video, we explore how to use the LCM (Latent Consistency Model) LoRA, which promises to speed up image and animation generation by a factor of 10.

#animatediff #comfyui #stablediffusion
============================================================

☁️ Starting with ComfyUI? Run it in the cloud without any installation, very easy! ☁️

============================================================

While LCM, in combination with AnimateDiff, delivers that speed, the detail quality is not great. However, just by adding a 2nd KSampler with a few steps, we can generate an amazing animation, as good as without LCM. The extra pass comes, of course, at the expense of some additional rendering time, but overall we can still increase the speed by 2 or 3 times.
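The "2 or 3 times" claim can be illustrated with simple arithmetic. The step counts and per-step cost below are assumptions for the sake of the example, not measurements from the video:

```python
# Illustrative arithmetic for the two-pass speedup. All numbers here are
# assumed: 20 steps for a normal single KSampler, 4 + 4 steps for the
# LCM pass plus the short refiner pass, constant cost per step.

def render_time(steps, seconds_per_step=1.0):
    """Total sampling time for one pass, assuming a constant per-step cost."""
    return steps * seconds_per_step

baseline = render_time(20)       # single KSampler without LCM
lcm_pass = render_time(4)        # fast LCM pass (LCM converges in few steps)
refine_pass = render_time(4)     # short 2nd KSampler pass to restore detail
two_pass = lcm_pass + refine_pass

speedup = baseline / two_pass    # -> 2.5, within the "2 or 3 times" range
```

With 8 total steps instead of 20 the speedup is 2.5x, which lands inside the range quoted above; fewer refiner steps push it toward 3x.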

I also show a comparison of critical parameters to use in the KSamplers, to find a better balance of generation time vs. detail. All of this uses the Instant LoRA method and the conditional masking method shown in previous videos.
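As a rough sketch of what that parameter comparison looks like, here are hypothetical settings for the two KSamplers. The values are common community defaults for LCM, not necessarily the exact ones used in the video:

```python
# Hypothetical parameter sets for the two-pass workflow. Values are
# typical LCM community defaults (assumptions), not the video's exact settings.
lcm_ksampler = {
    "sampler_name": "lcm",   # LCM needs its dedicated sampler
    "steps": 6,              # LCM converges in roughly 4-8 steps
    "cfg": 1.5,              # LCM requires a very low CFG, around 1.0-2.0
    "denoise": 1.0,          # full denoise on the first pass
}

refiner_ksampler = {
    "sampler_name": "euler_ancestral",  # any regular sampler works here
    "steps": 8,              # only a few steps, since we just refine detail
    "cfg": 7.0,              # normal CFG for the non-LCM pass
    "denoise": 0.4,          # partial denoise keeps the LCM composition
}
```

The key trade-off is the refiner's `steps` and `denoise`: higher values recover more detail but eat into the speedup from the LCM pass.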

The LCM models

Basic requirements:

Custom nodes:

Models:

Tracklist:
[TBD]

My other tutorials:

Videos: Pexels
Music: Youtube Music Library
Edited with Canva and ClipChamp. I record the material in PowerPoint.

© 2023 Koala Nation
#comfyui #animatediff #stablediffusion #lcm
Comments

It does a great job of making the lights disappear in the background.

ahmetab

Very good, I tweaked some stuff to meet my needs and it works great

kkryptokayden

Hello, sorry, I didn't understand the command at 1:54. Did you say Ctrl+Shift+B?

otakufra

Great work! Thanks for the workflow.
I can't make it work with SDXL though.
What should be used in IPAdapter and CLIP vision nodes?

MrPerillo

How did you create the MLSD line sequences, as well as the depth sequences?

CHNLTV

I don't understand how the two KSamplers are used together, since in the video only the first KSampler (LCM) is connected to the VAE Decode. Can you please explain?

yvann.mp

What should I put in the OpenPose and background ControlNet folders?

aifreeart

Love your work! Koala - want to collaborate?

DopalearnEN