Stable Diffusion ComfyUI Animation Creation for TikTok Dance Videos (Tutorial Guide)

Stable Diffusion animation: create a TikTok dance AI video using AnimateDiff video-to-video, ControlNet, and IP Adapter.

In today's tutorial, I'm pulling back the curtain on how I create those mesmerizing TikTok dance videos using the incredible custom nodes of Stable Diffusion's ComfyUI.

Hey everyone, in today's video we're taking the flicker-free ComfyUI AnimateDiff workflow to the next level by adding new custom nodes. We'll be using boxing action stock footage as our source and exploring two methods to enhance the animation quality.

First, we'll discuss using an upscaler to improve image quality. By connecting the VAE Decode output to a high-quality upscaler model, we can enhance each frame of the animation. This method not only increases the video resolution but also improves color and sharpness.
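To make the per-frame idea concrete outside of ComfyUI (inside ComfyUI this is typically the Load Upscale Model and Upscale Image (using Model) node pair), here is a minimal Python sketch that upscales every extracted frame. Pillow's Lanczos filter stands in for a learned upscaler model, and the folder names are assumptions:

```python
# Minimal per-frame upscaling sketch (illustration only).
# Pillow's Lanczos resampling stands in for the learned upscale model
# an actual workflow would load; "frames_in"/"frames_out" are assumed names.
from pathlib import Path
from PIL import Image

SCALE = 2  # learned upscalers are typically 2x or 4x

src = Path("frames_in")
dst = Path("frames_out")
dst.mkdir(exist_ok=True)

for frame in sorted(src.glob("*.png")):
    img = Image.open(frame)
    up = img.resize((img.width * SCALE, img.height * SCALE), Image.LANCZOS)
    up.save(dst / frame.name)
```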

Next, we'll dive into the second method, which involves integrating the IP Adapter and adjusting the sampling settings. By tweaking the sampling steps and denoise values, we can achieve even better results. We'll explore different settings and models to create unique and captivating animations.
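If you want to experiment with the same knobs outside ComfyUI, here is a hedged sketch using the Hugging Face diffusers library. The model and LoRA repo IDs are the public SD 1.5 releases, and the step, strength, and CFG values only illustrate the low-step, low-CFG regime LCM expects, not the exact settings from the video:

```python
# Sketch of the LCM low-step sampling regime applied to one frame via img2img.
# Values are illustrative, not the video's exact settings.
import torch
from diffusers import StableDiffusionImg2ImgPipeline, LCMScheduler
from PIL import Image

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Swap in the LCM scheduler and load the public SD 1.5 LCM LoRA.
pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config)
pipe.load_lora_weights("latent-consistency/lcm-lora-sdv1-5")

frame = Image.open("frames_in/frame_0001.png").convert("RGB")
out = pipe(
    prompt="a girl dancing, best quality",
    image=frame,
    num_inference_steps=6,  # LCM works well in the 4-8 step range
    strength=0.5,           # the "denoise" knob: lower keeps more of the source frame
    guidance_scale=1.5,     # LCM expects a low CFG, roughly 1-2
).images[0]
out.save("frame_0001_out.png")
```

Raising strength lets the model repaint more of each frame (stronger stylization, more flicker risk), while lowering it preserves the source motion; that is the same trade-off the denoise setting controls in the ComfyUI KSampler.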

So, whether you're looking to enhance image quality with an upscaler or experiment with sampling and denoising for better results, this video is packed with valuable insights. Make sure to subscribe to our channel and leave a comment if you have any questions or suggestions. Let's get started and take your AnimateDiff workflow to new heights!

Timeline:
00:00 Intro
00:34 About ComfyUI Workflow For Animation
00:53 Connect LCM LoRA Model Node In ComfyUI
01:44 Download & Install LCM LoRA Model (see the download sketch after this timeline)
02:35 Execute The Prompt To Generate Animation
03:40 Upscaler For Animation Frame
05:57 Run LCM LoRA Model Without IPAdapter
07:43 Optimize LCM LoRA Animation Using IPAdapter
09:06 Optimized Animation Result
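
For the download step at 01:44, the SD 1.5 LCM LoRA is published on Hugging Face. A minimal sketch, assuming the public latent-consistency release and a default ComfyUI folder layout (adjust the path to your install):

```python
# Fetch the SD 1.5 LCM LoRA into ComfyUI's LoRA folder.
# Repo and filename are the public latent-consistency release;
# local_dir assumes a default ComfyUI install layout.
from huggingface_hub import hf_hub_download

hf_hub_download(
    repo_id="latent-consistency/lcm-lora-sdv1-5",
    filename="pytorch_lora_weights.safetensors",
    local_dir="ComfyUI/models/loras",
)
```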

If you like tutorials like this, you can support our work on Patreon:

#AIGeneratedAnimations #animatediff #tiktokdance #stablediffusion
Comments

You've shown me something that I have been waiting two years for, every day. I have an idea to become a blogger with AI or whatever 😊

motionislive

Stable Diffusion animation: create a TikTok dance AI video using AnimateDiff, LCM, ControlNet, and IP Adapter.

TheFutureThinker

From an image, to a few seconds of animation, and now this girl dancing. You are making this girl look real 😂

crazyleafdesignweb

Any tips for what I should do about how blurry it gets with larger batch sizes? Batch x20 = no blur, but batch x150 and x300 blur a lot.

YooArtifical

You did another great animation tutorial 👍 great work!

kalakala

Good improvement, though I saw a system that could do motion capture and it hits different. I hope SD will improve even more.

markdavidalcampado

Bro, awesome video. I am still learning the basics in ComfyUI and came across your channel. I have an RTX 3070 with 8GB VRAM. Will it be enough for these kinds of videos, or do I need to upgrade? Thanks again for sharing this.

kiseli

Smile, your viewer will end up happier

gingervela

Hi, thanks for sharing your knowledge. Together we can advance stable AI animation faster. There is a node with sampling modes for LCM and others, called "ModelSamplingDiscrete." As far as I understand, the modes for this node are already included when you install it; I just don't know which package the node comes in. For the 1.5 model, the "eps" mode is a good one.
Good luck with the generation, my friend.

michail_

Hello, thank you for sharing your workflow. It is fascinating. I am getting video results, but they are all extremely out of focus, and the OpenPose pass is comped on top of where my dancer should be. Do you have any idea where I might be going wrong?

grpbyme

Can I download the workflow somewhere?

itstengz

Amazing stuff, subscribed immediately. Just wondering, which graphics card do you need for this? My humble 8GB of VRAM seems a little low.

popeye

Hi there! Do you put your full repository on Patreon so I don't have to do the full setup? Thank you for the tutorial!

Techreviewfr-yrie

To use a trained LoRA, would I have to add a LoRA node, or do it the A1111 way and add it to the conditioning prompt?

RhapsHayden

Your changelog said "Add GET/SET Nodes making the diagram more clean", but now I get the error "missing nodes GetNode SetNode". What module do I need for this? If you could update the needed custom_nodes in your OpenArt AI article, I think that'd be very helpful. I've gone through every custom node you've said is needed for your workflow; they are all installed and active, but GetNode and SetNode are still missing.

daniel_britt

Phenomenal tutorial, really impressive and easy to follow. Are you on an RTX 4090? What are your timescales for rendering?

SeanietheSpaceman

Amazing tutorial! Is it possible to change the clothes or the background, for example?

edsonjr-dev

Do you have the full ComfyUI workflow that you used in this video on your Patreon?

moemrizzle

Most of the things in this tutorial I can find by searching for the filename, but I cannot find Load CLIP Vision SD1.5\model.safetensors. Please help.

LucidFirAI

Hi, does this work with SD WebUI? Could you please make a video using WebUI? Thanks!

min_h