Stable Diffusion Animation ComfyUI Workflow Update With Video Detail Enhancement

In this tutorial, we explore the latest Stable Diffusion updates to my animation workflow using AnimateDiff, ControlNet, and IPAdapter. We've introduced the Image Detailer for enhancing our output. You can easily download this custom node from ComfyUI Manager; it is available in both the ComfyUI Impact Pack and the ComfyUI Inspire Pack. The tutorial covers settings for Stable Diffusion 1.5 with the LCM LoRA model, with particular emphasis on the choice of checkpoint model.

About This ComfyUI Workflow :

The workflow showcases three different video outcomes, each with varying clarity and focus on facial and overall character details. A key improvement is the addition of DW Pose as an alternative to OpenPose in the ControlNet sections, addressing issues of hanging and freezing. DW Pose offers a faster and more responsive process, contributing to quicker loading times. The tutorial also demonstrates using the Detailer for both face and person detection, providing enhanced color, sharpness, and character optimization.

Lastly, the tutorial highlights the use of IPAdapter with DreamShaper iconic-style images, demonstrating how different sampling methods impact results. DW Pose's detailed finger and facial recognition capabilities are showcased, emphasizing its advantages over traditional OpenPose. The workflow update, along with the necessary files, will be made available to our Patreon community and on the OpenArt contents page.

Download and elevate your animation video!

If you like tutorials like this, you can support our work on Patreon:

#stablediffusion #comfyui #animatediff #comfyuiworkflow #aianimation
Comments

Giveaway: I am sharing this workflow on OpenArt; I believe my Patreon supporters won't mind.
Because Stable Diffusion is such an amazing open-source community, I hope this workflow helps everyone make a good vid2vid animation without question.

TheFutureThinker

Could you please give me the link to download CLIP Vision?

putragloh

When I run the images through the Detailer, I get a lot of flickering where the detector was set. Meaning, if I set it to face, I get a lot of flickering around the face area, and when I choose "person", it flickers around the person. How can I avoid that? I tried using different samplers, more steps, fewer steps, changing the CFG, changing the scheduler, but nothing seems to help.

It also seems like the inpainting for the Detailer is in a fixed position. Meaning, if the person moves up or down, the inpainting stays in the same place and doesn't change for each image.

jonaskilian

How do I get close to the clothes from the IP Adapter's reference photo?

aifreeart

Wow, that's great! A big improvement over the last one.

kalakala

Benji you are killing it with your fantastic content. Keep up the good work

DailyProg

How do you make the talking person animation + sound?

soulxslayerchan

Great video! Is there a way to use an image as a background for the video? A static background.

dejuak

Damn man, where were you before! So glad I found your channel!

VirtualRealityStudio

Any tips on how you keep a consistent background, or are you overlaying the animation on a background afterwards?

thomasmiller

Hi Benji, thanks for sharing your workflow, which allows me to create such amazing animations. I did not think it would be possible to achieve this kind of result with only 8 GB of VRAM. I'm really impressed and intend to use it more.

RhovanArahael

Is it possible to apply SDXL Turbo to these workflows, to speed up render times?

SeanieinLombok

The import of the ReActor node fails every time. Do you know what I can do?

KILLEGAH

Amazing work!! Does this workflow have a face-swap element? It would be perfect for me; can't wait to try this.

sudabadri

Great tutorial, as always!

Did you try the new "unfold_batch" setting in IPAdapter?

It's supposed to make the video more consistent.

donutsprinkles

I've been using this for a while. It's working with AnimateDiff v3, and it seems a bit better than TemporalDiff. I've added the new Domain Adapter LoRA just before the LCM LoRA; it's working, but I have yet to see if it improves anything.

Ethan_Fel

I've been working a lot on this, especially on increasing the length of the video (targeting a 1024x1024 3-minute video at the moment). I had a lot of RAM issues loading a huge number of frames with the Load Video node.

Instead, I'm using Load Images from Advanced ControlNet (marked deprecated at the moment) with an image sequence instead of the video.
Both RAM and VRAM consumption are down: around 1.5 GB less VRAM (16.5 down to 15) and 34-37 GB of RAM for 1024 frames, instead of saturating 64 GB plus 60 GB of swap, haha.

I have yet to try Load Image List From Dir (Inspire), since it's not deprecated.

It's also useful for the Detailer; I've split the process in two, so I can more easily try to find better Detailer settings.

ComfyUI could probably do the video-to-image-sequence step itself with the Load Video and Save Images nodes.

Ethan_Fel
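Converting the clip to an image sequence outside ComfyUI, as described above, can be done with a one-liner. A minimal sketch, assuming ffmpeg is installed; the names input.mp4 and frames/ are illustrative:

```shell
# Extract every frame of input.mp4 into a zero-padded PNG sequence
# that a directory-based loader (e.g. Load Images) can read in order.
mkdir -p frames
ffmpeg -y -i input.mp4 frames/frame_%05d.png
```

Adding `-r` before the output (e.g. `-r 8`) resamples to a lower frame rate first, which is a common way to cut the frame count further.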

The new workflow does not load with the faceswap ReActor node after updating everything =( Do you know what the problem could be, mate?

djalanleal

SyntaxError: Unexpected token I in JSON at position 4

tatnapopova
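A browser-style "Unexpected token ... in JSON" error like the one above usually means the loaded workflow file is not valid JSON at all, for example an HTML error page saved in place of the real workflow. A minimal diagnostic sketch; the in-memory string is a hypothetical stand-in for such a file:

```python
import json

# Hypothetical stand-in for a broken "workflow.json": an HTML error
# page saved by the browser instead of the actual workflow file.
not_json = "<!DOCTYPE html><html><body>Internal error</body></html>"

try:
    json.loads(not_json)
    print("valid JSON")
except json.JSONDecodeError as exc:
    # Python reports the failing offset much like the browser's
    # "Unexpected token ... in JSON at position N" message.
    print(f"not valid JSON: {exc.msg} at position {exc.pos}")
```

Re-downloading the workflow file (and checking that it starts with `{`) is usually the fix.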