IC Light Changer For Videos With AnimateDiff and ComfyUI

IC-Light changer for videos using ComfyUI and AnimateDiff

I post early access and other workflows like this here

_______________________________________________________________________________________

INSTALLATION:

Custom Nodes:

ControlNet Models:
Target Location: ComfyUI\models\controlnet

IC Model:
Put in ComfyUI\models\unet

AnimateDiff Motion Module:
Location: ComfyUI\models\animatediff_models

Other Models:
FaceDetect: [auto-downloaded during install]

SAM location: ComfyUI/models/sams
FaceDetect location: ComfyUI\models\mmdets\bbox
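The folder layout above can be sketched as a few shell commands. This is a minimal sketch: the paths follow the description, but the `COMFYUI` variable (a local ComfyUI checkout) is an assumption, and no model filenames are implied.

```shell
# Minimal sketch of the model folder layout described above.
# COMFYUI is assumed to point at your local ComfyUI install.
COMFYUI=./ComfyUI

mkdir -p "$COMFYUI/models/controlnet"          # ControlNet models
mkdir -p "$COMFYUI/models/unet"                # IC-Light model
mkdir -p "$COMFYUI/models/animatediff_models"  # AnimateDiff motion module
mkdir -p "$COMFYUI/models/sams"                # SAM model
mkdir -p "$COMFYUI/models/mmdets/bbox"         # FaceDetect bbox model (auto-downloaded)
```

Drop each downloaded file into its matching folder and restart ComfyUI so the loader nodes pick them up.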

__________________ (Sources) ______________
Some are unknown...

___________________________________________________________________________________________
Timestamps:

0:00 - Renders
0:41 - Preview
1:00 - Installation
1:49 - IC Workflow - Part 1 - Vid2Vid
7:43 - IC Workflow - Part 2 - Passes
9:22 - Light Maps
11:00 - IC Workflow - Part 2 - Img2Img
14:30 - Credits

________________________________________________________________________________________
Music Credits:

_____________________________________________________________________________________

SEO:

IC Light
Light Changer comfyui
relight comfyui
Relighting video
Animatediff control net
Animatediff animation
Stable Diffusion animation
comfyui animation
animatediff webui
animatediff controlnet
animatediff github
animatediff stable diffusion
Controlnet animation
how to use animatediff
animation with animate diff comfyui
how to animate in comfy ui
animatediff prompt travel
animate diff prompt travel cli
prompt travel stable diffusion
animatediff comfyui video to video
animatediff comfyui google colab
animatediff comfyui tutorial
animatediff comfyui install
animatediff comfyui img2img
animatediff vid2vid comfyui
video 2 video
video to video
comfyui-animatediff-evolved
animatediff controlnet animation in comfyui
-------------------------------------------------------------------
Comments:

Superb and clean tutorial. Also the attention to sharing all the links and files is THE BEST.

piorewrzece

This is awesome and very detailed, it saved me a lot of trouble, thumbs up and thanks for your hard work.

张辰-ro

omg it works! Such a complex process, but very well organized and it actually works! Thank you!

BuckwheatV

Thank you. I am trying this one. I have a question: whenever I run each batch (50), a slight difference occurs. Is there any way to avoid this difference?

sam-ssrn

Error
Motion module 'motionModel_v01.ckpt' is intended for SD1.5 models, but the provided model is type SDXL.

ZainSarwar

Bro, can you make a LivePortrait + vid-to-vid workflow? It would be an awesome tutorial.

davimak

Is it possible to change the lighting without changing the main subject in the video? It creates too many deformities, and it's not really usable for professional work.

tlevin

Help. I'm getting the error: "!!! Exception during processing!!! Allocation on device"

poptree

Help! First off, thank you so much for the tutorial. I can tell you put a lot of effort into not only the project itself, but also the resources for sharing this with us. I got everything set up and working correctly and ran a few quick generations to make sure all the models were installed. I then updated my ControlNet custom nodes and now, even when I revert to your original workflow, I get the error: "Error occurred when executing ACN_AdvancedControlNetApply: ControlBase.set_cond_hint() takes from 2 to 4 positional arguments but 5 were given" - any ideas? Thanks!

calvinherbst

Can you use this workflow to create a video featuring a specific anime character?

MsParkjinwan

I'm not very familiar with IC-Light; can it be used with LCM?

eqgevmy

"Rebatch" doesn't work when loading long videos. "Load Video (VHS)" still loads all frames into RAM and then runs out of memory. I have tried "Meta Batch Manager" with "Load Video (VHS)" and "Video Combine (VHS)", which only generated discontinuous scenes. By the way, I have 32 GB of RAM, which can only load 20~24 frames to process. I'm still figuring out how to generate long videos.

rosederrick

Is there a way to adjust the strength of checkpoints on this node? I can't find denoising strength in KSampler 😭😭

rctllcl

Hi, I would love to make this workflow work for me, but I have a couple of problems. The output is heavily altered and looks really trippy when I simply input a video with your settings, disable all LoRAs at the start, and press queue. There are no errors, but the output is nothing at all like the source footage. Also, with load cap set to 10, it outputs only 5 frames?

salomahal

I don't get why the file output node has a # symbol. Can I change it to a normal save path?

JosefK

Hi, when I started the render, ComfyUI showed me an error saying "The size of tensor a (20) must match the size of tensor b (10) at non-singleton dimension 0". I used ChatGPT to fix it, and GPT kept trying to fix the execution.py code, which didn't work at all. Have you had this kind of issue before? If you know how to fix it, I would really appreciate it. Thanks for sharing.

Bemyself

The background changes too much even when it's off. I am not using a girl but a tennis shoe (I bypassed the face-fix nodes); could that be the reason?

JosefK

I want to ask: where are the light source materials from?

emqqnvl

If you don't mind my asking, how much VRAM does this use?

ESGamingCentral

It's so cool.
However, the IC Raw KSampler is hitting an error:
"KSamplerAdvanced:
The size of tensor a (20) must match the size of tensor b (10) at non-singleton dimension 0"

How can I solve it?

byeongmokjang