Video2Video in ComfyUI [AnimateDiff] - IV

A Cinema4D clones animation of a galloping horse, fed into ComfyUI + SD 1.5.

Using the Unsampling/Resampling technique, IPAdapter, and ControlNet (fed a Sketch & Toon lines render), I experimented with a fairly wide range of output styles from a single input video loop.

The Unsampling/Resampling technique creates frame-by-frame latent noise from the input video, yielding fairly consistent results considering these are all derived from a single generic input video.
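
For intuition, here is a minimal, self-contained sketch of the unsample-then-resample idea using deterministic DDIM steps. The real workflow runs inside ComfyUI with SD 1.5's UNet driven by the prompt, IPAdapter and ControlNet; the eps_model below, the noise schedule and the latent shapes are placeholder assumptions so the arithmetic can actually be run.

# Sketch of unsampling (latent -> structured noise) and resampling
# (structured noise + new conditioning -> stylized latent).
import torch

T = 50                                          # number of DDIM steps
betas = torch.linspace(1e-4, 0.02, T)           # simple linear schedule
alpha_bar = torch.cumprod(1.0 - betas, dim=0)   # cumulative alphas

def eps_model(x, t, cond=None):
    # Placeholder noise predictor. In the actual pipeline this is the
    # SD 1.5 UNet conditioned by the prompt / IPAdapter / ControlNet.
    return 0.1 * torch.tanh(x) + (0.0 if cond is None else 0.05 * cond)

def ddim_step(x, t_from, t_to, cond=None):
    # One deterministic DDIM transition between two timesteps; the same
    # update covers unsampling (t_to > t_from) and resampling (t_to < t_from).
    a_from, a_to = alpha_bar[t_from], alpha_bar[t_to]
    eps = eps_model(x, t_from, cond)
    x0 = (x - (1 - a_from).sqrt() * eps) / a_from.sqrt()  # predicted clean latent
    return a_to.sqrt() * x0 + (1 - a_to).sqrt() * eps

def unsample(latent):
    # Walk the frame's latent up the noise schedule to recover the
    # structured noise that "explains" it.
    x = latent
    for t in range(T - 1):
        x = ddim_step(x, t, t + 1)
    return x

def resample(noise, cond):
    # Denoise the recovered noise again, now with the new style
    # conditioning, so the composition follows the source frame.
    x = noise
    for t in range(T - 1, 0, -1):
        x = ddim_step(x, t, t - 1, cond)
    return x

if __name__ == "__main__":
    frame_latent = torch.randn(1, 4, 64, 64)    # stand-in for a VAE-encoded frame
    style_cond = torch.randn(1, 4, 64, 64)      # stand-in for the new conditioning
    structured_noise = unsample(frame_latent)
    stylized = resample(structured_noise, style_cond)
    print(stylized.shape)                       # torch.Size([1, 4, 64, 64])

Because the noise is derived per frame from the same source video, the composition stays anchored to the input even as the conditioning changes the style.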
Comments

Can you share the workflow? I can't see it properly in the photo.

PraveenKumar-uiwf

Is there any workflow available to try this kind of result? Please share.

mariomaffiol

Hey mate! I built a setup that works pretty much the same as yours. Everything is OK, but I can't get a black background in my generations. I tried it both ways: a static RGB mask with the attention area in red and the background in green, and a color-to-mask node processing a sequence of b/w frames (with the invert parameter and 0 values in all color channels). The object shapes come through correctly, but it still generates color flashes in the background. Using 2 IP-adapters, I also tried a 4x4 px black square as the source image. That helps)) It makes the area darker and desaturated, but it still produces blurred flashes and blurred patterns from the first IP-adapter's source image. I would be very grateful if you could tell me the secret of your black background)). Thank you, and sorry for my English.

litergross
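
For reference, a small sketch of the color-to-mask step described in the comment above: turning a color-coded region image (red = subject / attention area, green = background) into the single-channel black-and-white mask that IPAdapter-style attention masking expects. The file names and the channel-dominance rule are illustrative assumptions, not the exact node settings used in the video.

# Convert a red/green region image into a b/w attention mask.
import numpy as np
from PIL import Image

def color_to_mask(path, invert=False):
    rgb = np.asarray(Image.open(path).convert("RGB")).astype(np.int16)
    r, g = rgb[..., 0], rgb[..., 1]
    mask = (r > g).astype(np.uint8) * 255   # white where red dominates (attention area)
    if invert:
        mask = 255 - mask                    # flip if the background should be white instead
    return Image.fromarray(mask, mode="L")

if __name__ == "__main__":
    color_to_mask("frame_0001_regions.png").save("frame_0001_mask.png")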