AnimateDiff Tutorial: Turn Videos to A.I Animation | IPAdapter x ComfyUI


Transform your videos into anything you can imagine.

⚙️Settings Files:

Models:

➕Positive Prompt: ((masterpiece, best quality)), Origami young man, folding sculpture, wearing green origami shirt, blue origami jeans, white origami shoes, depth of field, detailed, sharp, 8k resolution, very detailed, cinematic lighting, trending on artstation, hyperdetailed
➖Negative Prompt: (bad quality, worst quality), NSFW, nude, text, watermark, low quality, medium quality, blurry, censored, wrinkles, deformed, mutated

🔗 Software & Plugins:

©️ Credits:
Stock videos from @PexelsPhotos

⏲ Chapters:
0:00 Intro
0:24 Install ComfyUI
1:31 Base Workflow
1:54 Install missing nodes
2:22 Models
4:23 Settings
10:36 Animation outputs

Support me on Patreon:

🎵 Where I get my Music:

🎤 My Microphone:

🔈 Join my Discord server:

Join me!

hashtags...

Who am I?
-----------------------------------------
My name is Mohamed Mehrez and I create videos around visual effects and filmmaking techniques. I currently focus on making tutorials in the areas of digital art, visual effects, and incorporating AI in creative projects.
Comments


I've added some solutions and tips below; the community is also very helpful, so don't be shy to ask for help 😉

MDMZ

Fixes for current version - June 2024:

IPAdapter won't load from the 'ComfyUI_IPAdapter_plus' folder => go to the 'ComfyUI\models' folder, add a folder named 'IPAdapter', and place the IPAdapter Plus models there. (Now the loader can find the IPAdapter; a scripted version of this fix is sketched below.)

Next, you'll still have two red nodes in the video reference and keyframe groups. Replace them with 'IPAdapter Advanced' nodes (double-click the canvas to search for nodes), connect the links to these new nodes, then remove the old ones. Make sure all connections match those on the broken nodes.
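A minimal Python sketch of the folder fix above, for anyone who prefers to script it. The ComfyUI install path and the source folder are assumptions; adjust them to your setup (the lowercase 'ipadapter' folder name follows the path another commenter posted further down).

from pathlib import Path
import shutil

comfy = Path(r"C:\ComfyUI")  # assumption: your ComfyUI install directory
src = comfy / "custom_nodes" / "ComfyUI_IPAdapter_plus" / "models"  # assumed old location
dst = comfy / "models" / "ipadapter"  # the folder the loader scans
dst.mkdir(parents=True, exist_ok=True)  # create it if it doesn't exist yet

# move every IPAdapter model file into the new folder
for f in src.glob("*.safetensors"):
    shutil.move(str(f), str(dst / f.name))
    print("moved", f.name)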

SarovokTheFallen

Stuck on KSampler for more than 2 hours. Using the second workflow for the IPAdapter problems. Windows 11, i7-11800H, 16GB RAM, RTX 3070. The terminal shows "loading on lowvram mode 64.0." I already switched to pytorch_model.fp16 and reduced the original video to 10s at 720x1280px, 25fps.
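A minimal sketch of the "reduce the original video resolution" step using OpenCV (my tooling assumption, not something from the video): rescale the clip before it goes into the Load Video node, so KSampler works on smaller latents.

import cv2

cap = cv2.VideoCapture("input.mp4")  # hypothetical source clip
fps = cap.get(cv2.CAP_PROP_FPS) or 25.0  # fall back to 25 fps if unreadable
out = cv2.VideoWriter("input_720x1280.mp4",
                      cv2.VideoWriter_fourcc(*"mp4v"), fps, (720, 1280))
while True:
    ok, frame = cap.read()
    if not ok:
        break
    out.write(cv2.resize(frame, (720, 1280)))  # fewer pixels, less VRAM pressure
cap.release()
out.release()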

rafaelfreixedelo

Thank you for your clean and helpful video. I tried to run this on my local machine, but unfortunately I don't have enough VRAM. Do you have any recommendations for a cloud service?

sylvansheen

Hi, I've finally managed to get this to work after countless hours of tweaking. I was wondering, is there any way to randomize the art style and model throughout the video generation? Like your Luma + WarpFusion AI video?

MPandini

One idea to improve the video background: try removing the background first, then apply a dedicated node only for background generation to avoid flickering. If you see flickering on hands, you can create a bounding box that stylizes only the hands and use any hand-detailer tool (a LoRA or a node). A rough sketch of the background-removal step follows below.
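A rough sketch of that background-removal step, using the third-party rembg library (my assumption; the tutorial itself doesn't use it), applied to the extracted frames before restyling:

from pathlib import Path
from PIL import Image
from rembg import remove  # pip install rembg

out_dir = Path("frames_fg")  # hypothetical output folder
out_dir.mkdir(exist_ok=True)
for frame in sorted(Path("frames").glob("*.png")):  # hypothetical input frames
    fg = remove(Image.open(frame))  # RGBA result: subject kept, background cleared
    fg.save(out_dir / frame.name)

The cleared frames can then be stylized on their own, with a separately generated background composited back in afterwards.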

KodandocomFaria

I want to try this, thanks man, love your channel

SinnerSam_

Hey, I have an RTX 3080 10GB and I'm stuck on:
Requested to load AnimateDiffModel
Loading 2 new models
0%| | 0/24 [00:00<?, ?it/s]

sanjitfx

I found that the IPAdapter loader was not working properly because it was not finding the path. I solved it by specifying the path below, in case anyone else is having trouble with this issue.

ComfyUI / models / ipadapter /

You will need to manually create an ipadapter folder under the models folder.

doi

Thanks for the video! Most creators forget to show which models they used and where to put them in the ComfyUI folder. This step-by-step video helped a lot.

MaximusProxi

The workflow does not work. I've been trying all afternoon. No matter the values, I always get exactly the same video out of the sampler, with no change from the prompt. 😥

FranckSitbon

When loading the graph, the following node types were not found:
IPAdapterApply
Nodes that have failed to load will show as red on the graph.

rayzerfantasy

Thanks for your great tutorial. Is there any limit on frame rendering? I used your workflow on a 32-second video (about 30 fps, ~1000 PNGs) and got this error after 1 hour of render time on my 3090 Ti: Unable to allocate 6.43 GiB for an array with shape (976, 1024, 576, 3) and data type float32
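The 6.43 GiB figure matches the shape in the error message: the workflow tries to hold all 976 frames in one float32 buffer. A quick check in Python:

frames, height, width, channels = 976, 1024, 576, 3
bytes_needed = frames * height * width * channels * 4  # float32 = 4 bytes per value
print(bytes_needed / 2**30)  # ~6.43 GiB, exactly what the error reports

So the bottleneck is system RAM for one giant frame array; rendering the clip in shorter segments shrinks that single allocation.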

ehsankholghi

KSampler stays at 33%. Although I waited 4.5 hours, it still did not finish at 30 steps; I tried 25, and it was the same again. Last time I ran it at 9 steps, it also stayed at 33%. Is there a solution?
System: Ryzen 5 3600 / GTX 1070 Ti 8GB / 16GB 3200MHz RAM / 500GB SSD

OkanSoyluu

The future of art... downloading the newest hard-to-find files.

Thanks for the tutorial, it was helpful; I don't know how I would have figured out all those steps on my own.

florentraffray

I get this error and I don't know how to solve it: 'T2IAdapter' object has no attribute 'compression_ratio'

julianmartinezvfx

Do you need a video card for this, or can it run on Google Colab? Thank you!

andrestamashiro

I tested it on an RTX 3070 with 8GB VRAM and 32GB RAM; it took 26 hours for a 12-second video 😅😅

AI-nsanity

Can anybody tell me what this is and how I can get rid of it?

When loading the graph, the following node types were not found:
IPAdapterApply
Nodes that have failed to load will show as red on the graph.

ShubhamChaurasia-tjbj

Bro, I was getting a MiDaS depth map preprocessor error, please help me solve that problem.

krishnabhardwaj