Image to Video with Nvidia Cosmos in ComfyUI!

Nvidia Cosmos has a set of "world" models designed to generate synthetic data for training... but it can do so much more than that, such as animating images and even image interpolation! Oh, and it can generate rodents too.

Run it at home, for free, on your own computer with just 12GB VRAM. ~7 minutes for a 704x704, 121-frame video :)
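As a quick sanity check of the clip length quoted above, assuming Cosmos outputs at 24 fps (an assumption; the frame rate is not stated here), 121 frames works out to about five seconds of video:

```python
# Clip-length sanity check; 24 fps is an assumed output rate, not stated above
frames, fps = 121, 24
seconds = frames / fps
assert round(seconds, 2) == 5.04   # roughly 5 seconds of video
```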

Want to support the channel? -

== Links ==

== Beginners Guides! ==

== Flux.1 in ComfyUI ==
== Comments ==

This is something that I have been saying from the start: pure learning from videos or images won't do; modeling a 3D world, now that's where it's at.

ImmacHn

Hey man, love your work; also your voice is amazing and calming.

CGFUN

This is quite amazing. Keep making more amazing videos on open-source image-to-video models. Amazing times!

TUSHARGOPALKA-njjx

7:45 I've been trying for 3 hours to figure out how you made this workflow. I wish I could download it, or maybe get a better explanation :(

heyselcuk

I'm getting a speed of 423.54s/it with the default resolution and length at 20 steps. This is with 16GB VRAM. Why is it so terribly slow for me?

Statvar

Good job, it works very well on my 3080 10GB (7B model). Nothing crashes due to lack of memory.

РоманСырватка

I can't wait for consumer NPUs to become available in 2-3 years from now, as GPUs are not scaling along with model capabilities, and I don't have much hope for optimizations that will make them viable for real-time local use (video game emulation).

OnigoroshiZero

Could you please create a tutorial on how to run this on Kaggle?

howtowebit

35 minutes for a very artifact-y 5-second render on a 3090 Ti that screams bloody murder the whole time? No, thank you. As long as you're not using it locally, it might be OK.

CHARIOTangler

Hi, I keep getting a KSampler error: "Expected size for first two dimensions of batch2 tensor to be: [154, 768] but got: [154, 1024]." I didn't alter the base workflow at all, so I'm not sure why this is happening.

banzaipiegaming
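A note on the shape error above: the message matches the format of PyTorch's batched-matmul (`torch.bmm`) shape check inside cross-attention, and a 1024 appearing where 768 is expected usually means the loaded text encoder outputs embeddings of the wrong width for the model. A minimal sketch that reproduces the message (every shape except the quoted 154/768/1024 is hypothetical):

```python
import torch

# batch1 is (154, n, 768), so bmm requires batch2 to start (154, 768, ...).
# Keys from an encoder with 1024-dim embeddings trigger the exact complaint:
q = torch.randn(154, 77, 768)     # queries: inner dimension 768
k = torch.randn(154, 1024, 77)    # keys from a 1024-dim encoder: mismatch
err = None
try:
    torch.bmm(q, k)
except RuntimeError as e:
    err = str(e)
assert "[154, 768]" in err and "[154, 1024]" in err
```

If that diagnosis is right, the fix is to point the workflow at the text encoder checkpoint the model actually expects, not to change the sampler.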

Thanks for the video. Now we're learning that Hunyuan is better.

dddodin

The new CaPa paper for mesh generation mentioned it will fit onto ControlNet pretty well. Wonder if that's going to go crazy with 3D printing or not. We only have the code right now, no demos, so it might be an "in a few months" thing.

pauljones

Where do the videos save? I'm getting single PNGs in the default folder but no video. I tried adding the Video Save node, but that doesn't work.

HikingWithCooper

Love your videos so much! Can you make a tutorial video on FlexClip’s AI tools? Really looking forward to that!🥰

Cyrine

I just get the error "not implemented for 'Half'".

amakaqueru
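A note on the 'Half' error above: it generally means a float16 op reached a backend (typically the CPU, e.g. when weights are offloaded) that has no fp16 kernel. A minimal sketch of the usual workaround, upcasting to float32 before the failing op (ComfyUI's `--force-fp32` launch flag does this globally, assuming your build supports it):

```python
import torch

# "not implemented for 'Half'" = a float16 op hit a backend without fp16
# kernels. Upcasting to float32 sidesteps it, since fp32 kernels always exist:
x = torch.randn(8, dtype=torch.float16)  # fp16 tensor (e.g. offloaded to CPU)
y = x.float()                            # upcast before the failing op
assert y.dtype == torch.float32
```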

I've been looking for a solution where we can identify keyframes, automatically select the item/clothing we want to change, then inpaint, and finally interpolate between those frames.

The interpolation part is very interesting to me, I wonder what that would look like with similar keyframes

ControlTheGuh

Everything from Flux to SD has worked on my 4GB Nvidia card since the early days using virtual memory; it's slow, but it works.

synthoelectro

Bummer, for some reason it just crashes my ComfyUI... I have all the models downloaded and everything, using a 3090 Ti. ... Never mind... I forgot I was training a LoRA at the same time lol. Weird that I didn't see any kind of message about running out of memory.

my_username_was_taken

Me and my Mac M2 are feeling like we're missing out.

DapperDuck

Awesome video. The text-to-video worked right out of the gate, but the image-to-video is missing a custom node, cosmosimgtovideolatent, that the node manager does not see.

boythee