Mastering Video-to-Video with Stable Diffusion in ComfyUI (Without Node Skills)

Creating incredible GIF animations is possible with AnimateDiff and ControlNet in ComfyUI. Unleash your creativity by learning how to use this powerful Stable Diffusion setup for video-to-video without getting tangled up in the node graph.
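Side note, not covered in the video: if you would rather script the process than click through the graph, below is a minimal sketch of queuing a workflow through ComfyUI's local HTTP API. It assumes ComfyUI is running on the default port 8188, that your video-to-video graph was exported with "Save (API Format)" as workflow_api.json (a placeholder name), and that the required AnimateDiff/ControlNet custom nodes are already installed.

import json
import urllib.request

# Load a graph exported from ComfyUI via "Save (API Format)".
# "workflow_api.json" is a placeholder; point it at your own export.
with open("workflow_api.json", "r", encoding="utf-8") as f:
    workflow = json.load(f)

# Queue the graph on a locally running ComfyUI instance (default port 8188).
payload = json.dumps({"prompt": workflow}).encode("utf-8")
request = urllib.request.Request(
    "http://127.0.0.1:8188/prompt",
    data=payload,
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(request) as response:
    # The server replies with JSON that includes a prompt_id for the queued job.
    print(response.read().decode("utf-8"))

The resulting frames or GIF land in ComfyUI's output folder, just as when you press Queue Prompt in the browser.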

Comments

Insane!
Thanks for sharing your knowledge 🙏🏻

budygang

Thanks for sharing and also giving us all the links in the description. Subscribed!

BlenderBob

Thank you for the video. Could you do a 20-30 second video?

벤치마킹-fz

Thank you so much for the tutorial! I thought it would take me weeks to figure this out. I wish you good luck in everything!

IS-dqfw

Hey, I followed all the steps and I don't have any errors. When I click Queue Prompt it runs through the whole process, but in the end there is no final image.

kon

Thanks for the workflow.
Won't adding an LCM LoRA to the workflow speed up the generation a lot?

satyajitroutray

Thank you very much for your workflow and careful explanation. I decided to give it a try.

zhiwei

Thank you so much for the content.
Is there any way to change the theme of the video without changing the appearance of the face?

harithagamagedara

I tried the method and I was amazed by its stability.
The main reason I stopped making videos was the flickering issue, so I guess I'm back again :)
But I feel this method works more like a filter than actual drawing. Are there any nodes that could be added so it can be used in more applications,
for example changing a person's look and clothes, etc.?

gamalfarag

Hi, thank you for your tutorial! Everything is clear, but I have an error with the KSampler. Can you help me with that?

i_free_man

Thanks man, you explained it very well. I was so confused.

Dwoz_Bgmi

Hey, is it OK if I don't have the width and height node in ComfyUI? When I loaded the workflow, they were missing.

itssannabelle

Great explanation. The only problem is that the workflow link is not valid anymore. Could you please re-upload your workflow? Thanks a lot!

logman

Thank you so much! Sorry, I'm new to ComfyUI. For some reason my generation stops at the AnimateDiff node: it turns green and then it just stops.

bubblerlek

Thanks. I ran it, but I keep getting this message "Error occurred when executing LoadImage". What should I do?

lilillllii

Do you have a tutorial on how to train Stable Diffusion to generate videos similar to the video you give it as a source? TY

ADELTUF

What should I do if there are red lines around a node? They appeared after I queued the prompt.

discountcode

How do I solve this problem?

Error occurred when executing VAEEncode:

'VAE' object has no attribute 'vae_dtype'

File "D:\AI\ComfyUI_windows_portable\ComfyUI\execution.py", line 151, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)

File "D:\AI\ComfyUI_windows_portable\ComfyUI\execution.py", line 81, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)

File "D:\AI\ComfyUI_windows_portable\ComfyUI\execution.py", line 74, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))

File "D:\AI\ComfyUI_windows_portable\ComfyUI\nodes.py", line 296, in encode
t = vae.encode(pixels[:, :, :, :3])

File "D:\AI\ComfyUI_windows_portable\ComfyUI\comfy\sd.py", line 331, in encode
memory_used = self.memory_used_encode(pixel_samples.shape, self.vae_dtype)

elurudhfm

Hey, please help! The lineart model is not showing at the link.

Dwoz_Bgmi

What is the VAE, and what should it be used for?
What is the Animate Path node doing? How do we know which of the models "is necessary"?

This is unfollowable because you aren't covering the basics.

Bitcoin_Baron