ComfyUI Infinite zoom step by step guide

Welcome to this video tutorial where I take you on a step-by-step journey into creating an infinite zoom effect using ComfyUI. This tutorial leverages the Impact Pack extension and the ControlNet inpaint feature, demonstrating the process in an easy-to-follow manner. Additionally, we'll explore how to convert the images into a smooth movie using ZoomVideoComposer. Whether you're a seasoned user or new to these tools, this tutorial offers insights to help improve your skills. Don't forget to subscribe for more tips, techniques, and tutorials!
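For readers who want the gist of what each outpainting pass does, here is a rough Python/Pillow sketch of the idea, separate from the actual ComfyUI workflow: shrink the previous frame, paste it in the centre of a fresh canvas, and mark the surrounding border as the region the inpaint model should fill. The function name, zoom factor, and fill colour below are illustrative assumptions, not values taken from the video; in ComfyUI this is handled by the Impact Pack and ControlNet inpaint nodes.

from PIL import Image

def prepare_outpaint_pass(prev_frame: Image.Image, zoom: float = 2.0):
    # Shrink the previous frame so it occupies only the centre of the canvas.
    w, h = prev_frame.size
    inner = prev_frame.resize((int(w / zoom), int(h / zoom)), Image.LANCZOS)

    # Grey canvas = pixels the inpaint model will replace; white mask = "fill here".
    canvas = Image.new("RGB", (w, h), "gray")
    mask = Image.new("L", (w, h), 255)

    left, top = (w - inner.width) // 2, (h - inner.height) // 2
    canvas.paste(inner, (left, top))
    mask.paste(0, (left, top, left + inner.width, top + inner.height))
    return canvas, mask  # hand these to the inpainting step, then repeat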

#stablediffusion #comfyui #ai #zoom #infinitezoom
Comments

Thank you very much. Your content is very valuable!
I was just looking for an outpainting workflow, but I got much more.

pim

Very informative and interesting. Thanks man!

erdmanai

As with all your videos this is super helpful! You are covering topics that no one else has shown! Thanks for that! I am working on a workflow that is very complicated and will use some of these concepts. My workflow copies several of the concepts from the Deforum extension in A1111. I am using a sequence loader that can output a frame number. What I wish to do is to automatically change the prompt during batch processing based on the frame number. I wonder if you have thought about this and have any ideas! Thanks in advance for any response.

AIMusicExperiment

Thank you, now I can place a sofa in space. Seriously, very good tutorial! Is there an inpaint ControlNet model for SDXL in the meantime?

Radarhacke

KSampler (Efficient): mat1 and mat2 shapes cannot be multiplied (154x2048 and 768x320). I always get this error even though I use the same values as you. Why?

DigitalAI_Francky

Just to save people's time: this is basically broken if you are using SDXL, or so it seems to me; the inpaint is incompatible and you get an error. There is also a whole host of extra modules you need.

Geffers

I'd love to see a Flux version of this!

marcdevinci

Hi! Great video, you explain everything really well, but I am getting stuck at 9:14, just when you generate the second image. I am very new to Stable Diffusion so I might be missing something. What I think happens is that the inpaint is not working properly, because I get a cropped image, but instead of having the new generated content around the cropped image I get nothing, just a grey rectangle and nothing else. Can anyone please help?

kateavisan

Great job, I'm new to this and don't know how to do the nodes. Did you upload the JSON file?

renegabrielpomacondori

To get good results, I needed to change the empty latent width/height to 1024 (it will generate 1024x1024 output, and ZoomVideoComposer seems to prefer this) and set feathering to 0. I would like to know how to handle feathering vs. ZoomVideoComposer properly... (I also use -m 0 with ZoomVideoComposer.)

checksummaster

Thanks for sharing this. I'm not so lucky in blending the squares. They are still pretty visible, and the original square seems to get darker after every outpainting pass. Any tips on how to improve this? The feathering doesn't fix it for me.

elowine
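Several comments here ask about the visible square seams and what the feathering setting is supposed to do. As a generic illustration only (not the exact behaviour of the ComfyUI feathering parameter or of ZoomVideoComposer's -m margin), blending the previous frame back over the outpainted result through a blurred mask is what removes the hard edge; with no feather at all, the paste boundary stays visible. A minimal Pillow sketch, with an assumed feather radius:

from PIL import Image, ImageFilter

def feather_blend(outpainted: Image.Image, with_original: Image.Image,
                  box, feather: int = 32) -> Image.Image:
    # `with_original` is the same canvas but with the untouched previous frame
    # still inside `box`; keep those pixels, fading them out near the edge.
    mask = Image.new("L", outpainted.size, 0)
    mask.paste(255, box)                                   # white = keep original pixels
    mask = mask.filter(ImageFilter.GaussianBlur(feather))  # soften the seam
    return Image.composite(with_original, outpainted, mask)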

The images I create in ComfyUI are free of the hard square frame, but when processed through the zoom composer, the square is dominant. Any ideas?

wronglatitude

I am following the instructions but getting this error message: Error occurred when executing Efficient Loader:

not enough values to unpack (expected 4, got 3)

File "C:\AI\ComfyUI_windows_portable\ComfyUI\execution.py", line 151, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
File "C:\AI\ComfyUI_windows_portable\ComfyUI\execution.py", line 81, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
File "C:\AI\ComfyUI_windows_portable\ComfyUI\execution.py", line 74, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
File "C:\AI\ComfyUI_windows_portable\ComfyUI\custom_nodes\efficiency-nodes-comfyui-main\efficiency_nodes.py", line 131, in efficientloader
vae_cache, ckpt_cache, lora_cache, refn_cache = get_cache_numbers("Efficient Loader")

PedramShokati
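For context on the traceback above: "not enough values to unpack (expected 4, got 3)" is a plain Python unpacking mismatch; the caller expects four return values but the helper only returns three, which often means the installed efficiency-nodes files do not match the version the node expects (for example a partially updated install). A minimal stand-in, with a made-up stub name, reproduces the same error:

def get_cache_numbers_stub(node_name: str):
    # Stand-in for the real get_cache_numbers(); imagine an older version
    # that still returns only three cache sizes.
    return 1, 1, 1

# The caller unpacks four names, so this raises:
# ValueError: not enough values to unpack (expected 4, got 3)
vae_cache, ckpt_cache, lora_cache, refn_cache = get_cache_numbers_stub("Efficient Loader")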

The squares are very seamless in Comfy, however the video composer shows hard square lines... I played with the -m margin float but that hasn't fixed it. Do you have any suggestions I could try to fix this?

mkthedoctor

Where did you get the InpaintPreprocessor node?

MD-n

I'm not getting the iterative update of the image generation in the KSampler(Efficient) like you are when enabling preview_image. It only shows up when the generation is complete. Do you have some other plugin installed for that?

realthing

Awesome work, but I can't find most of those nodes in my ComfyUI. I am not sure whether they need to be installed separately.

PedramShokati

Thanks for sharing! I noticed that when you generate the image, the preview also shows the results during each step. Can you tell me how to do that?

kenlinkenlin

I've run it successfully, but the final video is not perfect. I mean, the images generated by ComfyUI are not seamless and don't render well when merged together; it's very obvious, since you can see the borders of every image when zooming out and zooming in.

frankchieng

Had a bit of trouble with the ImageSender/Receiver, and I have 4GB VRAM, so it took 10 minutes per frame! Worth the wait.

Satscape