ControlNet for SDXL 1.0! Master Your Stable Diffusion XL 1.0 Outputs with ComfyUI: A Tutorial

Introducing ControlNet Canny support for SDXL 1.0, especially invaluable for architectural design! Dive into this tutorial where I'll guide you through harnessing ControlNet to craft AI images with SDXL 1.0 in ComfyUI. ControlNet lets you use an input image to direct the pose, composition, and other facets of the resulting Stable Diffusion output in sync with your text prompt. For architectural design enthusiasts using Stable Diffusion, this is a game-changer. While official support for ControlNet within Stable Diffusion XL 1.0 is on the horizon, this video serves as your go-to resource for integrating and leveraging ControlNet Canny with SDXL 1.0 in ComfyUI.
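
For readers who want to see what the Canny preprocessing step actually produces, here is a minimal offline sketch of the same idea, assuming Python with OpenCV installed; the file names and thresholds are placeholders, not values from the video. In the workflow itself the edge map can be generated by a Canny preprocessor node inside ComfyUI, but it is the same kind of image, and it is what gets routed into the Apply ControlNet node alongside your text prompt.

import cv2

# Read the reference image (e.g. an architectural render, sketch, or site photo).
image = cv2.imread("input.png", cv2.IMREAD_GRAYSCALE)

# Extract Canny edges; the two thresholds control how much fine detail survives.
edges = cv2.Canny(image, 100, 200)

# Save the edge map; this is the conditioning image the ControlNet follows.
cv2.imwrite("canny_input.png", edges)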

#controlnet #comfyui #stablediffusion #ai
Comments

You're awesome dude, thanks so much for these non-stop tutorial videos!

Moonajuana

Thanks for the detailed tutorial. It's very useful for me.

andriiB_UA

Big thanks for that, clear and to the point.

backmanback

Can you please tell me how to get that live image generation preview in the KSampler?

flamescales

Hi Arch ai 3D. Is there a way to consistently create multiple angles of an architectural work in ComfyUI with Stable Diffusion?

nguyenhuythach

The live image generation progress preview in the KSampler (Efficient) node has very bad quality for me. Please tell me how to fix it.

flamescales

Any solution for this error?

Error occurred when executing KSampler:

Allocation on device 0 would exceed allowed memory. (out of memory)
Currently allocated : 7.20 GiB
Requested : 40.00 MiB
Device limit : 8.00 GiB
Free (according to CUDA): 0 bytes
PyTorch limit (set by user-supplied memory fraction) : 17179869184.00 GiB

Although it works normally without ControlNet.

zyflpyr

Can we somehow use the result of the ControlNet without running the preprocessor every time? I mean the way it is implemented in Automatic1111: the first time, on the txt2img tab, we launch generation with the ControlNet enabled and a preprocessor selected; then we remove the preprocessor and leave the ControlNet enabled, and each subsequent generation reuses the result of that very first run to build the image composition.

modzha
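
On the question above about reusing the preprocessor result: in ComfyUI the Apply ControlNet node just takes an image input, so an edge map that was saved once (for example with a SaveImage node, or offline) can be fed back in through a LoadImage node on later runs, skipping the Canny step entirely, which is close to the Automatic1111 behaviour described. Below is a minimal caching sketch, again assuming OpenCV and using hypothetical file names.

import os
import cv2

EDGE_MAP = "canny_cached.png"  # hypothetical cache file

# Run the Canny preprocessing only once; later generations reuse the saved map.
if not os.path.exists(EDGE_MAP):
    source = cv2.imread("input.png", cv2.IMREAD_GRAYSCALE)
    cv2.imwrite(EDGE_MAP, cv2.Canny(source, 100, 200))

# From here on, point a LoadImage node at EDGE_MAP instead of re-running the
# preprocessor, and wire it straight into Apply ControlNet.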

Thank you so much. I followed the instructions and got this error: Error occurred when executing ControlNetLoader:

module 'comfy.sd' has no attribute 'ModelPatcher'

File "F:\SDXL\ComfyUI_windows_portable\ComfyUI\execution.py", line 151, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
File "F:\SDXL\ComfyUI_windows_portable\ComfyUI\execution.py", line 81, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
File "F:\SDXL\ComfyUI_windows_portable\ComfyUI\execution.py", line 74, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
File "F:\SDXL\ComfyUI_windows_portable\ComfyUI\nodes.py", line 577, in load_controlnet
controlnet =
File "F:\SDXL\ComfyUI_windows_portable\ComfyUI\comfy\controlnet.py", line 394, in load_controlnet
control = ControlNet(control_model,
File "F:\SDXL\ComfyUI_windows_portable\ComfyUI\custom_nodes\AIT\AITemplate\AITemplate.py", line 347, in __init__
self.control_model_wrapped = comfy.sd.ModelPatcher(self.control_model, load_device=comfy.model_management.get_torch_device(),

Any suggestions? Thank you.

rexforwood

Brilliant stuff. It might also be useful to supply the workflow you used/made via Pastebin, a PNG, etc. Thanks, keep up the great work! ;)

PeteShakur

Is there going to be just one ControlNet file for SDXL? I'm confused by the naming, since for SD 1.5 there were different ControlNet models for each task, e.g. control_seg-fp16.safetensors, and there was a .yaml file for each model too.

EHUTB

Very clear as always, thanks.
I would like to ask you: do you really think SDXL is worth it currently vs the well-optimized SD 1.5 checkpoints? I don't see big quality differences to be honest.

Enricii

Can you make a video showing interior design without changing the geometric structure of an empty room?

ufukkaynar

My computer spends a lot of time waiting during these processes. Is there really no solution to this? Can you make a video about it?

haliskurguyan

This really helped Comfy finally click for me.

ronnykhalil

Is 3 hours of render time too much for an NVIDIA 1070? Am I missing some settings?

phatsua

Does this work with Auto1111 for those of us who are Uncompfy?

polystormstudio

As soon as I see the ComfyUI interface, I know that I want no part of it. It's the opposite of comfy.

marcus_ohreallyus