ComfyUI: Area Composition, Multi Prompt Workflow Tutorial

UPDATE: Please note: the node is no longer functional in the latest version of ComfyUI (checked on 10th August, 2024). Either I will make a new tutorial or a new node itself, but that will take time. For now you can still use area conditioning within ComfyUI; however, there is no visual aid for the coordinates and placement.
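
Since the visual aid is gone, area coordinates have to be entered by hand. Below is a minimal helper sketch for working them out, assuming the usual ComfyUI convention that area x/y/width/height are given in pixels and kept at multiples of 8 (the latent downscale factor); the function itself is illustrative, not part of ComfyUI:

```python
# Illustrative helper for picking area-conditioning coordinates by hand.
# Assumption: the built-in "Conditioning (Set Area)" node takes x, y,
# width, height in pixels, and values are typically kept at multiples
# of 8 because the latent is 1/8 of the image resolution.

def area_coords(image_w, image_h, left, top, right, bottom, snap=8):
    """Convert fractional placement (0.0-1.0) to snapped pixel coordinates."""
    def snap_to(v):
        return int(round(v / snap)) * snap

    x = snap_to(left * image_w)
    y = snap_to(top * image_h)
    w = snap_to((right - left) * image_w)
    h = snap_to((bottom - top) * image_h)
    return x, y, w, h

# Example: place a subject in the right half of a 1024x1024 canvas.
print(area_coords(1024, 1024, 0.5, 0.0, 1.0, 1.0))  # (512, 0, 512, 1024)
```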

This is a comprehensive tutorial on how to use Area Composition, Multi Prompt, and ControlNet together in ComfyUI for Stable Diffusion. Area Composition lets you assign prompts to different areas of an image, giving you full control over the composition of the image. In this tutorial, I go through four workflow examples explaining how to use Area Composition effectively.
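
To make the idea concrete: conceptually, each area's prompt conditioning influences the model only inside its region, and overlapping regions are blended by strength. A toy NumPy sketch of that blending step, purely illustrative rather than ComfyUI's actual implementation:

```python
import numpy as np

# Toy illustration of area-composition blending: each prompt's denoising
# prediction only contributes inside its own region, weighted by strength.
# Not ComfyUI's real code; shapes and names are made up for clarity.

H, W = 64, 64  # latent resolution (image size // 8)

def region_mask(x, y, w, h, strength=1.0):
    m = np.zeros((H, W), dtype=np.float32)
    m[y:y + h, x:x + w] = strength
    return m

# Pretend per-prompt noise predictions from the model (random stand-ins).
pred_background = np.random.randn(H, W).astype(np.float32)
pred_subject = np.random.randn(H, W).astype(np.float32)

masks = [region_mask(0, 0, W, H, 0.8),     # background covers everything
         region_mask(16, 8, 32, 48, 1.2)]  # subject area, boosted strength
preds = [pred_background, pred_subject]

# Weighted average where regions overlap, so prompts mix at the seams.
weight = sum(masks)
combined = sum(m * p for m, p in zip(masks, preds)) / np.maximum(weight, 1e-6)
```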

------------------------

Relevant Links:

------------------------

Timestamps:

0:00 Intro.
1:00 Custom Nodes.
1:51 Workflow 1.
21:07 Workflow 2.
25:11 Workflow 3.
27:18 Workflow 4.
Comments

Please note: the node is no longer functional in the latest version of ComfyUI (checked on 20th August, 2024). Either I will make a new tutorial or a new node itself, but that will take time. For now you can still use area conditioning within ComfyUI; however, there is no visual aid for the coordinates and placement.

controlaltai

This is a masterclass. Mind-blowing. Joining the members area right now; your content is absolute gold.

gabrielmorod

Great tutorial, easy to follow along.
Well prepared (don't think people didn't notice all the work that went into finding the right seeds for the examples),
and it actually covers more than the main topic.
Great job guys (n_n)b

carlosmeza

Great tutorial, appreciate your time... I learned a lot. The only thing is that the node placement could follow the process order, which would make more sense than packing the nodes into a compact area.

hakandurgut

Does the visual area conditioning custom node no longer exist? I can't find it when searching in my manager...

Unstable_Stories

Thank you for the tutorial! I would like to know how to do the same thing as in the fourth workflow, but using IPAdapter FaceID to place a specific person in the frame. I tried, but the problem is that the inputs to MultiAreaConditioning are CONDITIONING, while the output of IPAdapter FaceID is a MODEL. How can I solve this problem? I would appreciate any help.

ai_gene
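
For anyone hitting the same wall: the mismatch described above is between graph socket types. MultiAreaConditioning consumes and produces CONDITIONING, while IPAdapter FaceID patches and returns a MODEL, so one cannot be wired into the other. A hypothetical Python sketch of the two signatures follows (these stubs are illustrative, not ComfyUI's real classes); builds of the IPAdapter nodes that expose an attention-mask input can instead confine the face to a region on the model side:

```python
# Hypothetical stubs to make the wiring problem explicit; these are
# illustrative types, not ComfyUI's actual implementation.

class Conditioning: ...   # prompt embeddings plus area/strength metadata
class Model: ...          # the patched UNet that the sampler runs

def multi_area_conditioning(*areas: Conditioning) -> Conditioning:
    """Merges per-area CONDITIONING inputs into one CONDITIONING."""
    raise NotImplementedError

def ipadapter_faceid(model: Model, face_image: object) -> Model:
    """Patches the MODEL's attention layers; never touches CONDITIONING."""
    raise NotImplementedError

# The broken wiring: a MODEL cannot go where CONDITIONING is expected.
# multi_area_conditioning(ipadapter_faceid(model, face))  # type error
#
# Workaround sketch: keep area prompts on the conditioning path, and if
# the IPAdapter node build exposes an attn_mask input, pass a region mask
# there so the face identity only applies inside that area.
```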

Does this not work with SDXL? It worked great for 1.5, but it doesn't seem to work with the newer SD models. Edit: I figured it out; the SDXL model I use was trained with clip skip -2, and setting clip skip to that breaks the entire node.

Douchebagus
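
A note on what clip skip -2 means, since it explains the breakage: the conditioning is taken from the text encoder's penultimate hidden layer instead of the last one, so a node that assumes the final layer's output can misbehave. A minimal sketch using Hugging Face transformers (illustrative; not the node's own code):

```python
import torch
from transformers import CLIPTextModel, CLIPTokenizer

# Illustration of "clip skip": take hidden states from an earlier CLIP
# text-encoder layer. Clip skip -2 == penultimate layer. Not ComfyUI code.
tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-large-patch14")
encoder = CLIPTextModel.from_pretrained("openai/clip-vit-large-patch14")

tokens = tokenizer("a portrait photo", return_tensors="pt")
with torch.no_grad():
    out = encoder(**tokens, output_hidden_states=True)

final_layer = out.last_hidden_state  # clip skip -1 (the default)
penultimate = out.hidden_states[-2]  # clip skip -2
```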

It's funny how this method is basically what you would do when composing an image by hand.

ImmacHn

Great workflow and great video.

Although, has this process stopped working now? When I add the MultiAreaConditioning node, it doesn't have the grid, and I can't seem to add extra inputs. I saw that it's been abandoned. Anyone else having this issue?

RompinDonkey-bvqe

Really love your workflow! Subscribed ;) Would love to use the Track-Anything model to mask out characters, use ControlNet to modify the background, then resample the whole image/sequence.

MPxls

OMG! This was the best tutorial! Thanks a lot!

enriqueicm

This is great. I would love to see this with Stable Cascade and IPAdapter. Having regional control, a global style based on an image, and then fine control over a specific area with IPAdapter would be about everything I need in a workflow (maybe with the addition of an upscaler). That would be powerful.

freshlesh

Such great content 👏❤. This was very helpful. Thank you so much for creating this tutorial 😊. Looking forward to more videos like this.

TouchSomeGrassOnce

Well described and explained,
but can this be mixed with InstantID to insert a consistent character into the image? Like the portrait workflow, but using InstantID to keep the same face and so on?

AlyValley

Love the voice AI, how did you set it up? I want to use it to listen to poetry.

agftun

Great tutorial, but the technique is a little problematic since you are using an SD1.5 model to generate a 1024x1024 image. As a result, with every pass we can see new artifacts being added to the input image. If you want to increase detail while staying faithful to the input image, a better approach is either to use a model trained to produce 1024x1024 images or to do a tile upscale. Informative video overall though, thanks!

aybo
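
The tile-upscale suggestion above is easy to sketch: split the image into overlapping tiles, resample each tile near the model's native resolution, and average the overlaps back together. A minimal NumPy/PIL illustration of just the tiling and blending (the per-tile diffusion pass is stubbed out):

```python
import numpy as np
from PIL import Image

# Sketch of tiled processing: overlapping tiles keep seams soft when each
# tile is resampled separately. The diffusion pass itself is stubbed.

def process_tile(tile: np.ndarray) -> np.ndarray:
    return tile  # stand-in for a per-tile img2img / resample pass

def tiled(image: np.ndarray, tile: int = 512, overlap: int = 64) -> np.ndarray:
    h, w, _ = image.shape
    out = np.zeros_like(image, dtype=np.float32)
    weight = np.zeros((h, w, 1), dtype=np.float32)
    step = tile - overlap
    for y in range(0, h, step):
        for x in range(0, w, step):
            y2, x2 = min(y + tile, h), min(x + tile, w)
            patch = process_tile(image[y:y2, x:x2].astype(np.float32))
            out[y:y2, x:x2] += patch
            weight[y:y2, x:x2] += 1.0  # count overlapping contributions
    return (out / weight).astype(image.dtype)  # average the overlaps

img = np.asarray(Image.new("RGB", (1024, 1024), "gray"))
result = tiled(img)
```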

Awesome tutorial, thanks! But I'm unable to find the visual area composition custom node when I try to install it. Was it removed?

stijnfastenaekels

Just amazing. Thank you so much for this!

tigerfox

Thanks for this! But when I add my second image, it renders garbage where the character is supposed to be (background 0 renders fine). Any idea how to fix it?

jasonkaehler

This node's been abandoned and doesn't work anymore. Could you please suggest a suitable replacement?

ambtehrani