ComfyUI - Hands are finally FIXED! This solution works with all models!

Hands are finally fixed! This solution works about 90% of the time in ComfyUI and is easy to add to any workflow, regardless of the model or LoRA you might be using. It uses the MeshGraphormer Hand Refiner, which is part of the ControlNet preprocessors you get when you install that custom node suite. We can use the depth output of this node, along with its mask, to guide hand correction in any image. I also show some of the issues I ran into while working with this solution.
#comfyui #stablediffusion
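The core idea described above is mask-guided correction: the refiner's mask tells the workflow which pixels to repaint, and everything outside the mask is left untouched. As a minimal sketch (not ComfyUI's actual node code; just the masked-composite concept in NumPy):

```python
import numpy as np

def composite_with_mask(original, corrected, mask):
    """Blend corrected pixels into the original where the mask is set.

    original, corrected: float arrays of shape (H, W, C)
    mask: float array of shape (H, W), values in [0, 1]
    """
    m = mask[..., None]  # broadcast the mask across the color channels
    return corrected * m + original * (1.0 - m)

# Tiny demo on a 2x2 "image": only the masked pixel takes the corrected value.
orig = np.zeros((2, 2, 3))   # stands in for the original render
fixed = np.ones((2, 2, 3))   # stands in for the inpainted hand region
mask = np.array([[1.0, 0.0],
                 [0.0, 0.0]])
out = composite_with_mask(orig, fixed, mask)
```

In the actual workflow, a compositing node performs this blend after the masked region is re-rendered using the depth map as ControlNet guidance.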

The Gigabyte 17X laptop is doing the inference today! Grab one here:

You can grab the ControlNet from here, or use the Manager:

Interested in the finished graph and in supporting the channel as a sponsor? I will post this workflow (along with all of the previous graphs) over in the community area of YouTube.

Comments

Thanks for all your videos.
I was a little lost with all those node versions, but now I'm starting to better understand how to use ComfyUI.

lennoyl

So cool that this works! Love the ingenuity that it must have taken to figure this all out.

joelface

The best video I have seen so far. Very clear and it gets to the point. Nothing to add. Thanks

byxlettera

Love this! Have learnt a lot from this entire playlist, thanks!

divye.ruhela

Awesome man, trying this now, your tutorials are great and easy to follow. A godsend!

gab

Hi, that is very cool, and it works well for me. Once again, your explanations are clear and very simple to follow. As an old guy who learns best by reading, these are great.

RichGallant

very clear and well explained, many thanks for sharing!

Marcus_Ramour

Very nice tutorial. Looks like compositing^^ so as a comp artist, I love this workflow :)

kietzi

Thank you for all the useful information!☺

ai_materials

Thank you! I'm getting great results using this...

DrunkenKnight

So grateful I'm starting to understand how things flow in ComfyUI without feeling too lost. It sounded like Chinese to me a couple of months ago. Now it's like German. Still rough but somehow familiar. 😆 Thank you for this!

gimperita

Hi Scott! First of all, I want to congratulate you on your amazing tutorials. Thank you!! Could you please create another version of this workflow where, instead of using a prompt to create an image, we upload an image?

fabiotgarcia

This is great. Thank you for making it so clear and simple. Would you happen to have any videos on maintaining consistency of characters across multiple renders? Many situations require more than just one shot of a character, but I find consistency almost impossible to achieve by text alone.

davidm

Thank you for making a tutorial by building the nodes manually; it really helps clarify the function of each node, unlike other channels that present workflows with ready-made nodes.

rickandmortyunofficial

Super specific use case: it only works when the subject's hands are posed like the ones in the example image. If not, the depth maps it comes up with are straight trash.

gingercholo

Thanks for your video. You can use the global seed if you set the seed in an extra primitive node and fix it.

murphylanga

Hello! Is there a way to integrate two JSON files with different functions in ComfyUI? One does inpainting, and the other maintains a consistent character through FaceID, but I'm having trouble linking the two.

lilillllii

Thanks for this video. Have you tried to see if this works with SDXL workflows?

ysy

Great stuff! Do you know if there’s a community node for Invoke for this? I’m not sure how interchangeable or inter-compatible the nodes are.

dannyvfilms

AWESOME video. Quick question: where do you get the depth files for the load control module? Newbie here.

ggenovez