ComfyUI Consistent Characters Using FLUX Dev


This simple method uses the OpenPose ControlNet with FLUX to produce consistent characters, along with enhancers that improve the characters' facial details.

P.S.
The "Apply SD3" node has been renamed to "Apply ControlNet With VAE" in the latest updates. The process for finding it remains the same; only the name has changed.

Shakker Labs Hugging Face:

@Reverent elusarca Controlnet Character Sheet:

Helpful Videos:
FLUX Upscaling Techniques:

GGUF FLUX ComfyUI:

How to Use Detailer:

How to install manager:

More Flux Videos Playlist:

*There are affiliate links here, which means that when someone makes a qualifying purchase, I get rewarded. You won't pay anything, and it supports this channel.

#comfyui #flux #Fluxcontrolnet #FluxDev
Comments

This is by far the best explanation of setting up Flux + ControlNet I have seen so far, since you actually explain everything rather than just "here's my over-complicated workflow!". The node layout is so nice and clean. You did more than enough to earn a sub and a like from me. Keep it up!

maximinus

Thanks for not just showing a final workflow but explaining each node. This is what makes your videos so great.

bigironinteractive

I've been through a few of your tutorials so far, and I am just floored by your expertise, your delivery, and, on top of that, the fact that your workflows actually work! Not like some others, where it's all just smoke and mirrors, and when you use their workflow you soon find out it was made just for the show. Not you! I am thrilled to have found your channel. Thank you!

noNumberSherlock

I usually never comment, but this is a really helpful video, man. You explain everything so perfectly. God bless.

nikolaprokic

Amazing, concise, understandable. Congrats man, keep up the good work.

kajukaipira

So helpful. Thank you for starting fresh and walking us through each step. Definitely earned a sub.

pizza_later

Much love from South Africa! Thank you for this video!!! I'm busy making a short horror movie for fun using Flux Dev and KLING for image-to-video, and this is EXACTLY what I need, because I need to make consistent characters but I only have 1 input image of the character as reference. Man, I didn't know they had a character pose system for Flux yet. THANK YOU!!! :D This needs to be ranked higher in Google!

zoewilliams

Just wanted to say, you are amazing!!


Thanks bro... I love the way you detail the whole process... you are a rock star, thank you!

jamessenade

OMG bro, just what I need 🔥🔥 THANK YOU. Clear rhythm, working method.

dbprisms

Thanks and it is nice to see a cleaner node layout, instead of a jumble of nodes and connections, which too many Comfy tutorial makers seem to love.

devnull_

Thank you! It’s good that you just tell and show what to do and how. Otherwise, you could spend your whole life learning ComfyUI. This way, learning in practice as you go is easier.

sergeysaulit

Thank you very much for this tutorial... at the right speed and with detailed explanation.

Gimmesomemore

Really good explanation. Keep up the good work :)

cleverfox

This is amazing! Thank you so much. Subscribed!

ielohim

Also, for anyone experiencing an issue downloading the YOLO model: go into the ComfyUI folder (ComfyUI > custom nodes > ComfyUI Manager) and you will find a config file. Open it in a notepad editor, and where it says bypass_ssl = False, change False to True and save. Restart ComfyUI and you will be able to download the YOLO model with no problem.

wrillywonka
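The fix described in the comment above amounts to a one-line change in ComfyUI-Manager's config file (the path and key come from the comment; the exact filename, commonly config.ini, and section layout may vary by version):

```ini
; ComfyUI/custom_nodes/ComfyUI-Manager/config.ini (path per the comment above)
[default]
; was: bypass_ssl = False
; setting this to True skips SSL certificate verification for model downloads,
; which works around certificate errors but is less secure
bypass_ssl = True
```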

Thanks so much for your hard work, very useful videos.

yangli

When doing the first queue prompt for the AIO Aux Preprocessor, I just get a blank black image.

VryHgh

Hey! Great video. Can you tell me roughly how long it should take to render an image at 5:26? I am using this workflow on my Mac with an M3 processor, but it takes forever to render. Do I have to change my hardware? Can you recommend any good Windows-based laptop for this?

MSTR_Piotrek

I find that if you add another generation step beforehand that tells the AI to generate a design sheet of a mannequin, you can skip the part where you have to load an image into the ControlNet preprocessor.

muggyate