Flux GGUF models are our favorite!

If you are just getting into FLUX, check out this simple workflow

Download the workflow:

Flux GGUF Models:

CLIP models:

Flux VAE:

Flux Realism LoRA:
Comments

omg, a tutorial that actually tells us how and where to do things instead of telling us to go download and figure it out. Great work!

xiao_slayer

As a beginner, I also want to say your videos are really friendly, thank you very much. Because of my professional needs and Flux's steep learning curve, I'd been using mimicpc to run Flux: it can load the workflow directly, I just download the Flux model, and it handles the details wonderfully. But after watching your video and running Flux on mimicpc again, I finally had a different experience. I feel like I'm starting to get the hang of it.

Huang-ujrt

Got it working. Quality is amazing. Thank You!

AI-Rogue

Generated in 100 seconds on my Nvidia 2080 Ti - works really well!!

shuntera

Best explanation and guide to GGUF right now. Can you describe the difference between the various quantized models, such as Q8 and Q4? Also, can it generate images at any resolution, or does it have to be specifically 1024x1024?
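For rough intuition on the Q8/Q4 question, the quant level mostly sets the bits stored per weight, so file size (and VRAM footprint) scales with it. A back-of-envelope sketch, assuming roughly 12 billion parameters for Flux-dev and nominal bits-per-weight figures for each GGUF quant type (the exact numbers vary slightly with block scales):

```python
# Rough GGUF size estimate: parameters * bits-per-weight / 8.
# Assumptions: ~12e9 params for Flux-dev (approximate), and nominal
# effective bits/weight per quant type (Q8_0 ~8.5, Q4_0 ~4.5,
# including per-block scale overhead).
BITS_PER_WEIGHT = {"F16": 16.0, "Q8_0": 8.5, "Q4_0": 4.5}

def approx_size_gb(n_params: float, quant: str) -> float:
    """Approximate on-disk / in-VRAM footprint in gigabytes."""
    return n_params * BITS_PER_WEIGHT[quant] / 8 / 1e9

FLUX_DEV_PARAMS = 12e9  # assumption, not an exact figure
for q in ("F16", "Q8_0", "Q4_0"):
    print(f"{q}: ~{approx_size_gb(FLUX_DEV_PARAMS, q):.1f} GB")
```

This is why Q4 files are roughly half the size of Q8 ones; the trade-off is that fewer bits per weight means more quantization error, so Q8 generally stays closer to the full-precision output than Q4.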

abrarrafat

I'm so glad you didn't say "is taking America by storm" like other videos are doing. 🤭

ovworkshop

You have got to be from Florida... Not asking to be sus, just I know your accent so well lol... I'm def gonna try out the GGUF!

lofigamervibes

Wow, these AI voice filters are starting to sound pleasant to listen to.

jlyn

I don't recommend using GGUF with ControlNet. It is way slower than safetensors, roughly 30% slower if I'm correct. Basic image generation with GGUF is the same speed as with safetensors.

amrokas

Wow! Let's hope the models get optimized a bit more... Unfortunately, I work with just a CPU, and it takes me an hour to generate an image... 😑

magimyster

So with my RTX 3070 8GB it took 7 minutes, and that was without the upscaler. Yep, they really need to figure out optimization for these models. They are wonderful and great, but they are so impractical for us little guys to use.

Avalon

Are these GGUF models also useful if you have an RTX 3090 with 24GB of VRAM?
Or at that point can you just use the bigger models?

moebiusSurfing

I'm always hesitant lately to use these workflows on my RTX 3070 8GB machine, and I can't even imagine tacking on an upscaler. I'll give it a try and see what happens. Also, how did you get the times on each node?

Avalon

How do you display the execution times on the nodes?

contrarian

Why is there a "lora key not loaded" message in the cmd prompt?

I got a similar issue... I don't think LoRA is working with GGUF yet.

bobtahar

Thanks for the video.
How can I have the seconds displayed above the nodes?

gpr

I noticed there's a triple CLIP loader included in the GGUF stuff. Is there any good combo or reason to use 3 over 2?

sorijin

Nice video, but if you have a 4090 and you need 40 sec per generation, something is not right. It should be more like 15 sec per generation. I changed t5xxl_fp8_e4m3fn to t5xxl_fp16 and it is faster.
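The fp8 vs. fp16 trade-off here is mostly about memory versus compute: fp16 weights take twice the space of fp8, but if the GPU has to dequantize fp8 on the fly, fp16 can end up faster when VRAM isn't the bottleneck (as on a 4090). A quick footprint sketch, assuming roughly 4.7 billion parameters for the T5-XXL text encoder (an approximation, not an official figure):

```python
# Approximate memory footprint of the T5-XXL text encoder
# at different precisions.
# Assumption: ~4.7e9 parameters (rough T5-XXL size).
T5XXL_PARAMS = 4.7e9
BYTES_PER_PARAM = {"fp8_e4m3fn": 1, "fp16": 2}

def footprint_gb(n_params: float, dtype: str) -> float:
    """Weight storage in gigabytes for the given precision."""
    return n_params * BYTES_PER_PARAM[dtype] / 1e9

for dtype in BYTES_PER_PARAM:
    print(f"t5xxl_{dtype}: ~{footprint_gb(T5XXL_PARAMS, dtype):.1f} GB")
```

So swapping to fp16 roughly doubles the text encoder's footprint; whether that swap is actually faster, as the commenter reports, depends on the GPU and how much VRAM the rest of the workflow already uses.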

DanielSchweinert

I am getting a missing-custom-node red box error for UNETLoaderGGUF and others. Any ideas where I can find these?

sulemanwaheed

What laptop specs are necessary to run ComfyUI?
