Setup Flux for Low VRAM (12gb Workflow) | ComfyUI Tutorial

Here's a breakdown of the 12GB VRAM ComfyUI Flux workflow by @Inner-Reflections-AI - available on @civitai

12GB VRAM Flux Workflow by @Inner-Reflections-AI

Flux First Look Livestream with @midjourneyman

UNet Models

/ComfyUI/models/unet/
Flux.1-dev (25-step slow model)

Flux.1-schnell (4-step fast model)

Clip Models

/ComfyUI/models/clip/
(You can get this from Install Models in the Manager, or use the link below)

VAE Model

/ComfyUI/models/vae/
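
The three folders above can be created up front so each download has an obvious destination (a minimal sketch, run from the directory that contains your ComfyUI install; the model files themselves still need to be downloaded separately):

```shell
# Create the model folders the workflow expects.
# UNet weights -> models/unet, text encoders -> models/clip, VAE -> models/vae
mkdir -p ComfyUI/models/unet ComfyUI/models/clip ComfyUI/models/vae
```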

Bookmarks
00:00 - Intro
00:29 - Models (Unet)
00:53 - Models (Clip)
01:16 - Models (VAE)
01:37 - Flux.1-Schnell (4 Steps)
02:18 - Flux.1-Dev (25 Steps)
02:30 - Comparison Dev vs. Schnell
02:49 - Flux Guidance
Comments

I like quick. I'll download it tonight following this guide. It looks good. Thanks for sharing!

Fakery

Awesomeee Purz! As always. Really appreciate your tutorials.

stefaneisele

Split Sigmas
via Pi
**Split Sigma:**
Split Sigma enables precise control over the noise level in the image generation process. By adjusting the standard-deviation (sigma) schedule at different stages of the diffusion process, it allows fine-grained control over denoising, which can improve image quality and make the role noise plays in generation easier to understand.
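A minimal Python sketch of the idea, assuming ComfyUI-style split behavior where the boundary sigma appears in both halves so a second sampler can resume exactly where the first stopped (the schedule values here are illustrative, not from any real model):

```python
def split_sigmas(sigmas, step):
    """Split a descending noise schedule at `step`.

    Returns (high_sigmas, low_sigmas). The value at the split point is
    shared by both halves so the low-sigma sampler can pick up exactly
    where the high-sigma sampler stopped.
    """
    return sigmas[:step + 1], sigmas[step:]

# Illustrative 10-step schedule ending at 0.0 (values made up)
schedule = [14.6, 9.7, 6.5, 4.3, 2.9, 1.9, 1.3, 0.9, 0.6, 0.3, 0.0]
high, low = split_sigmas(schedule, 5)
print(high)  # [14.6, 9.7, 6.5, 4.3, 2.9, 1.9]
print(low)   # [1.9, 1.3, 0.9, 0.6, 0.3, 0.0]
```

Running the first few (high-sigma) steps with one sampler and the rest with another is what lets you trade speed for quality in different phases of generation.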

MilesBellas

Hello, thanks for the great tutorial. I set up both Schnell and Dev on my PC with a 3060 and it works. Dev takes its time, but it's OK, results are great.

AndreaUngaro

ERROR: DualCLIPLoader - Type "flux" not in list (can only select "sdxl" and "sd3", cannot be set to "flux")
UPDATE: Solved by updating ComfyUI via the manager 👍

MikevomMars

I downloaded the example, but it's a different node construction.

RodrigoSantos-gwmw

I got another way working, but I'm going to try this. Any idea how to get inpainting working with Flux on 12GB VRAM?

markdkberry

Your workflow is great; the bug that slows down the process is probably on my end. The terminal shows this message before generation:
UserWarning: 1Torch was not compiled with flash attention. (Triggered internally at
out = torch.nn.functional.scaled_dot_product_attention(q, k, v, attn_mask=mask, dropout_p=0.0, is_causal=False)
How can I fix this? Please give me simple advice, I'm a beginner!


My Surface Laptop Studio has 4GB of VRAM, and while I want to build a computer to play with these new tools, I'm willing to wait for the RTX 5090 to hit the market. In the meantime, do you know anything about running these tools through a GPU cloud such as TensorDock?

Quantumspace

I followed what you did for the Flux guidance, but I get the exact same image no matter what guidance value I use (as long as I use the same seed). Shouldn't the guidance affect the image?

eekeek

The moment I click "Queue Prompt", ComfyUI freezes, "Press any key to close" appears, and it closes. Is my rig not enough to run this?
CPU: i9-10850K
GPU: RTX 4060 8GB
RAM: 24 GB

theironneon

Do you know how long it takes to generate an image with the dev model with only 12 GB of VRAM?

rennynolaya

I have an RTX 4070, 32 GB of RAM, and a 13th-gen i7, but it still takes 24 seconds to generate a 1024x1024 image. Is that normal? (Edit: and that's with the schnell model, not the dev model.)

thrWasTaken