Install Flux locally in ComfyUI with Low VRAM

In this video, I’ll show you how to install Flux locally in ComfyUI on very low-end systems, even on computers with weak hardware. This method is significantly faster than the original Flux models, making it perfect for those with weaker systems or graphics cards that struggle to run the larger models. I’ll guide you through installing the Flux GGUF version locally in ComfyUI, an optimized version that has been run successfully on systems with as little as 2GB of VRAM!

We’ll cover everything from installation, setting up ComfyUI, and downloading the necessary models, to making sure it all runs smoothly. I’ll also explain how to download and install the necessary text encoders and VAE file to ensure top performance, even on weaker systems. Plus, I’ll show you how to use LoRAs, both self-made and general-purpose ones, and test their performance on low-end and high-end models, comparing the results.

If you're working with a weaker system, this video will show you how to get the most out of Flux and GGUF, even with minimal hardware. Let’s dive in and get you started!
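If you prefer to grab the model files from a script instead of the browser, here is a minimal Python sketch using huggingface_hub. The repo IDs, file names, quantization level, and folder paths are my own assumptions, not taken from the video, so check the links below for the exact files I use.

# Rough sketch: download the GGUF model, text encoders, and VAE into the usual
# ComfyUI model folders. Repo IDs, file names, and the Q4 quantization are
# assumptions; swap in whatever the video's links actually point to.
from pathlib import Path
from huggingface_hub import hf_hub_download

models_dir = Path("ComfyUI/models")  # adjust to your ComfyUI install location

files = [
    # (Hugging Face repo, file name, ComfyUI subfolder)
    ("city96/FLUX.1-dev-gguf", "flux1-dev-Q4_K_S.gguf", "unet"),                   # quantized Flux model
    ("comfyanonymous/flux_text_encoders", "clip_l.safetensors", "clip"),           # CLIP-L text encoder
    ("comfyanonymous/flux_text_encoders", "t5xxl_fp8_e4m3fn.safetensors", "clip"), # T5-XXL text encoder (fp8)
    ("black-forest-labs/FLUX.1-dev", "ae.safetensors", "vae"),                     # Flux VAE (gated repo, needs login)
]

for repo_id, filename, subfolder in files:
    target = models_dir / subfolder
    target.mkdir(parents=True, exist_ok=True)
    path = hf_hub_download(repo_id=repo_id, filename=filename, local_dir=str(target))
    print(f"{filename} -> {path}")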

In this video:
- How to install and use GGUF Flux on low-end systems
- Setting up ComfyUI and optimizing performance
- Comparing different versions of GGUF models for weaker systems
- Using LoRAs to enhance image generation

Links:
(Workflows zip file; it also includes a very fast image-to-image upscaling workflow for the Flux GGUF models. Make sure to download the zip file.)

Don’t forget to like, comment, and subscribe for more tutorials and tips!
Comments

You are making perfect videos for Flux.

fahimabdulaziz

That was very detailed and clean, so thank you so much.

legatoasdi

Can this setup work with an Nvidia 1080 graphics card with 8GB VRAM and 16GB RAM?
Even with the Q4 GGUF model, it crashes while working in ComfyUI.

vahid

None of the links for downloading the workflows or the VAE files seem to be working. Can you advise, please?

robertwilson

Standard setup gives a static, noise-like image. On upscale it says you have to reconnect the Ultimate Upscale node, and you cannot match the numbers from the original node. If I use the defaults I get:
C:\Users\walte\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\_tensor.py", line 1443, in __torch_function__
    ret = func(*args, **kwargs)
RuntimeError: start (24) + length (1) exceeds dimension size (24).

WallyMahar

You should use a fixed seed in order to compare the two models' results.

crypt_exe

Where is the clip_l link? I can't find it anywhere. Please provide the link.

souravroy