FLUX: The First Ever Open Source txt2img Model That Truly Beats Midjourney & Others - FLUX Is the Awaited SD3

FLUX is the first time ever that an open source txt2img model truly surpasses #Midjourney, Adobe Firefly, Leonardo AI, Playground AI, Stable Diffusion, SDXL, SD3 and DALL-E 3, producing better quality and better prompt-following images. #FLUX is developed by Black Forest Labs, whose team is mainly composed of the original authors of #StableDiffusion, and its quality is mind-blowing. When I say these words I am not exaggerating; you will see that after watching the tutorial. In this tutorial I will show you how to very easily download and use FLUX models on your PC and also on the cloud services Massed Compute, RunPod and a free Kaggle account.
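
If you want a feel for what the 1-click downloader scripts do under the hood, here is a minimal sketch using the Hugging Face huggingface_hub library. The repo and file names match the FLUX.1-dev repository published by Black Forest Labs at the time of writing, but the destination folder is only an example; the linked downloader scripts place the files where SwarmUI actually expects them.

```python
# Sketch: download the FLUX.1-dev transformer weights from Hugging Face.
# Assumptions: huggingface_hub is installed (pip install huggingface_hub) and
# you have accepted the FLUX.1-dev license / logged in with a HF token (the
# dev repo is gated). Repo and file names may change; the destination folder
# is just an example, not necessarily where your SwarmUI install expects it.
from huggingface_hub import hf_hub_download

model_path = hf_hub_download(
    repo_id="black-forest-labs/FLUX.1-dev",   # gated repo, requires accepting the license
    filename="flux1-dev.safetensors",          # ~23 GB FP16 weights
    local_dir="Models/diffusion_models",       # example destination folder
)
print("Downloaded to:", model_path)
```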

🔗 FLUX Instructions Post (public no need login) ⤵️

🔗 FLUX Models 1-Click Robust Auto Downloader Scripts ⤵️

🔗 Main Windows SwarmUI Tutorial (Watch To Learn How to Use) ⤵️

🔗 Cloud SwarmUI Tutorial (Massed Compute - RunPod - Kaggle) ⤵️

🔗 SECourses Discord Channel to Get Full Support ⤵️

🔗 SECourses Reddit ⤵️

🔗 SECourses GitHub ⤵️

Video Chapters

0:00 Introduction to the truly SOTA txt2img model FLUX, which is open source
5:01 How we are going to install the FLUX model into our SwarmUI and use it
5:33 How to accurately download FLUX models manually
5:54 How to download FP16 and optimized FP8 FLUX models automatically with 1 click
6:45 Which precision and type of FLUX model is best for your case and what the difference is (a rough VRAM estimate sketch follows the chapter list)
7:56 Which folder you need to put the FLUX models into
8:07 How to update your SwarmUI to the latest version for FLUX support
8:58 How to use FLUX models after SwarmUI has started
9:44 How to use the CFG scale for the FLUX model
10:23 How to see what is happening at that moment in the server debug logs
10:49 Turbo model image generation speed on an RTX 3090 Ti GPU
10:59 Sometimes the turbo model may generate blurry images
11:30 How to generate images with the development model
11:53 How to use the FLUX model in FP16 instead of the default FP8 precision in SwarmUI
12:31 What the differences are between the development and turbo FLUX models
13:05 Generating native 1536x1536 images, testing the high-res capability of FLUX and how much VRAM it uses
13:41 Image generation speed for a 1536x1536 FLUX image on an RTX 3090 Ti GPU with SwarmUI
13:56 How to check whether you are using any shared VRAM - this slows down generation speed significantly
14:35 How to use SwarmUI and FLUX on cloud services - no PC or GPU required
14:48 How to use the pre-installed SwarmUI on an amazing Massed Compute 48 GB GPU for 31 cents per hour with the FLUX dev FP16 model
16:05 How to download FLUX models on a Massed Compute instance
17:15 FLUX model downloading speed on Massed Compute
18:19 How much time it takes on Massed Compute to download all of the very best FP16 FLUX and T5 models
18:52 How to first update and start SwarmUI on Massed Compute with 1 click
19:33 How to use the SwarmUI started on Massed Compute in your PC's browser via ngrok - you can even use it on your phone this way (see the ngrok tunnel sketch after the chapter list)
21:08 Comparing a Midjourney image to open source FLUX with the same prompt
22:02 How to set DType to FP16 to generate better quality images on Massed Compute with FLUX
22:12 Comparing the FLUX generated image with the Midjourney generated image for the same prompt
23:00 How to install SwarmUI and download FLUX models on RunPod
25:01 Step speed and VRAM of the Turbo model vs the Dev model of FLUX
26:04 How to download FLUX models on RunPod after SwarmUI is installed
26:55 How to start SwarmUI after you restart your pod or turn it off and on
27:42 How to fix the CFG scale panel of SwarmUI if it is not displayed properly
27:54 Comparing FLUX quality with the very best Stable Diffusion XL (SDXL) models via a popular CivitAI image
29:20 FLUX image generation speed on an L40S GPU - FP16 precision
29:43 Comparing a FLUX image vs a popular CivitAI SDXL image
30:05 Does increasing the step count improve image quality significantly
30:33 How to generate a bigger resolution 1536x1536 pixel image
30:45 How to install nvitop and check how much VRAM 1536px resolution and the FP16 DType use (see the VRAM monitoring sketch after the chapter list)
31:25 How much of a speed drop happens when increasing image resolution from 1024px to 1536px
31:42 How to use SwarmUI and FLUX models on a free Kaggle account, the same as on your local PC
32:29 How to join the SECourses Discord channel and contact me for any help and to discuss AI
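
For the FP16 vs FP8 chapter, a rough back-of-the-envelope estimate makes the trade-off concrete. The sketch below assumes FLUX.1's roughly 12 billion transformer parameters (the figure Black Forest Labs publishes) and counts the transformer weights only, so real VRAM usage in SwarmUI, which also loads the T5 and CLIP text encoders, the VAE and activations, will be higher.

```python
# Rough VRAM estimate for the FLUX transformer weights alone.
# Assumption: ~12 billion parameters; text encoders (CLIP + T5), VAE and
# activations are NOT counted, so actual usage is noticeably higher.
PARAMS = 12e9

def weight_gib(bytes_per_param: float) -> float:
    """Size of the weights in GiB for a given precision."""
    return PARAMS * bytes_per_param / (1024 ** 3)

print(f"FP16 (2 bytes/param): ~{weight_gib(2):.1f} GiB")  # ~22.4 GiB
print(f"FP8  (1 byte/param):  ~{weight_gib(1):.1f} GiB")  # ~11.2 GiB
```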
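
The ngrok chapter can be illustrated with a small sketch using the pyngrok wrapper. Port 7801 is assumed to be SwarmUI's default web port and YOUR_NGROK_TOKEN is a placeholder; the actual Massed Compute / Kaggle scripts from the instructions post handle this for you, so treat this only as an illustration of the idea.

```python
# Sketch: expose a running SwarmUI instance through an ngrok tunnel so it can
# be opened from any browser (PC or phone).
# Assumptions: SwarmUI is already running on port 7801 (its usual default),
# pyngrok is installed (pip install pyngrok), and YOUR_NGROK_TOKEN is replaced
# with a real token from the ngrok dashboard.
from pyngrok import ngrok

ngrok.set_auth_token("YOUR_NGROK_TOKEN")        # placeholder token
tunnel = ngrok.connect(7801, "http")             # tunnel to SwarmUI's web port
print("Open SwarmUI at:", tunnel.public_url)

ngrok.get_ngrok_process().proc.wait()            # keep the tunnel alive
```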
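
For the VRAM monitoring chapters, the video simply installs nvitop (pip install nvitop, then run nvitop). If you would rather script it, the same NVML counters can be read directly; the sketch below uses pynvml and reports dedicated VRAM only, so if Windows Task Manager additionally shows "shared GPU memory" growing, generation has spilled into system RAM and will slow down significantly.

```python
# Sketch: print dedicated VRAM usage per GPU via NVML (the same data nvitop shows).
# Assumptions: an NVIDIA GPU with current drivers and pynvml installed
# (pip install nvidia-ml-py). Shared/system "GPU memory" spill-over is not
# visible here; check Task Manager on Windows for that.
import pynvml

pynvml.nvmlInit()
for i in range(pynvml.nvmlDeviceGetCount()):
    handle = pynvml.nvmlDeviceGetHandleByIndex(i)
    name = pynvml.nvmlDeviceGetName(handle)
    if isinstance(name, bytes):      # older pynvml versions return bytes
        name = name.decode()
    mem = pynvml.nvmlDeviceGetMemoryInfo(handle)
    print(f"GPU {i} {name}: {mem.used / 1024**3:.1f} / {mem.total / 1024**3:.1f} GiB used")
pynvml.nvmlShutdown()
```
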
Comments

I watched it without pausing for breath, thank you teacher

cemilhaci

the best teacher, and he's also quick to cover new products

Михаил

Which reliable FLUX or SDXL checkpoint models would you recommend for generating lifelike text-to-image photographs of famous deceased people? Are you aware of any?

markschrader

You're the king :) You explain things in more detail and more finely than all the other channels <3 But Ideogram is still better than Flux

Maeve

Thanks for your help, sir.

Could you please make a video on how to deploy ROOP UNLEASHED on RunPod?

bilindmax

When can we expect Flux model and Flux LoRA training tutorials? :)

pastuh

Remarkable. It will be a real deal when someone, or a team of devs, finds a way to finetune this model and implement ControlNet on top of it

moz

Great video! What is the software with the vertical slider you use to compare two images?

joechip

Amazing walk-through. Can't wait to try it out on Massed Compute. I formerly also enjoyed following your OneTrainer tutorials for SDXL model fine-tuning. Is it possible to fine-tune the new Flux model, or if not, how do you see the prospects of being able to fine-tune it? Thanks for all your crazily detailed knowledge-sharing practice 🎉

TheFerdinandGuy

Hi Furkan! I'm always happy to see your in-depth tutorials, and I waited for this one :) about this amazing model! I can't wait to test it :). Do you think there will be (or maybe there already is) a way to train this model with my (or another) face? I can only imagine the quality and versatility of this model once trained.

DreamFilmVFX

Are you able to use CivitAI's models / LoRAs with Flux?

Kebin_tan

It's pretty good at what it does, but kind of useless if we cannot train LoRAs or finetune the models.

koudkunstje

I'm currently using it with ComfyUI and it's great! Which sampler and scheduler do you recommend for best detail and sharpness? The default is "Euler" sampler with "Simple" scheduler, but I feel that it lacks a bit of detail.

bgtubber

The best AI video channel is SECourses <3 This one channel is enough for all AI learning <3

hassanosama

Are you able to add the TensorRT or xFormers extensions for speed?

markschrader

Hey, I couldn't see it in the video - what's the max VRAM and RAM used when running everything at FP8? I have 16 GB RAM and 12 GB VRAM; I tried to run it in ComfyUI but I was getting OOM

Octo_Fractalis

Can I run Flux on SageMaker with a Jupyter notebook?! I tried it but got some code errors. I copied the Google Colab notebook. Any help?

alangabilan

Can I use Flux with an RTX 3060 (12 GB VRAM) and 16 GB of DDR4 RAM?

creed

Too bad it doesn't allow commercial use.

rodrigop.