Optimize SDXL on ComfyUI: Unleash Full Power with FP16 VAE & Launch Args

Discover how to get the most out of Stable Diffusion XL with this in-depth tutorial. We walk through optimizing the SDXL model in ComfyUI, focusing on techniques that improve speed and performance on NVIDIA GPUs like the RTX 3080. Learn how to use the FP16 VAE for faster decoding, and launch arguments such as --normalvram for efficient VRAM usage. We also show how to apply the Offset LoRA model for improved image quality. Suitable for beginners and experienced users alike, this tutorial offers practical tips to help you master Stable Diffusion XL in ComfyUI.

Links in the video:

Launch arguments:

Necessary for GPUs with less than 16 GB of VRAM

--fp16-vae

Optional

--highvram (good for high-VRAM GPUs)
--lowvram (slow, but good for low-end GPUs)
--normalvram (a solid all-rounder; only use this if something isn't functioning correctly despite decent hardware)

(Not recommended, but if you're still having trouble these options can potentially work, though both are very slow for different reasons.)
--novram (offloads to system RAM instead of VRAM, but still computes on the GPU, not the CPU)
--gpu-only (keeps everything in VRAM with no RAM offloading; slow on certain parts of the process, fast on others. May work well on very high-end GPUs, but still not recommended.)
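The flags above can be combined on a single launch command. A minimal sketch, assuming a standard ComfyUI checkout where `main.py` is the entry point (the script name and comments are illustrative, not from the video):

```shell
# Launch ComfyUI with the fp16 VAE decode enabled
# (the video recommends this for GPUs below 16 GB of VRAM):
python main.py --fp16-vae

# If generations misbehave despite decent hardware, add --normalvram:
# python main.py --fp16-vae --normalvram

# Large-VRAM cards can instead try keeping models resident with --highvram:
# python main.py --fp16-vae --highvram
```

On Windows, the same arguments can be appended to the `python main.py` line of the launcher batch file ComfyUI ships with.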
Comments
tripleheadedmonkey

Thanks for the info! Have a great day ^__^

muuuuuud

Help, ComfyUI is stuck on lowvram, and I have a 4080S.

orionconstellation

I tried adding --fp16-vae --normalvram to the command line.
It looks to be generating much faster, but all I get is black images on other workflows.
If I remove --fp16-vae I get normal images again.
I have an RTX 3070 GPU with 8 GB of VRAM.
So if it ain't broken, don't fix it? Or is there another solution?

FunnyMan

--f16-vae is not recognized as a valid command in 1111, and the tool won't run with the f16 VAE in the vae folder, but it works fine with the other VAEs on the HF page.

JLITZ

No FP16 fixed VAE for the refiner model?

HanSolocambo

What if I want a different LoRA than the offset one?

MultiOmega

My RTX 3070 takes like 3 mins to generate a 512 px image...

ArchBlend.

Maybe redo this with a non-4K screen resolution? Your screen is shit quality on YT.

rensiknosaj