LoRA for Stable Diffusion - A1111 Dreambooth Extension - 6GB VRAM!

Ever wanted to have a go at training on your own GPU, but you've only got 6GB of VRAM? Well, LORA Dreambooth for Stable Diffusion may just be the thing for you! Faster! Smaller! More Better!

This complete, easy-to-follow guide will get you training in no time, even on low-end hardware.

Not suitable for children.

:)

* Links! *
Comments
Author

The 6 GB settings seem to be possible on my GTX 1660 (which doesn't support fp16 at reasonable speed). xformers doesn't work without fp16, though, so I had to switch to a different option for that one.

Also, "cache latents" seems to be completely broken now (it gives a "too many values to unpack" error), so I switched to "don't cache latents".
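
The trade-off this comment describes (xformers attention only being worthwhile when fp16 is usable) can be sketched as a small settings-selection helper. Everything here is a hypothetical sketch: the option names mirror the extension's dropdowns as described in the video, not a real API.

```python
# Hypothetical helper for the precision/attention trade-off described above.
# Cards with slow or missing fp16 (e.g. GTX 16xx) skip xformers, since
# xformers reportedly needs fp16; "cache latents" is left off because the
# comment reports it as currently broken.

def pick_attention(fp16_ok: bool, low_vram: bool) -> dict:
    """Return a sketch of training settings for a given card's capabilities."""
    if fp16_ok:
        return {"mixed_precision": "fp16",
                "attention": "xformers",
                "cache_latents": False}
    # fp16 too slow or unsupported: avoid xformers, use an alternative
    return {"mixed_precision": "no",
            "attention": "flash_attention" if low_vram else "default",
            "cache_latents": False}
```

With this sketch, a GTX 1660 (`fp16_ok=False`, `low_vram=True`) lands on `flash_attention` with full precision, matching the workaround the commenter describes.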

MrCheeze
Author

This is probably the best Dreambooth tutorial I've seen. Thanks Nerdy

"blurry cartoon of an old male walrus with a moustache singing opera inside a pixel-art cavern"
LOL. You're avin' a laugh!

magneticanimalism
Author

I have an 8GB gpu, this is great news. Can't wait to take this for a spin!

j_shelby_damnwird
Author

FINALLY a decent LORA tutorial. Thank you!

alecubudulecu
Author

Not directed at Nerdy Rodent, but for people wondering about the Instance Token being "olis": if you plan on training on an anime model, it's worth noting that "olis" might not be an ideal token. You can always test it first to see whether a batch of images generated with that token comes out consistent or completely random with your model. If there's consistency, there's most likely already an association with that token.

It might not make a _huge_ difference, but sometimes a little helps!
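
The "generate a test batch and check consistency" idea above can be sketched as a small scoring function. This is hypothetical scaffolding: it assumes you have already turned the generated test images into CLIP-style embedding vectors by some other means, and it simply checks whether their average pairwise cosine similarity crosses a threshold (the threshold value is illustrative, not calibrated).

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def token_has_association(embeddings, threshold=0.8):
    """True if a batch of image embeddings for one token looks consistent,
    suggesting the model already associates something with that token."""
    pairs = [(i, j) for i in range(len(embeddings))
             for j in range(i + 1, len(embeddings))]
    mean_sim = sum(cosine(embeddings[i], embeddings[j])
                   for i, j in pairs) / len(pairs)
    return mean_sim >= threshold
```

A batch that scores high is the "consistent" case the commenter warns about: the token already means something to the model, so it may fight your training.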

Avenger
Author

I finally got LoRA working thanks to you! Spent the whole day on this; it was driving me crazy.

jonathaningram
Author

Thanks to the author of the video! Everything works on a GTX 1070 8GB graphics card, with no problems with fp16 and xformers. As I remember, it reported using 5.6GB of memory, but that's not exact.

АнатолийЕвгеньевич-щт
Author

I don't know how I've been messing around in the world of Stable Diffusion without your precious help. Great video, with lots of detailed information on the subject. You're great, man, and I love your multiple ultra-realistic avatars. Nice.

akratlapidus
Author

Can confirm that Dreambooth/LoRA works on my GTX 980 Ti 6GB, but with a few adjustments. Since xformers apparently doesn't work (or gives worse results) on 900-series cards, I just don't use it: I had to say no to fp16 and use flash_attention instead of xformers. Steps are 150 (I probably could have gone higher, to 300) and batch size is 2. Note that it still takes a long time depending on how many images you have, but you can still get very good results from a small number of images cropped to 512.

Dannyk
Author

Thank you prof. Rodent! This is perfect!

MikkoHaavisto
Author

Great, thank you very much!!! 😃 Greetings from Bolivia

DJHUNTERELDEBASTADOR
Author

Fantastic to see this video. I'm going to watch it in full later, when I have time to take it in.
Thank you for making this guide!

GamingDaveUK
Author

Out of curiosity, with the renders that you used, how heavy did you go on negative prompts as well?

Exaltar
Author

Woop woop, been waiting for a guide on this! Thanks

Seany
Author

Nice! Notification gang! To anyone interested, you can create panoramic images with Stable Diffusion. Try: "skydiving view over chicago, (((monoscopic 360 vr))), ((hdri))"

banzai
Author

My webui looks different: it's not in dark mode, and when I click Dreambooth there's no "LORA" option.

DarkFactory
Author

I'm getting this error when training a model with Dreambooth:
'Exception training model: 'No executable batch size found, reached zero.'.
Can you help, please?
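
For context on that message: it is what a batch-size auto-finder reports when it halves the batch size after every out-of-memory failure and reaches zero without any run succeeding. A simplified, self-contained re-implementation of that retry loop (a sketch of the idea, not the library's actual code):

```python
def find_executable_batch_size(train_step, starting_batch_size=16):
    """Keep halving the batch size on OOM until a run succeeds.

    `train_step(batch_size)` should raise MemoryError (standing in for a
    CUDA out-of-memory error) when the batch doesn't fit in VRAM.
    """
    batch_size = starting_batch_size
    while batch_size > 0:
        try:
            return train_step(batch_size)
        except MemoryError:
            batch_size //= 2  # retry with half the batch
    # Even batch size 1 ran out of memory.
    raise RuntimeError("No executable batch size found, reached zero.")
```

In other words, by the time you see this error even a batch size of 1 failed to fit, so the usual fixes are the memory-saving options from the video: lower resolution, fp16, or a lighter attention setting.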

borannb
Author

Here it is, the LoRA clip!
Awesome 👍

KnutNukem
Author

Haha, I knew this video was coming. I wonder whether, with the spare VRAM, you might reach a checkpoint faster by doubling the batch size, getting this down to closer to 3-4 min.
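
The arithmetic behind that guess can be sketched: for a fixed number of epochs, the number of optimizer steps falls in proportion to the batch size, so as long as the GPU processes the doubled batch in less than twice the per-step time, total wall time drops. A toy helper (the function and its names are mine, not from the extension):

```python
import math

def optimizer_steps(num_images, epochs, batch_size):
    """Steps needed to show every image `epochs` times at a given batch size."""
    return epochs * math.ceil(num_images / batch_size)
```

For example, 20 training images over 10 epochs take 200 steps at batch size 1 but only 100 steps at batch size 2, which is where the "roughly half the time, given spare VRAM" intuition comes from.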

sharkeys
Author

Thanks. Still no one has explained how to use prior preservation with a classification dataset correctly. I hear that's the most accurate way to get a winning training run. I'd love a tutorial on that; no one is giving it away.

p_p