Testing different GPUs with Stable Diffusion | RTX 3060 12 GB vs other GPUs

OK, let's find out whether or not the RTX 3060 12 GB is good at Stable Diffusion tasks.

📺 Previous videos:

Monitor your VRAM while you run A.I. Stuff (my VRAM monitoring tool)

Kohya bits and bytes errors | Kohya v22.6.2 vs New version

A better A.I. GPU for the money | RTX 3060 12gb

new RTX 4000 series super cards | A.I. GPU's

LoRA training settings tested and explained | Stable Diffusion | Kohya | Automatic1111

🔗 links

LoRA training batch converter on my github page

be quiet! Pure Wings 2 80mm pc fan

⌚ Timestamps
00:00 - intro
01:02 - testing methodology
07:33 - SD 1.5 generating in A1111
11:22 - SD 1.5 generating in Forge
13:27 - SDXL generating in Forge
16:25 - SD 1.5 LoRA training in Kohya
19:25 - SDXL LoRA training in Kohya
21:49 - summary and conclusions

‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐
💲 My patreon:

🍵 Buy me a coffee:
‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐
Check out my music channel!
‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐
😆💗 Social Links:
‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐

Below is the gear I use.

►Mics:
►Tripods:

►Webcams:

►Gimbal:

►Hardware:

►Lighting:

weeylite S03 Pocket RGB video light

Full disclosure: As an Amazon Associate I earn from qualifying purchases.
This means that if you make a purchase, it doesn't cost you any more money, but I will earn a commission.

#ai #stablediffusion
Comments

Really appreciate the effort you put into this video 👍

SFzip
Wait for the 5090 ;)
Love your work, keep it up! The nitty-gritty and tinkering is tops.

ruiztv
Would be interested to see how the 4060 Ti (16 GB) stacked up against the lot.

JD-jdeener
Loving this kind of video, dude!! Keep 'em coming!! Love the JaysTwoCents as well.

Do you know anything about... I don't know... off the top of my head... my 4080 Super running out of CUDA memory, error thingy??

Also wanna tell you about FluxGym; it trains a Flux LoRA in about an hour (depending on the number of images + repeats + epochs and such)

haydnrayturner
Recently found some flux.1d models quantized to 3-bit; I've been able to run them consistently on my 3070.
Though if I upgrade anytime soon with similar goals to now, I'll definitely get a GPU with way more VRAM.

honichi
As someone with a 3070 and plenty of time using various generative AI: all of the A1111-adjacent stuff has a great UI but terrible optimization. You can do 16 times the simultaneous tasking with 1/16th the VRAM on ComfyUI; it just takes a collegiate level of RTFM, and I hate it. Also, running --medvram or --lowvram on the 3070/3080, it works just shy of a 3080 Ti for generation on A1111. Dunno about Forge though.

barvin
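For anyone trying the flags mentioned in the comment above: --medvram and --lowvram are real AUTOMATIC1111 launch options, normally set through the COMMANDLINE_ARGS variable in the launcher script. A minimal sketch, assuming the standard webui-user.sh layout (the trade-off descriptions in the comments are approximate):

```shell
# webui-user.sh fragment for AUTOMATIC1111's web UI (Linux/macOS; on Windows
# put `set COMMANDLINE_ARGS=--medvram` in webui-user.bat instead).
# Pick one of the two flags:
export COMMANDLINE_ARGS="--medvram"    # moderate VRAM savings, mild slowdown
# export COMMANDLINE_ARGS="--lowvram"  # aggressive VRAM savings, much slower
```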
Bro, the processing time is embedded in the metadata of the image file. LOL

manofmystery
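The comment above is pointing at a real mechanism: A1111-style UIs write their generation settings into a PNG text (tEXt) chunk keyed "parameters" (whether a timing field appears depends on the UI and version). A minimal stdlib-only sketch of reading such chunks; the byte stream and settings string here are invented for illustration, not real web UI output:

```python
import struct
import zlib

PNG_SIG = b"\x89PNG\r\n\x1a\n"

def chunk(ctype: bytes, data: bytes) -> bytes:
    """Serialize one PNG chunk: 4-byte length, type, data, CRC-32."""
    return struct.pack(">I", len(data)) + ctype + data + struct.pack(">I", zlib.crc32(ctype + data))

# Build a minimal byte stream carrying a "parameters" tEXt chunk, the chunk
# A1111-style UIs use for generation settings (contents here are made up).
params = b"parameters\x00a photo of a cat\nSteps: 20, Sampler: Euler a"
blob = PNG_SIG + chunk(b"tEXt", params) + chunk(b"IEND", b"")

def read_text_chunks(data: bytes) -> dict:
    """Walk the chunk list and collect {keyword: value} from tEXt chunks."""
    out, pos = {}, len(PNG_SIG)
    while pos + 8 <= len(data):
        (length,) = struct.unpack(">I", data[pos:pos + 4])
        ctype = data[pos + 4:pos + 8]
        body = data[pos + 8:pos + 8 + length]
        if ctype == b"tEXt":
            # tEXt body is keyword, a NUL separator, then the text value
            key, _, val = body.partition(b"\x00")
            out[key.decode("latin-1")] = val.decode("latin-1")
        pos += 12 + length  # 4 (length) + 4 (type) + data + 4 (CRC)
    return out

print(read_text_chunks(blob)["parameters"])
```

Tools like A1111's own "PNG Info" tab read this same chunk back out of saved images.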
I've heard rumors in the past couple of months that SD has seen some performance gains on RDNA 2 and 3, yet nobody has made anything official. Would you be interested in the results on my RX 6800 XT if I get it up and running in the next couple of weeks? Though to be fair, we can already anticipate the 3060 being on par while cheaper and plug-and-play due to CUDA (ROCm really is viable only for people like me on Linux, and even then, for the best performance you wouldn't use A1111, which is what people default to).

Technerd
Hey, can you help me? My GPU fan is too noisy.

Gird-iw
Wouldn't buy a 4090 now; the prices are inflated and the 5090 is coming soon with 32 GB.

xcom