Stable-Diffusion: Kohya Simple-Captioning (FAST!)

Here's the easy way to auto-caption your character's dataset with Kohya for Stable Diffusion!!
Big Shout-Outs to Le-Fourbe for walking me through the process!!
If you enjoyed this video, please consider becoming a Member :)
Or joining the Patreon Squad directly:
If you prefer to join with Kofi instead, you can find me here!
It makes a huge difference, and really helps ensure I'm able to make the best videos I possibly can, free for everyone here on YouTube -
If you'd like to join the Community Discord, we'd love to have you here at:
Custom Character AI-Training Tutorial Series Link:
---
#stablediffusion #aiart #ai
---
If you like my Demo Character, you can find her on my ArtStation Store here!
If you're specifically an Unreal-5 Developer, then you can find my Unreal Marketplace here:
If you're interested in Learning Unreal-5 FAST - You can follow everything I've learned so far in my "UE5 Speed Tutorial Playlist" here:
---
Art From Thumbnail can be found on Reference Hub's ArtStation Here:
Comments

EDIT: I've learned a bit more since uploading the video, and it turns out it's actually better to have a variety of different backgrounds, so if you have no green-screen, it's really not a big deal and probably OK -

Next video, I'll show you how to do "manual" captioning, since I think it's good to be able to do both -
Quick note, at 3:00 - I said "The simpler your caption, the more flexible the training can be" - But what I should have said was "The simpler you keep it, the *easier the training will be" - We'll talk more about this in a later video, but if you need any help, be sure to check out our Discord under "AI-Questions" below!

TheRoyalSkies

>Subscribed for blender tutorials
>learning stable diffusion
🤣

RinKin

Because of my short attention span, your videos are the only tutorials I find useful.

Unknown-osnb

I’d say that it goes through the images in that folder 20 times per epoch.
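That matches the (assumed) Kohya folder-naming convention, where the number prefixed to the dataset folder, e.g. "20_mycharacter", is the per-epoch repeat count. A minimal sketch of how such a name breaks down ("mycharacter" is just a placeholder):

```python
# Sketch (assumed Kohya convention): a dataset folder named "20_mycharacter"
# means every image inside is repeated 20 times per epoch.
def parse_repeats(folder_name: str) -> tuple[int, str]:
    """Split 'N_name' into (repeat count, dataset name)."""
    prefix, _, name = folder_name.partition("_")
    return int(prefix), name

print(parse_repeats("20_mycharacter"))  # (20, 'mycharacter')
```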

terjeoseberg

I'd recommend doing SDXL instead; I would suggest PonyXL v6. It's pretty great, and the coherence to prompts is light years above anime SD1.5 models. You'll need higher-resolution datasets, but that's honestly a good thing when comparing 512 to 1024.

kernsanders

Keywords: don't think too much about the name, just make a short word:
Yra, Fxy, Syw, ahra, etc. (the longer the word, the more tokens (= space in the prompt) it takes, and therefore the more weight it will have compared to other words).
The keyword should be trained next to the class of subject that you train, like:
"a woman Yra" OR "Yra, woman", etc. The first word gets transformed first.
THEN you input the background description, as we want to separate it.
Ideally we would want a different background every time so the word "green" doesn't get affected, but that would take too much time, and results have proven sufficient on my end.

Usually, we train a close description of the whole character so prior knowledge gets transferred to the new data, but it's a little trickier to prompt after that.
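The scheme above can be sketched as a tiny script that writes a "keyword, class, background" caption into a .txt file next to each image, which is the caption format Kohya-style trainers read. The keyword "Yra", the class, and the background string here are placeholders for whatever your dataset uses:

```python
# Minimal sketch of the captioning scheme: "<keyword>, <class>, <background>"
# saved as image.txt alongside image.png. All three strings are assumptions;
# swap in your own trigger word, subject class, and background description.
from pathlib import Path

KEYWORD = "Yra"                  # short, token-cheap trigger word
SUBJECT_CLASS = "woman"          # class the keyword is trained next to
BACKGROUND = "green background"  # kept separate so it can be prompted away

def write_caption(image_path: Path) -> Path:
    """Write the caption .txt with the same basename as the image."""
    caption = f"{KEYWORD}, {SUBJECT_CLASS}, {BACKGROUND}"
    txt_path = image_path.with_suffix(".txt")
    txt_path.write_text(caption, encoding="utf-8")
    return txt_path

# Usage: for img in Path("20_mycharacter").glob("*.png"): write_caption(img)
```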

lefourbe

Not sure if I missed something, but where do we get the training images?

softsmolflower

I want to train a Stable Diffusion model with around 1800 pictures, but it's very slow. How can I solve it? With "20_modelfolder" and 10 epochs, it gives me 23-90k steps.
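For rough intuition (a sketch of the usual step arithmetic, not Kohya's exact code): total optimizer steps scale with images × folder repeats × epochs ÷ batch size, so with 1800 images the "20_" repeat prefix is what blows the count up, and lowering repeats or raising batch size shrinks it fast:

```python
# Sketch: total training steps = images * repeats * epochs / batch size,
# rounded up. Not Kohya's internal code, just the back-of-envelope math.
def total_steps(num_images: int, repeats: int, epochs: int,
                batch_size: int = 1) -> int:
    return (num_images * repeats * epochs + batch_size - 1) // batch_size

# 1800 images, 20 repeats ("20_modelfolder"), 10 epochs, batch size 1:
print(total_steps(1800, 20, 10))      # 360000
# Dropping repeats to 2 and using batch size 4 already helps a lot:
print(total_steps(1800, 2, 10, 4))    # 9000
```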

enescelik

What happened to Blender, bro...

AI is theft, we wanna make our own stuff

Mente_Fugaz