HYPERNETWORK: Train Stable Diffusion With Your Own Images For FREE!

HYPERNETWORK is a new way to train Stable Diffusion with your images, and the best part is: it's free! If you can run it, of course, since you need at least 8 GB of VRAM. This neat technology allows you to insert any character or style you want and have Stable Diffusion generate brand-new images. So in this video, I will show you how to use Hypernetwork locally on your own PC, share a few tips and tricks to get the best results, and answer the question of whether it's better than Dreambooth or not!
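
For the technically curious: a hypernetwork here is a small extra network that is trained to nudge Stable Diffusion's cross-attention layers while the base model stays frozen, which is why the output is a small file instead of a full checkpoint and why it fits in 8 GB of VRAM. A rough PyTorch-style sketch of the idea (illustrative only; the names and sizes are my assumptions, not the exact AUTOMATIC1111 implementation):

    import torch.nn as nn

    class HypernetworkModule(nn.Module):
        # A small residual MLP attached to the keys (and another to the
        # values) of each cross-attention layer; only these get trained.
        def __init__(self, dim: int):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(dim, dim * 2),
                nn.ReLU(),
                nn.Linear(dim * 2, dim),
            )

        def forward(self, x):
            # Learns an offset on top of the original activations, so the
            # frozen base model is steered rather than overwritten.
            return x + self.net(x)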

Did you manage to make it work? Let me know in the comments!
▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬
SOCIAL MEDIA LINKS!
▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬

#stablediffusion #hypernetwork #stablediffusiontutorial
▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬
►► My PC & Favorite Gear:
Recording Gear:
▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬
WATCH MY MOST POPULAR VIDEOS:
RECOMMENDED WATCHING - My "Stable Diffusion" Playlist:

RECOMMENDED WATCHING - My "Tutorial" Playlist:

Disclosure: Bear in mind that some of the links in this post are affiliate links, and if you go through them to make a purchase, I will earn a commission. Keep in mind that I link these companies and their products because of their quality, not because of the commission I receive from your purchases. The decision is yours, and whether or not you decide to buy something is completely up to you.
Comments

I've started saving every 50 steps, since it lets you see more precisely when things start to go bad. Also, if you don't want a precise style, you can use a more general prompt or leave it out entirely. The most important thing is to train until things go bad, then take the checkpoint behind the best image and keep training it at a lower learning rate. Presumably (haven't tried it yet) you can take it even further with a third iteration.

cdeford
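
(If you want to automate that two-stage approach: recent versions of the AUTOMATIC1111 WebUI let you enter a stepped schedule directly in the training tab's learning rate field, for example

    0.00005:1500, 0.000005

which trains at 0.00005 until step 1500 and then drops to 0.000005 for the rest of the run. The step counts here are just illustrative; pick them based on when your previews start to degrade.)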

Not sure if anyone has mentioned it, but if you put a checkmark in the setting "Move VAE and CLIP to RAM when training if possible" under training settings, it prevents the issue you were having with overtraining.

RobertVaughan

Honestly, I think you get pretty good results if you just want realistic faces. You don't even need that much refining. I normally train to about 1500 steps at 0.00005 first, then a few thousand more steps, and it seems pretty good for most cases. It's also far more useful to have embeddings instead of dozens of different ckpts.

bladechild

Wow this is big. Thank you so much for making this so clear.

EricFullerton

Exactly the video I needed after updating to the hypernetwork update last night, thanks!

kernsanders

Hey, great informative videos, and I've been watching you for a month or so now, but why don't you ever credit Automatic1111 for his work on the WebUI? I'm fairly sure that's what you're using when you say "Super Stable Diffusion". Apologies in advance if I'm missing something.

robd

It seems that a 3090 Ti can train 2000 steps at 0.00005 in 10 minutes, which is nice.
Definitely going to use this way of creating embeddings instead of the standalone repo now!

xdeathknightx

Would absolutely love to see you make some Stable Diffusion news videos about the cool things happening with this tech. It's hard to keep up because the info is so spread out and so many things are being developed. If you have time, that is.

Thanks for your hard work!

Severance

The sculpting analogy is the best ever!

TheRoninteam

I've been waiting for this. Thanks for all your tut vids :)

jaybenton

You are so helpful! Thank you for taking the time to make these. You are the only person I've found on YT who walks through it like this.

dallassnyder

It is true that it takes longer, but it is free. And I had planned to create many models of different characters. Thanks for the info! 🔥🔥

harmondez

God, what a great time to be alive. This stuff is so cool, thank you for the video!

BenjaminK

Tried it this morning, but wasn't satisfied with the results...
That's because I didn't think of the second part of the process.
I'm trying it again right now; I'm sure this will be better.
Thank you 👍

lucablight

Very nice! Great Content as always! 💯👍

ARTificialDreams

Thanks for the vid. How do you then use the hypernetwork to generate txt2img with the trained model? Is it done by selecting the hypernetwork on the settings page? Does the prompt text include a reference to the name of the hypernetwork?

jean-christophepaulau

How do you invoke the character after training? Do you have to select the right hypernetwork in settings, and then you can use it in txt2img? Do you invoke it with the full name including the suffix (like agentX-900) or is the base name enough?

darnoq
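
(A pointer for both questions above, with the caveat that behavior varies by WebUI version: you apply a trained hypernetwork by selecting it in the Hypernetwork dropdown on the settings page; it then affects every generation, with no special token needed in the prompt, and you pick the saved file including its step suffix, e.g. agentX-900. Newer builds also let you reference it inline in the prompt with the extra-networks syntax, for example

    a portrait photo <hypernet:agentX-900:1.0>

where agentX-900 is the filename without the .pt extension and 1.0 is the strength.)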

Thank you very much for the tutorials. Here's a quick tip for you: in Photoshop for Windows, hold down ALT + right-click and drag horizontally. Dragging left decreases the brush size, while dragging right increases it.

lazersondesign

Right on! Thanks for the vid, and I do agree. The RunPod method has created some great results for me.

Theexplorographer

Hi all, very nice, simple and helpful video!
Unfortunately the field "Preview prompt" at 5:32 isn't shown in my web UI.
Does anyone have an idea how to fix this?
Yours, Steve <3

Steve-sz