Use Your Face in AI Images - Self-Hosted Stable Diffusion Tutorial

How do you make your own AI Generated Images? And how do you train Stable Diffusion to use your face? Today, I'm going to show you how to install and run your own AI Image Generation Server, and teach it who you are.
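
If you'd rather script generation than click through a web UI, here is a minimal sketch using Hugging Face's diffusers library; the checkpoint name and prompt are placeholders, and it assumes a CUDA GPU with diffusers, transformers, and torch installed:

```python
# A minimal sketch of local Stable Diffusion generation with the diffusers
# library, as an alternative to a full web UI. The checkpoint and prompt
# below are placeholders; any SD 1.x checkpoint works.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,  # half precision to fit consumer VRAM
).to("cuda")

image = pipe("portrait photo of an astronaut, studio lighting").images[0]
image.save("astronaut.png")
```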

But first... What am I drinking???

From Barsidious Brewing, it's the BLACK Stout Ale. For an 8% stout, this hits WAY above its class in body and flavor. Highly recommended.

Support me on Patreon and get access to my exclusive Discord server. Chat with me and the other hosts of Talking Heads all week long.

Music:
No Good Layabout by Kevin MacLeod

Comments

I spent over 2 hours on other videos and was so confused. This video was simple, to the point, and got me started on my AI goals. Thanks!

jonahrothenberger

To view GPU utilization during AI computation under Windows, you need to switch the Task Manager graph from 3D to CUDA. Otherwise it may look like the GPU is doing nothing :)
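
If you'd rather watch utilization outside Task Manager, here is a minimal sketch that polls the NVIDIA driver directly, assuming the nvidia-ml-py (pynvml) package is installed:

```python
# A minimal sketch that prints GPU utilization and VRAM use once per
# second via NVML, the same counters nvidia-smi reads.
import time
import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)  # first GPU
try:
    while True:
        util = pynvml.nvmlDeviceGetUtilizationRates(handle)
        mem = pynvml.nvmlDeviceGetMemoryInfo(handle)
        print(f"GPU: {util.gpu}% | VRAM: {mem.used / 2**20:.0f} MiB")
        time.sleep(1)
finally:
    pynvml.nvmlShutdown()
```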

janliberda

Please train the model on Charlie. We need AI generated cat pictures… “Charlie and Rambo fighting a dragon in the forest”

jdl

This is an awesome demo and guide. Thanks very much! I will be playing around with this over the weekend. Hopefully I can figure out how to merge different training models to fine tune the results I'm looking for.

xero

Thanks for the video. This was much easier than I assumed it would be. This will be a lot of fun to play with in conjunction with an online DnD session I recently started with friends.

dustinphillips

Pumping out tons of images isn't usually the best route. Sampling Steps of 25-30 and a CFG Scale of 8-9 generally work better, and Restore Faces usually makes things look a lot better too. Adding negative prompts is also very useful, and you can click the little recycle icon next to Seed to reuse the last seed and see how individual tokens affect the outcome, to fine-tune the result further.
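
For anyone scripting this, those settings map onto the web UI's txt2img API if you launch it with the --api flag; a minimal sketch, where the prompt text is only a placeholder:

```python
# A minimal sketch of those settings against the AUTOMATIC1111 web UI's
# /sdapi/v1/txt2img endpoint; assumes the UI was launched with --api.
import base64
import requests

payload = {
    "prompt": "portrait photo, detailed, studio lighting",
    "negative_prompt": "blurry, deformed, extra fingers",
    "steps": 30,            # 25-30 sampling steps
    "cfg_scale": 8,         # CFG Scale of 8-9
    "restore_faces": True,  # the Restore Faces toggle
    "seed": 1234,           # fix the seed to compare prompt-token changes
}
r = requests.post("http://127.0.0.1:7860/sdapi/v1/txt2img", json=payload)
r.raise_for_status()
with open("txt2img.png", "wb") as f:
    f.write(base64.b64decode(r.json()["images"][0]))
```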

VanHonkerton

Thanks, this was right on time. I just installed it and needed this tutorial.

bamnjphoto

Jeff, thanks for another informative and entertaining video!!

bdhaliwal

I set mine up months ago and still haven't used all the features yet! It's a neat thing and I can't wait to see where it goes. Edit: How did you get it to use both GPUs? From everything I read while setting mine up, you couldn't use two at the same time?

Lft

Thanks for the video! I'm looking for cheap GPUs to fine-tune LLMs like Llama 2. For that, a lot of GPU RAM is required (>16 GB even for a small model), which is how I arrived at your channel while looking into the M40.

My main concern is computation power: I have seen tests with the 3090 and 4090. Have you tested any large model to see whether the M40's cores can deal with these newer NN models?
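
A quick way to sanity-check what a card like the M40 actually reports before committing to it; a minimal sketch assuming PyTorch built with CUDA support:

```python
# A minimal sketch that lists every visible CUDA device and its VRAM,
# useful for confirming a card clears the >16 GB bar mentioned above.
import torch

for i in range(torch.cuda.device_count()):
    props = torch.cuda.get_device_properties(i)
    print(f"GPU {i}: {props.name}, {props.total_memory / 2**30:.1f} GiB VRAM")
```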

Thanks in advance ;-)

(Great channel by the way; fun videos that I've added to my watch list.)

codigoBinario

So if I wanted to use an image or a basic drawing directly as an input together with a prompt, just like the "diffuse the rest" variation, how would I go about this? I've found that being able to pick options and re-run them with a text prompt, actively changing the prompt as needed, is a pretty effective way to vastly increase the quality of the images. It also has the benefit that I don't need to pre-train the model nearly as much, if at all in some cases. Being a human filter is kind of cool too, because you get to learn how the algorithms work and can engineer some amazing stuff with that knowledge.
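
For reference, here is a minimal sketch of that image-plus-prompt (img2img) workflow against the AUTOMATIC1111 web UI's API, assuming it was launched with --api; the input file name and prompt are placeholders:

```python
# A minimal sketch of img2img: send a drawing plus a text prompt to the
# /sdapi/v1/img2img endpoint and save the generated result.
import base64
import requests

with open("sketch.png", "rb") as f:
    init_image = base64.b64encode(f.read()).decode()

payload = {
    "init_images": [init_image],
    "prompt": "a watercolor landscape based on this drawing",
    "denoising_strength": 0.6,  # lower values keep more of the input drawing
    "steps": 30,
}
r = requests.post("http://127.0.0.1:7860/sdapi/v1/img2img", json=payload)
r.raise_for_status()
with open("img2img.png", "wb") as f:
    f.write(base64.b64decode(r.json()["images"][0]))
```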

Maybe someday someone can build an algorithm that uses dynamic prompt switching in an intelligent way to make original pieces of art that are unique and beautiful.

_shadow_

You have to play around with CFG when it's your own face. Also, more steps (30+) with Euler does help!

wagmi

Hey man, thanks for the vid. I love it. I have a quick question, please.

I followed your steps, and the model just creates images identical to the ones I uploaded. I've managed to get some good results with img2img, but txt2img just replicates the images I used to create the person model.

I tried merging it with some other checkpoints, but then the face loses its character, and finding a balance between an accurate face and another model, at all the different ratios, hasn't proven successful.

Do you have any advice/tips?

fractanimal

You can bulk-resize images in Windows using the official Image Resizer PowerToy.
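
If you're not on Windows, or want it scripted, here is a minimal sketch with Pillow that does the same job; the folder names are placeholders, and 512x512 matches what SD 1.x training expects:

```python
# A minimal sketch that center-crops and resizes every JPEG/PNG in ./raw
# to 512x512 and writes the results to ./resized.
from pathlib import Path
from PIL import Image, ImageOps

src, dst = Path("raw"), Path("resized")
dst.mkdir(exist_ok=True)
for p in src.iterdir():
    if p.suffix.lower() in {".jpg", ".jpeg", ".png"}:
        img = Image.open(p).convert("RGB")
        img = ImageOps.fit(img, (512, 512))  # crop to square, then resize
        img.save(dst / f"{p.stem}.png")
```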

alpenfoxvideo

Thanks Jeff, this is exactly what I needed! Is there a way to get better-looking results, though? I've been spoiled by MidJourney, and anything less looks plain bad. Openjourney maybe?

dragodin

How do we get the dark UI for Stable Diffusion? When I installed mine it was white.

Reggieincontrol

Now, would this work for achieving a particular style? Such as a photographer training on portraits of various people to model the color grade and lighting style?

OBERHighCommand

You'd probably need libraries of each concept you wish to merge yourself with, i.e. a Star Trek library plus the specific characters you want to look like, or the background you want to link with. Also more face pics from other angles; yours were a little flat, front-facing. The more libraries, the more you can mesh... but I haven't done it before. Let me know if this turns out to be true.

swyftty

I get an error when I click Train to start processing images: Exception training model: ''NoneType' object is not subscriptable'.
What should I do?

sergeykudryashov

I don't get it. I do exactly that, but in the end when I enter my prompt, literally nothing happens. It's as if the model didn't change at all. I'm wondering if there's an option I enabled that isn't working.

jonathaningram