Using Stable Diffusion (In 5 Minutes!!)

Here's the fastest way to instantly start using Stable Diffusion online with ZERO set-up!!
Stable Diffusion Official Website:
Link To Online Un-Censored Stable Diffusion:

AI Generated Art - (Commissions OPEN) Contact:
My ArtStation Store Link:

If you enjoyed this video, please consider joining the Support Squad by clicking "Join" next to the like button, or help protect the channel on Patreon. It really helps out and I truly appreciate the support -

Blender RIGGING & ANIMATION SERIES:
Intro To Unity Programming FULL Series:

I also have a Steam game that combines Star Fox with Ikaruga gameplay!
It took over 3 years to create :)

As always, thank you so much for watching, please have a fantastic day, and see you around!

- Royal Skies -
#stablediffusion #aiart #art
---
Comments

Official Stable Diffusion Website Link Here:
There are other sites you can use to access Stable Diffusion for free -
Aitrepreneur has a great vid about them here:

My only gripe with the official website is that it does have an NSFW filter on it, which seems to have been added recently :(
The only online free website I've found that you can use with ZERO censorship is this one:
But, it's extremely slow, so if you guys know any better/faster online versions of SD that don't have filters, definitely let me know down below!!

TheRoyalSkies

Websites that default to dark mode immediately show they're superior. That's a fact.

da_roachdogjr

In my thousands of seeded experiments locally, I have found that:
*_Samplers_* : (this changes the technique used to make the picture; 6 are basically the same, the 2 ancestral ones are different)
*LMS* and *Heun* give the most consistent results on average
*Euler* and *DDIM* are a better-quality version of those, sometimes with more consistent shapes
*Dpm2* and *PLMS* are slight variations on that again, but not as good in my opinion
*Dpm2A* and *EulerA* are the ones labelled as ancestral versions; they usually give a different composition than the other samplers, but a similar style.
I sometimes test between these last 2 to see which gives the better, more different result. Personally I think Dpm2A is better more consistently.
Since I am testing locally for speed, I test with LMS first, then test the others for quality, but overall I use DDIM and Dpm2A.
The automatic1111 version also has DPM fast and DPM adaptive; Adaptive is good, Fast is a lot more contrasted, but at about half scale it is similar to the others.
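For anyone scripting this outside the webui, here is a minimal sketch of swapping samplers, assuming the Hugging Face diffusers library and the public CompVis/stable-diffusion-v1-4 checkpoint (both assumptions; the webui sampler names above only roughly map to these scheduler classes):

```python
# Minimal sketch: swap the sampler (scheduler) on a diffusers pipeline.
# The checkpoint name and the name-to-class mapping are assumptions.
import torch
from diffusers import (
    StableDiffusionPipeline,
    LMSDiscreteScheduler,             # "LMS"
    DDIMScheduler,                    # "DDIM"
    EulerDiscreteScheduler,           # "Euler"
    EulerAncestralDiscreteScheduler,  # "Euler a" (ancestral)
    HeunDiscreteScheduler,            # "Heun"
    KDPM2AncestralDiscreteScheduler,  # "DPM2 a" (ancestral)
)

pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16
).to("cuda")

# Re-use the existing scheduler config so only the sampling technique changes.
pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config)

image = pipe("a castle on a cliff at sunset", num_inference_steps=30).images[0]
image.save("castle_euler_a.png")
```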

*_Steps_* : (this is what contributes to the time it takes to render; higher is longer)
*30* is a relatively consistent, low-quality, fast render
*60* is medium quality and slower
*90* is high quality and the slowest, but above 90 really does not change a whole lot.
Below 30 the shapes are not consistent and results are more random
Below 15 is mostly just artistic noise, but it can sometimes be good if that is what you are going for.
Going extremely high, like 250, is usually not much different than 90
When you get into the 500 or 1000 range it will sometimes change the picture to some even newer layout
Beyond that can do some stuff, but it is not worth the render time. 90 is the best overall, and what I use for quality and consistency
Technically the lowest consistent mark is *80*; it varies by picture, and sometimes 75 works, but I still have to test more to narrow that down. Somewhere between 75-80 is where the consistency starts.

*_Scales_* : (this does not add anything to the render time; there is a sketch after this list for sweeping steps and scales together)
*6* is a good range for very random, artistic results; lower definitely gives you a lot more variety but less detailed shapes
*13* is a bit more consistent
*20* has a lot better consistency
However, on my custom stable diffusion install I actually go up to:
*30* for even more consistency, this is what I use most of the time
*45* is kind of a good, very consistent setting
*60* is more of an extreme-contrast consistent look but works decently too
Going beyond 60, like 90 or more, gets weird and too literal, so it is more stark contrast and noisy
I have gone up to 100 or 250 on this, but it really does not help; 60 is about the max I will go, maybe 80 if I am curious
Technically the lowest consistent mark is *25*; below that gets more random/artistic, above that gets more contrasted/simplified
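A rough stand-in for sweeping steps and scales together (the same idea as the X/Y plots mentioned further down), again assuming diffusers; the prompt, seed, and grid values are placeholders, not the commenter's exact settings:

```python
# Sweep steps and guidance scale over the same seed so only those two knobs
# change between images (diffusers-based sketch; values are illustrative).
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16
).to("cuda")

prompt = "a lighthouse in a storm, oil painting"  # placeholder prompt
seed = 1234

for steps in (30, 60, 90):            # render time grows with steps
    for scale in (6, 13, 20, 30):     # guidance scale plays the role of "Scales" above
        generator = torch.Generator("cuda").manual_seed(seed)  # same seed for every cell
        image = pipe(
            prompt,
            num_inference_steps=steps,
            guidance_scale=scale,
            generator=generator,
        ).images[0]
        image.save(f"grid_steps{steps}_scale{scale}.png")
```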

*_Dimensions_* : (at least for stable diffusion 1.4)
If you go above 704, or maybe 896 at most, you start losing coherence and get duplicates, because the model was trained on 512x512 images.
So if you want portrait, use *512x704*, or the opposite for landscape.
However, if you want something closer to a 16:9 ratio, do a landscape of *896x512*
Sometimes you can get a great render at high 1024 resolutions, but most of the time it will have duplicates unless you have good negative prompts (which most online websites do not have)
I should add that on my custom setup I can do about 1024x1792 and get pretty decent results with a large set of negative prompts, but you will always have duplicates.
There is a highres fix option in the automatic1111 version which basically just renders at half resolution and then upscales, so it keeps consistency and does not produce duplicates.
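Here is a sketch of the portrait size above plus a crude render-low-then-upscale pass, assuming diffusers; this only approximates what the webui highres fix does, and the prompt, negative prompt, and sizes are placeholders:

```python
# Two-pass sketch: render at a near-native portrait size, then upscale with
# img2img so detail is repainted at the larger size. Only an approximation of
# the automatic1111 highres fix; in recent diffusers the init image argument
# is `image` (older releases used `init_image`).
import torch
from diffusers import StableDiffusionPipeline, StableDiffusionImg2ImgPipeline

model_id = "CompVis/stable-diffusion-v1-4"  # assumed checkpoint
txt2img = StableDiffusionPipeline.from_pretrained(
    model_id, torch_dtype=torch.float16
).to("cuda")

prompt = "portrait of a knight in ornate armor"       # placeholder prompt
negative = "duplicate, extra limbs, blurry"            # placeholder negative prompt

# Pass 1: portrait 512x704, as suggested above.
base = txt2img(prompt, negative_prompt=negative, width=512, height=704).images[0]

# Pass 2: upscale the small render and let img2img re-detail it.
img2img = StableDiffusionImg2ImgPipeline.from_pretrained(
    model_id, torch_dtype=torch.float16
).to("cuda")
big = img2img(
    prompt=prompt,
    negative_prompt=negative,
    image=base.resize((1024, 1408)),  # 2x the first pass
    strength=0.55,                    # how much the upscale pass may repaint
).images[0]
big.save("knight_highres.png")
```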

I have been testing a lot with these using X/Y plots in the *_automatic1111_* stable diffusion with the waifu diffusion model set, to get more figures out of simple prompts. But the same stuff applies broadly; it really does vary based on what you are trying to make. Hope some of that info helps and is useful in your tests to get better results.

Cheers!

*_TLDR_* :
Dimensions that usually have duplicates but are close to 16:9: 896x512
Dimensions that are usually duplicate-free and better: 704x512
Steps with consistent pictures: 80 (below 60 loses some shapes, below 30 is more random, above 90 really does not change much)
Scales with consistent pictures: 25 (below that is more random/artistic, above that is more contrasted/simplified, higher sometimes reduces duplicates at cost of details and contrast)

For quickly testing prompts and iterating on ideas I use:
Steps: 30
Scales: 30
Sampler: LMS (could do DDIM or Euler instead, since at lower steps they take about the same time, but LMS again gives the more consistent picture, though it can be quite noisy)

For rendering the higher-quality version of that I use:
Steps: 90
Scales: 30
Sampler: DDIM (most of the time I just use DDIM here, but I may check out Euler or Heun as well, or Dpm2A if I want to see an alternate interpretation)

WebUI: automatic1111 (which is from the rentry org voldy link that is mentioned in another comment. I have a modified ui config json to tweak the range of parameters)
Model: Waifu Diffusion 1.2 (currently until I can test stable diffusion 1.5 or the next waifu diffusion with a larger training set)
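The two presets above could be written as plain dicts for reuse in a diffusers script; the mapping of the webui's LMS and DDIM names to these scheduler classes is an assumption, and the sketch leaves pipeline construction to the caller:

```python
# The quick-test and quality presets from the comment, as reusable dicts.
# Scheduler classes are my assumed diffusers equivalents of "LMS" and "DDIM".
from diffusers import LMSDiscreteScheduler, DDIMScheduler

PRESETS = {
    "quick_test": {   # iterate on prompt ideas fast
        "num_inference_steps": 30,
        "guidance_scale": 30,
        "scheduler_cls": LMSDiscreteScheduler,
    },
    "quality": {      # slower, higher-quality render
        "num_inference_steps": 90,
        "guidance_scale": 30,
        "scheduler_cls": DDIMScheduler,
    },
}

def run_preset(pipe, name, prompt):
    """Swap the scheduler and run one generation with the chosen preset."""
    p = PRESETS[name]
    pipe.scheduler = p["scheduler_cls"].from_config(pipe.scheduler.config)
    return pipe(
        prompt,
        num_inference_steps=p["num_inference_steps"],
        guidance_scale=p["guidance_scale"],
    ).images[0]
```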

Cigam_HFden

You should definitely make a setup tutorial on having a self-hosted stable diffusion kit. I mean, it's not like you condensed years of experience of 3D animation and programming into a playlist of videos that is easily accessible and in plain English for everyone to learn and improve upon 😶‍🌫

Larry-Jiao

Now that they added the image editor, this is top tier. Also, thanks, I was confused about most of these things.

DrFeho

Okay but for those who do have custom PCs, can you make the more complicated tutorial?

Jojo

I don't know anything about image AI, but apparently a diffusion AI breaks down images into language components and then grabs pixels (Picture Elements) and texels (Texture Elements) as a palette to create the resulting image. The sampler techniques affect which pixels and texels are taken (sampled) from the selected images, and the steps (sample size), so the AI may decide to rearrange the output image accordingly. Techniques that don't change the image much regardless of seed and sample size are called 'stable', while others are unstable. So the sampler is a denoising filter that recombines selected image pixels and texels into 'learned' forms based on the text of the input sentence. However, Stable Diffusion doesn't store the images directly; they're stored as weighted values in its neural network, but that's another topic.

Here is a video showing the same sentence, which should result in the same image when using the same sampling method, but with the sample steps increased for each output image, from 1 to 500. So the only change between images is the number of steps.

petergostelow

These videos have given me a new boost of energy to try generating images. Keep up the great work. Will there be a video like this for the local version of SD? A few extra options and tabs and I'm just mashing buttons :D

DarknessRifter

You can see the differences between the samplers by using a fixed seed. Depending on the image style you want, some samplers are better than others; in general I got better-looking results using Euler Ancestral.

jnjairo

The best way to see how the samplers affect the results is to use the same prompt and SEED for each generation. Two of the samplers will actually give visually different results than the rest; the others are just slight differences in tone, shading, and detail. "Euler A" and "DPM2 a" are the ones that give very different compositions.
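A minimal sketch of that fixed-seed comparison, assuming diffusers; the seed is re-created before each run so the sampler is the only thing that changes (checkpoint name and prompt are placeholders):

```python
# Compare samplers on an identical seed so composition differences come only
# from the scheduler (the ancestral ones should stand out, per the comment).
import torch
from diffusers import (
    StableDiffusionPipeline,
    EulerAncestralDiscreteScheduler,   # "Euler A" (ancestral)
    KDPM2AncestralDiscreteScheduler,   # "DPM2 a" (ancestral)
    DDIMScheduler,                     # non-ancestral baseline
)

pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16
).to("cuda")

prompt = "a red fox in a snowy forest"  # placeholder prompt
for name, cls in [
    ("euler_a", EulerAncestralDiscreteScheduler),
    ("dpm2_a", KDPM2AncestralDiscreteScheduler),
    ("ddim", DDIMScheduler),
]:
    pipe.scheduler = cls.from_config(pipe.scheduler.config)
    gen = torch.Generator("cuda").manual_seed(42)  # identical seed each run
    pipe(prompt, generator=gen, num_inference_steps=50).images[0].save(f"{name}.png")
```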

LumberingTroll

k_euler a can get good results from like 20-30 steps, however it changes a lot between steps, and adding more doesn't help
DDIM is great for 10-step sample images; just pump out stuff to see how your prompt looks with the DDIM express
LMS is the reliable one that you want to run at 50+ steps
K_DPM 2 is a slow one that can get higher-quality results at around 80 steps, but it's SLOW to generate those steps

ctothorp

Wow this is pretty intensive. All great things to know.

kenhiguchi

Bro, just saw your channel, you helped me learn Blender 2 years ago, thanks a lot

boluwatifeagbede

Imma keep it 100% real witchu, chief...
I thought I was tripping balls the entire time I was watchin.

marakevans

With Stable Diffusion being open, there are ways to bypass the censorship and the credit system. You just have to go outside of DreamStudio, which requires access to a fairly powerful GPU and some knowledge, and is one SD model version behind.


Thanks for this, the results are pretty interesting, better than what Craiyon has been giving me

Tallacus

I did this but it's using my CPU instead of my GPU to generate images. I don't know how to fix that.
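If this is a local PyTorch/diffusers setup (an assumption; the hosted sites in the video run on their own servers), the usual causes are a CPU-only torch build or the pipeline never being moved to the GPU. A quick check:

```python
# Sketch of diagnosing CPU-only generation in a local diffusers install.
# The checkpoint name is a placeholder; the automatic1111 webui manages
# device placement itself, so this applies to hand-rolled scripts.
import torch
from diffusers import StableDiffusionPipeline

print(torch.cuda.is_available())  # False: torch cannot see a CUDA GPU at all
print(torch.version.cuda)         # None: a CPU-only torch wheel is installed

pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16
)
if torch.cuda.is_available():
    pipe = pipe.to("cuda")        # without this, generation runs on the CPU
```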

doctirdaddy

I see some accounts designated as "danger" on stable diffusion. What does that mean?

easygamingwwiigamingchanne

Noice, I was waiting for u to actually mention stable diffusion!

bozo

Do the credits have limits or something?

therealdokutah