SECRET FREE Stable Diffusion 2.1 TRICK! As Good As MIDJOURNEY!

When the Stable Diffusion 2.0 & 2.1 models came out, they were a mixed bag that divided the community: they were very good at creating realistic images, but if you wanted to apply an artist's style, you wouldn't get results similar to what you had with 1.5… UNTIL NOW! The community quickly realized that textual inversion embeddings work far better with the newest models than with the previous ones, which makes them a very easy way to add a specific style to images generated with the 2.x models, and the results are mind-blowing! We are at the same level of quality you would expect from something like Midjourney, BUT for free! So in this video, I will show you how and where to download these textual inversion embeddings, how to use them, how to train them yourself with the AUTOMATIC1111 repository, and finally I will give you my personal picks of the best textual inversion embeddings I've enjoyed using over the past few days!
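If you're curious what an embedding actually *is* under the hood, here is a minimal, purely illustrative sketch in plain Python (no real model; the tiny vocabulary, the 4-dimensional vectors, and the `encode_prompt` helper are all made up for illustration): a textual inversion embedding is just a learned vector that the text encoder substitutes whenever the trigger word appears, while every other token still uses the frozen embedding table.

```python
# Conceptual sketch only, NOT the real Stable Diffusion internals.
# A textual inversion embedding contributes one learned vector for a
# new trigger "word"; the base model's embedding table stays frozen.

FROZEN_VOCAB = {
    "photo": [0.1, 0.2, 0.3, 0.4],
    "of": [0.0, 0.1, 0.0, 0.1],
    "a": [0.2, 0.0, 0.1, 0.0],
    "castle": [0.5, 0.4, 0.3, 0.2],
}

# The downloaded "embedding file" boils down to rows like this,
# keyed by the trigger word (here a hypothetical "midjourney" style).
learned_embeddings = {"midjourney": [0.9, 0.8, 0.7, 0.6]}

def encode_prompt(prompt):
    """Resolve each token: trigger words use the learned vector,
    everything else falls back to the frozen table."""
    vectors = []
    for token in prompt.lower().split():
        if token in learned_embeddings:
            vectors.append(learned_embeddings[token])
        elif token in FROZEN_VOCAB:
            vectors.append(FROZEN_VOCAB[token])
    return vectors

vecs = encode_prompt("photo of a castle midjourney")
print(len(vecs))   # 5 tokens resolved
print(vecs[-1])    # [0.9, 0.8, 0.7, 0.6] -- the learned style vector
```

In the real web UI this is why usage is so simple: you drop the `.pt` file in the embeddings folder and type its name in the prompt, and the encoder swaps in the learned vector exactly as above.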

Did you manage to make them work? Let me know in the comments!
▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬
SOCIAL MEDIA LINKS!
▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬

CIVITAI:

Midjourney embed:

VikingPunk embed:

Papercut embed:

Anthro embed:

Remix embed:

CGI Animation embed:

Knollingcase embed:

Vray render embed:

Special thanks to Royal Emperor:
- DanO..

Thank you so much for your support on Patreon! You are truly a glory to behold! Your generosity is immense, and it means the world to me. Thank you for helping me keep the lights on and the content flowing. Thank you very much!

#stablediffusion #textualinversion #stablediffusiontutorial
▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬
WATCH MY MOST POPULAR VIDEOS:
RECOMMENDED WATCHING - My "Stable Diffusion" Playlist:

RECOMMENDED WATCHING - My "Tutorial" Playlist:

Disclosure: Bear in mind that some of the links in this post are affiliate links and if you go through them to make a purchase I will earn a commission. Keep in mind that I link these companies and their products because of their quality and not because of the commission I receive from your purchases. The decision is yours, and whether or not you decide to buy something is completely up to you.
Comments
Author

HELLO HUMANS! Thank you for watching & do NOT forget to LIKE and SUBSCRIBE For More Ai Updates. Thx <3
"K" - Your Ai Overlord

Aitrepreneur

Awesome video, thanks for both summarizing an overview AND going into more detail in the 2nd half. Looking forward to an in-depth follow up!

geoatherton

I was about to start playing around with textual inversion today. Thanks for making my life so much easier lmao

FinnieTheGhoul

The sweet thing about embeddings is that they also work on the depth and inpainting models, so you can do really nice img2img with these trained styles. For dreambooth you'd have to train all 4 models separately.

sharperguy

My goal with the midjourney embedding was exactly this: to show the community that the 2.x model is actually amazing if you prompt it correctly, or better yet, use embeddings. (I view embeddings as a super complicated prompt.) So I'm really glad this is catching on now.

Something to note about training: I used all the default settings except for resolution. I've tried tips similar to what you suggested, like gradient accumulation or vector sizes higher than 1, but the results didn't turn out that good, I think. However, it could probably be a lot better with more data and tweaking of the training parameters.

Something I've noticed with embeddings in general is that you tend to get the best results when your prompts are simple. The longer your prompt, the less effective the embedding is (even when increasing the prompt weight to :2).

CapsAdmin
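The gradient-accumulation tip mentioned above can be shown in isolation. This is a toy, hypothetical trainer in plain Python (nothing from the A1111 training tab): gradients from several micro-batches are summed before a single optimizer step, which imitates a larger effective batch without the extra VRAM a real large batch would need.

```python
# Toy sketch of gradient accumulation on a 1-parameter model y = w * x.
# Illustrative only; real textual inversion optimizes embedding vectors
# through a diffusion model, not this tiny least-squares problem.

def grad(w, batch):
    # gradient of mean squared error for y = w * x on one batch
    return sum(2 * (w * x - y) * x for x, y in batch) / len(batch)

w = 0.0
lr = 0.05
accum_steps = 4                                  # micro-batches per optimizer step
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0), (4.0, 8.0)]  # ground truth: y = 2x

for epoch in range(200):
    accumulated = 0.0
    for i, sample in enumerate(data):
        accumulated += grad(w, [sample])         # cheap micro-batch of size 1
        if (i + 1) % accum_steps == 0:
            w -= lr * (accumulated / accum_steps)  # one step per 4 micro-batches
            accumulated = 0.0

print(round(w, 2))  # converges to 2.0
```

The averaged update is identical to what a single batch of 4 would produce, which is why the setting trades speed for memory rather than changing the result.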

You can also:
1. Preface your first medium/style/artist term (in your case "photo") with "epic-", i.e. epic-photo, epic-cgi, or my favorite, epic-cgi-ink.
2. Add it as a descriptor.
3. Add artgerm, artstation (both, because neither seems to have its full 1.5 impact, but both together are still pretty awesome) to your descriptors.
4. Use artgerm-artstation as if it were a medium/style/artist as the first term, i.e. "artgerm-artstation of waifu [descriptors]".

mordokai
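The prompt patterns in the tips above boil down to a "medium of subject, descriptors" template. A quick, hypothetical helper (there is no special syntax involved, just string composition):

```python
# Illustrative only: compose prompts following the
# "medium of subject, descriptors" pattern from the comment above.
def build_prompt(medium, subject, descriptors):
    return f"{medium} of {subject}, " + ", ".join(descriptors)

# Trick 1: hyphenated "epic-" medium up front
print(build_prompt("epic-photo", "a castle", ["highly detailed"]))
# Tricks 3 and 4: artgerm + artstation as descriptors, or fused as the medium
print(build_prompt("portrait", "a knight", ["artgerm", "artstation"]))
print(build_prompt("artgerm-artstation", "waifu", ["flowing hair"]))
```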

Already tested everything! Once again, thank you very much for your wonderful experience and for sharing it with us all!!

danieljfdez

The 'Midjourney look' is a particular style that is not always desired and gets old - I prefer a clean palette for style.

tonytitani

Make no mistake, this concept works amazingly well with 1.x as well. I haven't tested it yet, but there's no reason I know of that this wouldn't also work in combination with hypernetworks and aesthetic gradients, for a level of control and customizability that is totally beyond anything we've been playing with.

In 2.x we have the new OpenCLIP model that StabilityAI spent millions training. It is apparently a bit harder to prompt, but much better at following the prompt.

Emad himself has said several times recently that prompts are basically a temporary stand-in and embeddings are the future, as in, replacing Dreambooth and fine-tuning in most cases.

ArielTavori

I am glad you unlocked this training gem for 2.1.

DJVARAO

Hey K, love the videos - I admire your vocal skills, very entertaining. I did not know about that filter. Damn, you're on point.

friendofai

This video was super mind expanding and exactly what I was looking for - can't wait for the next video!

chrisjohnston

Can you use textual inversion embeddings with LoRA models? I.e., use trigger words from both to create an image? Thanks!

lilshadow

Hey Aitrepreneur, I have a question: at 3:40 of the video, how do you change the percentage? Do you do it with the mouse wheel, or does it go through Grammarly? It would interest me and would make my workflow faster. Thank you for the answer 😁✌️

TheAiConqueror

How do you do that thing at 3:39?

Thanks for the tutorial!!!

joel

Nice video. How convenient for me to only see this video after purchasing the Midjourney Pro plan…

whoopeewinks

Actually, many of us use the fast-stable-diffusion Colab to test things from time to time. Could you do a tutorial on how to use these models there?

mreduar

Remember: after adding that embedding, you cannot use the word 'midjourney' with any model other than 2.1. If you have 'midjourney' in your prompt with 1.5 or any other model, you will get an error. That is a mistake by the creator, who should not have used the word 'midjourney' to trigger that style. Just imagine what happens with all the other embeddings named 'midjourney'.

digidope
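The error described above comes down to a shape mismatch, not the word itself: SD 1.x uses CLIP ViT-L/14, whose token embeddings are 768-dimensional, while SD 2.x uses OpenCLIP ViT-H/14 with 1024-dimensional embeddings, so a 2.x-trained vector simply cannot be spliced into a 1.5 model. A purely illustrative check (not the web UI's actual code):

```python
# Illustrative sketch of why a 2.x embedding fails on 1.5:
# the learned vector's width must match the text encoder's width.
MODEL_EMBED_DIM = {"sd-1.5": 768, "sd-2.1": 1024}

def can_load(embedding_dim, model):
    """Return True if the embedding's vector width matches the model's
    text-encoder embedding width."""
    return embedding_dim == MODEL_EMBED_DIM[model]

midjourney_embed_dim = 1024  # trained on a 2.x model (OpenCLIP)

print(can_load(midjourney_embed_dim, "sd-2.1"))  # True
print(can_load(midjourney_embed_dim, "sd-1.5"))  # False -> the web UI errors out
```

This is also why giving the embedding a unique trigger name matters: a name that shadows a common word makes the mismatch fire on prompts that never intended to use the embedding.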

I'd be interested to see you do a video on this new LoRA addition to Dreambooth. It looks like it could be interesting, but hearing about it from someone who goes much deeper into this stuff would be nice.

tungstentaco

Thank you very much for this video! It is very helpful!
I think embeddings are the next big step in image generation.

Aleksandrsvideo