Stable Diffusion 3 - How to use it today! Easy Guide for ComfyUI

How to run Stable Diffusion 3. This video shows you how to use SD3 in ComfyUI. I also compare Stable Diffusion 3 to Midjourney and SDXL.

#### Links from the Video ####

#### Join and Support me ####

00:00 Intro
00:19 SD 3 vs MJ vs SDXL
12:00 Stable Diffusion 3 in ComfyUI
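At launch, SD3 in ComfyUI runs through Stability AI's hosted API rather than local weights. A minimal sketch of what such a request looks like, assuming the v2beta "stable-image" endpoint and field names from Stability's docs (treat both as assumptions that may change; check the current API reference):

```python
import os

# Hypothetical endpoint per Stability AI's v2beta docs at the time.
SD3_ENDPOINT = "https://api.stability.ai/v2beta/stable-image/generate/sd3"

def build_sd3_request(prompt, aspect_ratio="1:1", seed=0):
    """Assemble headers and multipart form fields for one SD3 generation."""
    headers = {
        "Authorization": f"Bearer {os.environ.get('STABILITY_API_KEY', '')}",
        "Accept": "image/*",  # ask for raw image bytes instead of JSON
    }
    data = {
        "prompt": prompt,
        "aspect_ratio": aspect_ratio,
        "seed": str(seed),
        "output_format": "png",
    }
    return headers, data

# Actually sending the request costs credits, so the POST itself
# (e.g. with `requests.post(SD3_ENDPOINT, ...)`) is left to the reader.
headers, data = build_sd3_request("a fox reading a newspaper", seed=42)
print(data["seed"])
```

This is what the ComfyUI SD3 node is doing under the hood, conceptually: your prompt goes out to Stability's servers with your API key, and an image comes back.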
Comments

The API and the credits systems should have been stated at the very beginning of the video and a mention of them in the title was needed as well, even only a vague one.

sneedtube

I honestly can't wait for the Full Open Source release.

And for the Fine Tunes and Loras to start flooding in.

viddarkking

Nope. If I can't download and run it locally, with no accounts required, then I'm not interested. I'm not playing this credit-system game.

BryceKant

Thanks for the comparison! To state the obvious, the real game begins when we're able to drop IPAdapters, ControlNets, and LoRAs on top of the model. However, the prompt following is already a promising hint.

AlistairKarim

And 6 fingers on the hand of the SD3 image at 1:48.

CCoburn

Any idea about the system requirements to use it, once it's released open source?

Make_a_Splash

Is this the end of the era of free Stable Diffusion?

andriiB_UA

Exciting, but not there yet. We need the open-source release to have fun with it, but it's a great party trick at the moment.

nikolesfrances

It's wonderful! Will there be a local version available for training? If so, when?

AGvfx

Great video, Olivio. The API key thing is a no-go for me. I wonder what Emad thinks about this. The images I've seen don't look much better than Cascade. I'm really excited to see how it does with video, though.

skycladsquirrel

Just for clarification: for SD3, you can't generate images using ComfyUI without buying API credits?

maddydon

Hi Olivio, great video. Can you record a video showing how to train a Latent Diffusion Model? It would be helpful.

claudeclaude

Hi, is there a way of booking a call with you for a consultation?

yanus_ai

What if I want to use my own GPU for it? How can I install it?

BillRoid

I feel that Midjourney nailed the images. If SD3 runs on credits and charges you per image, bad generations will create frustration. Think about testing prompts, seeds, LoRAs, IPAdapters: it's going to cost money and feel annoying when it makes mistakes, if you have to pay for each roll.

FightClubGarmz
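The cost-per-roll worry above is easy to quantify. A back-of-the-envelope sketch, assuming the launch pricing of roughly 6.5 credits per SD3 image at $10 per 1,000 credits (both figures are assumptions and may change):

```python
# Assumed pricing: ~6.5 credits per SD3 image, $10 per 1,000 credits.
CREDITS_PER_IMAGE = 6.5
USD_PER_CREDIT = 10.0 / 1000

def iteration_cost(num_images):
    """Dollar cost of generating num_images through the hosted API."""
    return num_images * CREDITS_PER_IMAGE * USD_PER_CREDIT

# A modest experiment of 4 prompts x 25 seeds = 100 rolls:
print(round(iteration_cost(100), 2))  # -> 6.5
```

Under these assumptions, every hundred test rolls costs about $6.50, which is exactly why per-image billing discourages the seed-hunting workflow local models encourage.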

I wasn't too impressed with the output of SD3 just yet. As many have already stated, the power will come with control tools and community training on top. But I hope the prompt following will be groundbreaking with this one.

dataminearpg

Recently there have been several new models, such as Playground v2.5, Stable Cascade, and PixArt Sigma, which I think provide huge improvements in prompt adherence and image quality compared to SDXL. Since SD3 won't be accessible to the public any time soon, it would be great to hear your opinion on these new models (a head-to-head comparison would be awesome).

zhongxianshi

A comparison with SDXL base would have been nice too. Right now it's a bit like comparing raw ingredients (SD3) with a fully cooked dish (fine-tuned SDXL). Still an interesting video, though!

HolidayAtHome

@oliver I get soooo confused. Is SD3 simply a model checkpoint like all the other ones? It sounds like the actual underlying technology (diffusion calculations, adaptations, etc.) would be what's called "stable diffusion".

I’d love to hear commentary on this in the future, just to help clarify if we’re dealing with foundational technology updates or with incremental checkpoint upgrades.

lllllllll

the prompt "Style" is different for all the three model, IMO you can try test the prompt to get similar images for each model, instead of just copy paste the prompt and hopping to get same images.

MrSongib