Real-Time Text to Image Generation With Stable Diffusion XL Turbo

Installing Comfy-UI and configuring it to use the SDXL Turbo model for real-time text-to-image generation.
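
The video does everything inside Comfy-UI, but as a rough script-only sketch of what the SDXL Turbo checkpoint is doing (assuming the Hugging Face diffusers and torch packages and a CUDA GPU), single-step generation looks roughly like this:

```python
# Minimal sketch: single-step text-to-image with SDXL Turbo via diffusers.
# This illustrates the same model the video loads in Comfy-UI,
# not the Comfy-UI workflow itself.
import torch
from diffusers import AutoPipelineForText2Image

pipe = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/sdxl-turbo",
    torch_dtype=torch.float16,
    variant="fp16",
).to("cuda")

# SDXL Turbo is distilled for one-step sampling with guidance disabled.
image = pipe(
    prompt="a photo of a red fox in a snowy forest",
    num_inference_steps=1,
    guidance_scale=0.0,
).images[0]
image.save("fox.png")
```

Re-running a single step like this on every prompt change is what gives the real-time feel shown in the video.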

○○○ LINKS ○○○

○○○ SHOP ○○○

○○○ SUPPORT ○○○

○○○ SOCIAL ○○○

0:00 Intro
0:20 About This Video
0:52 SDXL Turbo Model
1:12 Comfy-UI vs Automatic 1111
2:17 Installing Comfy-UI
3:57 How to Use Comfy-UI
6:26 Configure Comfy-UI Dashboard for SDXL Turbo
9:00 Running Real-Time Text to Image Generation
12:08 Conclusion

○○○ Send Me Stuff ○○○
Don Hui
PO BOX 765
Farmingville, NY 11738

○○○ Music ○○○
From Epidemic Sound

DISCLAIMER: This video and description contain affiliate links, which means that if you click on one of the product links, I’ll receive a small commission.
#ai #stablediffusion #sdxl
Comments

Dude, your videos have been awesome over the years. Glad you will put out content like this just because it's something you're passionate about. Keep doing what you're doing! So many cool projects I'd love to do if I had the time. I hope you can hit 500k subscribers in 2024.

Kajukota

So cool!! I’d love to see more AI videos 🤓

was_a

Hi, just letting you know I really enjoy your videos, particularly since I'm working from an Ubuntu box. Your explanations are clear and concise, which is very helpful. Thanks for all you do.

lfcatchall

Also, you can use the extra inpaint extension to edit (in real time). ;)

MrGFYne

Would love to see more like this. I'm wondering if it's possible to do this on Google Colab too, for those of us who don't have a modern Nvidia GPU? I've been playing with InvokeAI and Fooocus but don't know if those can support real-time generation.

greatwolf.

It's so frustrating that I downloaded the exact same models and used the same settings and steps, but the generated image looks like a dump compared to your results.

Nico-hpqu

More videos please... I understand quicker when you explain, considering English is not my first language. Thanks <3

RazoBeckett.

Now this was a fun thing to play along with after fighting with Nextcloud and LocalAI... Imagine having this kind of prompt within Nextcloud, where you can type up the image and get the result morphing nicely :D. I did have the same issues as others described and you responded to: noise_seed set to 0, control_after_generate set to fixed, cfg set to 1, and I had fun results with sampler_name lcm. It works nicely on a 3090 :D.

ewookiis
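
For anyone mapping the settings listed in the comment above onto a plain script, here is a hedged diffusers sketch. The parameter names and the LCMScheduler class are diffusers', not Comfy-UI's, so treat the mapping as approximate:

```python
# Sketch of rough diffusers analogues for the Comfy-UI settings mentioned above.
import torch
from diffusers import AutoPipelineForText2Image, LCMScheduler

pipe = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/sdxl-turbo", torch_dtype=torch.float16, variant="fp16"
).to("cuda")

# "sampler_name lcm" -> roughly the LCM scheduler in diffusers
pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config)

# "noise_seed 0" with "control_after_generate fixed" -> the same seed every run
generator = torch.Generator("cuda").manual_seed(0)

# "cfg 1" -> guidance effectively off (guidance_scale <= 1 skips CFG in diffusers)
image = pipe(
    "a neon city street at night",
    num_inference_steps=4,
    guidance_scale=1.0,
    generator=generator,
).images[0]
```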

Hmm, is there a Docker container for this, for simple installation and a friendlier UI? What sort of hardware does it need?

tld

Great work! Yes, your AI videos are good because you know enough about the technology and enjoy using it.
Hopefully other people's clickbait AI content will lessen; then people can see your straightforward and detailed content.

mhavock

Hi @novaspirit - could you do a video on the setup of Ollama/AutoGen Studio/liteLLM on Ubuntu please? I am starting to get interested in agents, but running into issues setting this configuration up on my Ubuntu box. Love your videos.

bidombd

Hey Don, what OS do you use in this build? Great video, more AI content would be great to see.

MrPDC-jryl

For those of us who have a weak PC: can we rent a server to run a powerful emulated machine? Or would the cost be prohibitive?

keylanoslokj

@Don - you had an older (professional) GPU that you bought (I don't remember what type) in an older video about AI. How does that stack up against the 3080 for AI?

Smithereens

Use the LCM Turbo LoRA in combination with SDXL Turbo; it will make it 100x faster.

MrGFYne
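
A hedged sketch of the combination suggested above, using diffusers. The repo id latent-consistency/lcm-lora-sdxl is an assumption about which LoRA is meant, and the speedup figure is the commenter's claim, not verified here:

```python
# Sketch: combining SDXL Turbo with an LCM LoRA, per the comment above.
# Assumes the "LCM Turbo LoRA" refers to the LCM-LoRA for SDXL
# (latent-consistency/lcm-lora-sdxl); swap the repo id if a different LoRA is meant.
import torch
from diffusers import AutoPipelineForText2Image, LCMScheduler

pipe = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/sdxl-turbo", torch_dtype=torch.float16, variant="fp16"
).to("cuda")
pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config)
pipe.load_lora_weights("latent-consistency/lcm-lora-sdxl")

image = pipe(
    "an astronaut riding a horse",
    num_inference_steps=2,
    guidance_scale=0.0,
).images[0]
```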

I'm one of the determined who love your AI vids! How about a separate channel for AI topics, @novaspiritai 👍

RocktCityTim

I get low-quality, noisy generation despite using a 4070. Do you know what might be causing this? When I let it generate a blank image, I get the same one as you do.

rgrjllf

I tried it a while ago; my first mistake was using Windows!

jamescrook

Is there a way of using this with DirectML? It would make this even more accessible!

nullnill

How much VRAM does your 1070 have, and how much system RAM do you need?

robdavis