ComfyUI - Live Stream! Let's make some amazing art with Stable Diffusion!

Let's make something together! Come along and let's talk ComfyUI as well as other things related to generative AI. It's always fun! Note that the playback of this video will be available to channel patrons after the show is over. Catch it live! #comfy #stablediffusion

Become a member to get exclusive access to perks!

Gigabyte 17X Laptop is doing the inference today! Grab one here:

Comments

Re: Grouping nodes - Hiya Scott! Just a quick tip: If you highlight the nodes you want to group together, then right-click in an empty space and choose Add Selected Nodes To Group, it automatically sizes the group to encompass them. Might save ya some time (I also recommend double-clicking the canvas & using the search to get the desired nodes rather than going through the long list like you do, but hey, you do you). Awesome info as always! Thanks much!
Additionally, Re: Inpainting w/ a mask - I haven't had a chance to tinker much myself, but I believe that if you soften up your mask image (maybe apply a Gaussian blur, at least to the edges?) it will blend better & not leave you those harsh seams. Still learning here, but that might help ya in the future.
Final addition: OMG! Those butterfly dress designs are amazing! Quick, call a seamstress!
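A minimal sketch of that mask-softening tip using Pillow; the blur radius and file names are assumptions to tune per image:

```python
from PIL import Image, ImageFilter

# Feather the inpaint mask edges with a Gaussian blur so the inpainted
# region blends into the surrounding image instead of leaving a hard seam.
mask = Image.open("inpaint_mask.png").convert("L")   # hypothetical file name
feathered = mask.filter(ImageFilter.GaussianBlur(radius=12))
feathered.save("inpaint_mask_soft.png")
```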

kpr

Thank you for leaving it up long enough for those of us who work anti-social hours to be able to watch it in full! It is appreciated beyond measure. Very interesting video (and informative as always)

GamingDaveUK

Hi Scott, I noticed about halfway through you were having some issues making the image pop from being prompt-based into being controlled by the ControlNet when you set its start to 0.3. That 0.3 lines up with the step count; with your 20 steps, it would switch to the ControlNet at step 6. The problem is that the latent noise doesn't resolve linearly for most of the schedulers I've seen; they behave pretty exponentially. So within the first 5 of 20 steps, something like 60% of the image has probably already resolved in latent space. That means the primary and some of the secondary forms of the scene are already pretty much etched in stone, and this is why you couldn't get it to pop from the one to the other. To pull that off, you'd need the transition to happen earlier, around step 2-4 or so, but again, it depends on how many steps you're using. That's also why there are advanced versions of the samplers that let you choose whether to return the leftover noise when you do multiple sample passes (the prepass hack is kind of nice, where you do 1-2 steps of random noise and then feed that into another sampler without passing along the leftover noise).

Feel free to correct me if I'm wrong, but great video, I've been digging your content!
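A rough way to see the commenter's point is to check where a Karras-style schedule (used by several ComfyUI samplers) sits at the switch step. The sigma_min/sigma_max defaults below are assumed SD1.5-style values; the exact numbers vary per model and scheduler:

```python
# Sketch: with 20 steps and a ControlNet start of 0.3, the switch lands at
# step 6, but a Karras-style sigma schedule has already shed most of its
# noise by then, so the scene's main forms are fixed.

def karras_sigmas(n_steps, sigma_min=0.0292, sigma_max=14.6146, rho=7.0):
    """Karras et al. (2022) noise schedule; defaults are an assumption."""
    max_inv = sigma_max ** (1 / rho)
    min_inv = sigma_min ** (1 / rho)
    return [
        (max_inv + i / (n_steps - 1) * (min_inv - max_inv)) ** rho
        for i in range(n_steps)
    ]

steps, start = 20, 0.3
switch_step = int(steps * start)            # 0.3 * 20 = step 6
sigmas = karras_sigmas(steps)
shed = 1 - sigmas[switch_step] / sigmas[0]  # fraction of noise already gone
print(f"ControlNet kicks in at step {switch_step}; "
      f"~{shed:.0%} of the noise is already resolved")
```

On these assumed values, roughly three quarters of the noise is gone by step 6, which matches the commenter's intuition that the composition is already locked in.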

MPRX

A question: how do you control the resolution impact of ControlNet images (mostly non-square) in a 1024x1024 generation? It causes a crop effect that usually crops the control image to a square. I tried converting the image to a square by extending it with blur or a solid color, but that affects the maps.
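One workaround, sketched below with Pillow (file names and the gray fill are assumptions), is to letterbox the control image onto a square canvas yourself before it reaches the pipeline, so nothing gets cropped. For depth maps, a black fill (reading as "far") may distort the map less than gray:

```python
from PIL import Image

def pad_to_square(img: Image.Image, fill=(128, 128, 128)) -> Image.Image:
    """Center a non-square image on a square canvas instead of cropping it."""
    img = img.convert("RGB")
    side = max(img.size)
    canvas = Image.new("RGB", (side, side), fill)
    canvas.paste(img, ((side - img.width) // 2, (side - img.height) // 2))
    return canvas

control = Image.open("control.png")                   # hypothetical input
square = pad_to_square(control).resize((1024, 1024))  # match the generation
square.save("control_square.png")
```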

titan_dev

Just started watching your channel as I dig into ComfyUI. I'm about 55 minutes into the video and am immediately wondering if there's a way to use an image channel (e.g. an alpha channel created in Photoshop) as part of the process. It would have a similar effect to the depth map, but if options to blur, control black/white levels, invert, etc. are added, this could potentially afford a great deal of control.
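That idea can be prototyped outside ComfyUI today. Below is a hedged Pillow sketch (file names are hypothetical) that extracts a Photoshop-style alpha channel and applies the blur/levels/invert conditioning described, producing an image a Load Image node could pick up:

```python
from PIL import Image, ImageFilter, ImageOps

rgba = Image.open("painted_in_ps.png").convert("RGBA")  # PNG exported from PS
alpha = rgba.getchannel("A")                            # the alpha channel

alpha = alpha.filter(ImageFilter.GaussianBlur(8))   # soften edges
alpha = ImageOps.autocontrast(alpha, cutoff=2)      # crude black/white levels
alpha = ImageOps.invert(alpha)                      # flip polarity if needed

alpha.save("control_mask.png")  # feed this in like a depth map or mask
```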

_carsonjones

Great stream! Congrats! And happy New Year everyone!

HexagonalColumbus

Really cool stuff, Scott. The end result reached a very usable quality.

squirrelhallowino

How can I get this image of the pink dress?

Designing-hcpz

One of the problems with the inpainting might have been because you used the VAE inpainting node instead of the Set Latent Noise Mask node; I find that one works quite well. Also, for the MiDaS depth map, you can use the node from the auxiliary preprocessors like you did for LeReS, instead of the MiDaS one from WAS.

An interesting thing to try for changing the dress could be using the Unsampler node. Latent Vision covers it in his Infinite Variations video.

Darkwing

My problem with the Inspire node for seeds is that it doesn't really give me random seeds - every third or fourth image is just gonna be the same 🙈

sirmeon

Hi, great content! I paid for the membership. I think you said I can get the workflows for the videos. Where can I get them? Thank you.

korilifs

Great stream! Just catching it now before "it" begins... so wishing all a Happy New Year!
This seems as good a place as any to ask whether you've heard of anyone making a UI enhancement to turn nodes within suites on or off. Like a checkbox before each node, so that when searching you only see the relevant nodes you want, instead of the two you always want jumbled in with twelve other duplicate-function nodes. It becomes almost unbearable and confusing sometimes, even for intermediate users, when you install the "must have" suites as you suggest.

TheDocPixel

I wonder if you wouldn't be able to create the exact same result, in much higher image quality and with much more control, in 90 minutes in Photoshop. Strangely, that's the case with most "practical" uses of SD when you really need something specific, controlled, and printable.

fust

Hello, my image loader keeps all the old images I have loaded. How do I empty/reset it? Thank you.

MustafaAAli-uvsx

Managed to watch 46 minutes in my 40-minute break; I am hoping that it will still be available after my shift. I realise you often reply to me when I say this, but I can't read that reply once it's behind the paywall. Who knows, come April I may start having enough money that I can sub; right now I can't even use Comfy unless I'm in a cheap-electricity period (the wind we had here in the UK last week was a godsend).

Can you do a video on local LLM use? I didn't get that far in your video, and a standalone tutorial on that would be handy, especially if the prompt from the AI can be added to the prompt we have asked for.
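For the prompt-expansion part, here is a minimal sketch of the idea, assuming a local Ollama server on its default port; the model name and base prompt are assumptions, and any local OpenAI-compatible endpoint works similarly:

```python
import requests

BASE_PROMPT = "a butterfly-wing evening dress, studio lighting"

# Ask the local LLM to embellish the prompt rather than replace it.
resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "llama3",  # assumed; use whichever model you have pulled
        "prompt": ("Expand this Stable Diffusion prompt with vivid visual "
                   f"detail, as one line of comma-separated tags: {BASE_PROMPT}"),
        "stream": False,
    },
    timeout=120,
)
expansion = resp.json()["response"].strip()

# Append the LLM's additions to the prompt we asked for, as the comment suggests.
print(f"{BASE_PROMPT}, {expansion}")
```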

GamingDaveUK

Do you know of a Colab notebook for this?

richarddecosta

You do realize it’s the AI that is doing the chemistry so that the AI can get more chip power and get rid of humans 800 years sooner.

MultiMam

I'll set you up with a text-to-speech model if you want.

Bakobiibizo