SDXL ComfyUI Stability Workflow - What I use internally at Stability for my AI Art


We will start with a basic workflow, extend it with a refinement pass, and then add another special twist I am sure you will enjoy. #stablediffusion #sdxl #comfyui
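For readers who think better in code than in node graphs, here is a minimal sketch of the same base-plus-refiner hand-off using the Hugging Face diffusers library instead of ComfyUI. The checkpoint names are the official SDXL releases; the 0.8 split point, step count, and prompt are illustrative choices, not values taken from the video.

```python
import torch
from diffusers import StableDiffusionXLPipeline, StableDiffusionXLImg2ImgPipeline

# Base model: denoises the first 80% of the schedule, then hands off latents.
base = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
refiner = StableDiffusionXLImg2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-refiner-1.0", torch_dtype=torch.float16
).to("cuda")

prompt = "a photograph of an astronaut relaxing in a meadow"

# Stop the base early and return latents instead of a decoded image.
latents = base(
    prompt=prompt,
    num_inference_steps=30,
    denoising_end=0.8,
    output_type="latent",
).images

# The refiner picks up the remaining 20% of the noise schedule.
image = refiner(
    prompt=prompt,
    num_inference_steps=30,
    denoising_start=0.8,
    image=latents,
).images[0]
image.save("astronaut.png")
```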

Grab the SDXL model from here (OFFICIAL): (bonus LoRA also here)

The refiner is also available here (OFFICIAL):

Additional VAE (only needed if you don't plan to use the built-in version)
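If you do swap in an external VAE rather than the baked-in one, the equivalent move outside ComfyUI looks like the sketch below. The fp16-fix repository named here is a commonly used community VAE and is an assumption for illustration, not necessarily the file linked above.

```python
import torch
from diffusers import AutoencoderKL, StableDiffusionXLPipeline

# Load a standalone SDXL VAE and attach it in place of the built-in one.
vae = AutoencoderKL.from_pretrained(
    "madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16
)
pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    vae=vae,
    torch_dtype=torch.float16,
).to("cuda")
```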
Comments

I would love it if you could go over some of those settings in more detail, like "oh, I fiddle with more conditioning steps when I want X", etc. There are so many superstitious people out there giving bunk advice that your level-headed breakdown would be super valuable!

TedWillingham

A million thanks for these. As finicky and frustrating as the program is for beginners, your calm expertise is just what's needed.

archielundy

I've grown to understand and enjoy ComfyUI more than the one I was using before, thanks to your videos. I really appreciate you and the effort you put into making these tutorials. One of these days you could show us how to train SDXL 1.0 or its LoRA with our faces. Thanks :)

me.shackvfx

Why are there width and height values in CLIPTextEncodeSDXL, what is the difference between width and target_width, and why is one of them 4096?

Pfaeff
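On the question above, as best I understand the SDXL paper: those fields are SDXL's size micro-conditioning. width/height describe the (pretend) original resolution of the training image, crop_w/crop_h the crop offsets, and target_width/target_height the resolution you actually want, and feeding an oversized value like 4096 as the original size nudges the model toward the look of high-resolution training data. The same three pairs exist as pipeline arguments in diffusers, so here is a hedged sketch of the mapping; the comments pairing them with the ComfyUI node fields are my reading, not an official cross-reference.

```python
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

image = pipe(
    prompt="a cinematic portrait",
    original_size=(4096, 4096),      # ComfyUI node: width / height
    crops_coords_top_left=(0, 0),    # ComfyUI node: crop_w / crop_h
    target_size=(1024, 1024),        # ComfyUI node: target_width / target_height
    width=1024,                      # the latent resolution actually generated
    height=1024,
).images[0]
```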

I was waiting for this. These tools are very difficult for ordinary people to figure out how to use. Thank you for the video!

rsunghun

It blew my mind that you can load an entire workflow from the image! Thanks for the great content.

dxnxz
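That drag-and-drop trick works because ComfyUI embeds the graph as JSON in the PNG's text chunks. A small sketch of reading it back yourself with Pillow; the "workflow" and "prompt" chunk names are what ComfyUI writes, to the best of my knowledge.

```python
import json
from PIL import Image

# ComfyUI stores the editable graph under "workflow" and the queued
# execution graph under "prompt" as PNG tEXt chunks.
img = Image.open("comfyui_output.png")
workflow = json.loads(img.info["workflow"])
print(f"this image carries a workflow with {len(workflow['nodes'])} nodes")
```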

This video has some great insights on how to process the original image. I have a few FYIs to add. For those of us stuck with low-VRAM rigs that have to run SD 1.5 (for now 😢), the verbose negative prompt is essential; for SDXL it is worthless, like Scott says. For those on Mac, this web interface uses Ctrl just like Windows. If you like PNG over JPG and don't want to share metadata, open the image in an editing app (Photoshop, GIMP, etc.), export as PNG, and make sure "include metadata" is not selected. Thanks for all the great content, Scott; you never disappoint 👍.

johnmorrison
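On the metadata-stripping tip above: the same thing can be done in a couple of lines, since Pillow only writes PNG text chunks that you pass explicitly via the pnginfo argument. A minimal sketch, with placeholder filenames:

```python
from PIL import Image

# Re-saving through Pillow drops the tEXt chunks (workflow, prompt, etc.)
# because save() omits metadata unless it is passed in via pnginfo.
img = Image.open("with_workflow.png")
img.save("clean.png")

assert "workflow" not in Image.open("clean.png").info
```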

Excellent tutorial, thanks! I got SDXL up and running with the refiner. If you have the time, I'd like to see you make a video explaining how Stable Diffusion works and exactly what the program is doing as it sends the data through the nodes in Comfy, so I can have a greater conceptual understanding of what is happening. Believe me, I could watch hours of technical stuff, lol.

angryDAnerd

Prompt switching can be realized with an additional KSampler that renders the first steps with a completely different prompt. For example, you may want to create a triangular composition or a symmetrical image, and that can be done in the early steps of a generation. Good for abstract art. I also like that in ComfyUI the seed can be fixed while the base model and refiner generate on different seeds.

CMakr
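In ComfyUI this is typically wired with two KSamplerAdvanced nodes sharing a latent, the first covering the early steps and the second the rest. Sketched in diffusers instead, under the assumption that splitting the denoising fraction is an acceptable stand-in for splitting steps; the prompts and the 0.3 split point are illustrative.

```python
import torch
from diffusers import StableDiffusionXLPipeline, StableDiffusionXLImg2ImgPipeline

base = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
cont = StableDiffusionXLImg2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

# Fixed seed so only the prompt switch changes the outcome.
gen = torch.Generator("cuda").manual_seed(42)

# The first 30% of denoising locks in the composition prompt.
latents = base(
    prompt="a perfectly symmetrical triangular composition, abstract",
    num_inference_steps=30,
    denoising_end=0.3,
    output_type="latent",
    generator=gen,
).images

# The remaining 70% renders the real subject on top of that structure.
image = cont(
    prompt="an ancient temple in a misty forest",
    num_inference_steps=30,
    denoising_start=0.3,
    image=latents,
).images[0]
```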

Love your disgust for the negative prompt lists haha. Relatable stuff.

MTHMN

I'm new to ComfyUI and really love your videos. Thanks! Maybe this is obvious to folks, but one thing I recently learned was the ability to condition after one KSampler has run, so you can continue to refine your final image. It ended up being an alternative (or another tool in the toolbelt) to inpainting. I wasn't just refining, I was adding to or dramatically changing the final image, all without losing the "base" starting point, which was "locked down" in that the seed was fixed and the cfg and steps didn't change. So it was a very non-destructive compositional workflow. If I wanted to add an object to the image, I could do that through a second prompt applied to a second KSampler.

I could also introduce new LoRAs later on in those steps. I'm going to continue to experiment with this strategy and go through it more than once. So instead of a long prompt followed by a smaller corrective one, do more of a build-up of prompts: start simple and keep adding so that elements within the image can be independently adjusted, removed, or rearranged. Again, a more compositional approach during image generation, to hopefully reduce the amount of work in post (or produce a series of very similar images that can be worked together in post-processing). This could get a bit messy too, but maybe not if the samplers are arranged left to right in a linear fashion, building up the scene.

TomMaiaroto
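A hedged sketch of that non-destructive build-up outside ComfyUI: a locked-down base pass, then a second sampler at partial strength with a prompt that adds one element. The prompts and the 0.45 strength are illustrative guesses; lower strength preserves more of the base composition.

```python
import torch
from diffusers import StableDiffusionXLPipeline, StableDiffusionXLImg2ImgPipeline

base = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
img2img = StableDiffusionXLImg2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

# Locked-down base pass: fixed seed, fixed steps, fixed cfg.
gen = torch.Generator("cuda").manual_seed(7)
scene = base(
    prompt="a quiet harbor at dawn",
    num_inference_steps=30,
    guidance_scale=7.0,
    generator=gen,
).images[0]

# Second pass adds an object; partial strength keeps the base layout.
with_boat = img2img(
    prompt="a quiet harbor at dawn, a small red fishing boat at the pier",
    image=scene,
    strength=0.45,
).images[0]
with_boat.save("harbor_with_boat.png")
```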

Huh. I wonder what would happen if you had dedicated models for a variety of tasks (hands, eyes, hair, reflections, contrast, and so on) and fed a few steps from each of them in a daisy chain until you got to the first "true" sampler...

Truly the possibilities are endless; thanks for the food for thought and the hard work!

novantha

I have been in love with ComfyUI since I found it (coming from Unreal Blueprints, it's a very familiar system). I am working out some torch issues with my current system, but I generate whenever I can. It is great to see you building out the workflow and explaining the nodes that you use and why. Very informative, and THANKS for the tip about shift-clicking to copy nodes AND connections. NICE!

Yggdrasil

Thank you for this! I've created my own custom workflow based on this one with lots of inputs --> primitives to change stuff quickly.

ColbstaD

My first steps into ComfyUI, and it's the kind of thing I really like 🙂

PieterLaroy

I'm mind-blown! I never thought of using ComfyUI, but it seems like I'm sold after this video. Very nice, sir, and thank you for sharing your knowledge.

Feelix

Thanks!!! These boxes are actually starting to make sense.

ImAlecPonce

Thank you so much. I've become really proficient with A1111, and moving to ComfyUI was a big switch, so your help with how the workflows work in ComfyUI has made it just as easy as using A1111 for me.

iiiCorrosiveiii

WOW! That's a super tutorial of ComfyUI there! Thanks. I never knew there was this new addition of a CLIP node for SDXL!
The only drawback I find in ComfyUI is the way it manages workflows. I mean, when you want to change your original workflow you need to save a local file, and if you want to do something else (like inpainting) you have to redo ALL of your workflow and save it to a file, then switch by loading one workflow or another depending on what you want to do. Definitely not fond of this way of managing workflows. They could have done some kind of "favorite" workflow: five or more ready-made workflows that you could customize afterwards, save as your "favorite custom workflow", and switch between whenever you like. It would skyrocket the use and adoption of ComfyUI!

hleet

Thank you so much! Even though I couldn't understand much, it helped me get started with Comfy.

imperfectmammal