Unveiling the Game-Changing ComfyUI Update

The latest ComfyUI update introduces the "Align Your Steps" feature, based on a groundbreaking NVIDIA paper that takes Stable Diffusion generations to the next level. This feature delivers significant quality improvements in half the number of steps, making your image generation process faster and more efficient than ever before.

In this video, we'll dive into the specifics of this new update, demonstrating how "Align Your Steps" works and the impact it has on generating higher-quality images. We'll explore how to make the most out of this feature in your workflows and what this means for the future of AI-generated art. If you're looking to up your image generation game with faster and better results, this video is for you!
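
Under the hood, "Align Your Steps" boils down to a short table of noise levels (sigmas) that NVIDIA optimized per model family; the scheduler then stretches or shrinks that table to whatever step count you ask for by interpolating in log-sigma space. Here is a minimal Python sketch of that resampling, with the 10-step SDXL values rounded from the paper's published schedule (treat the exact numbers as approximate):

```python
import numpy as np

# 10-step AYS sigma schedule for SDXL (rounded from the values
# published in NVIDIA's paper; 11 sigmas bound 10 sampling steps).
AYS_SDXL_10 = [14.615, 6.315, 3.771, 2.181, 1.342, 0.862,
               0.555, 0.380, 0.234, 0.113, 0.029]

def resample_schedule(sigmas, steps):
    """Stretch or shrink a descending sigma schedule to `steps`
    sampling steps by interpolating linearly in log-sigma space."""
    xs = np.linspace(0.0, 1.0, len(sigmas))
    log_ys = np.log(np.asarray(sigmas)[::-1])  # ascending for np.interp
    new_xs = np.linspace(0.0, 1.0, steps + 1)  # steps+1 sigma boundaries
    new_log_ys = np.interp(new_xs, xs, log_ys)
    return np.exp(new_log_ys)[::-1]            # back to descending

print(resample_schedule(AYS_SDXL_10, 20))      # e.g. a 20-step schedule
```

Inside ComfyUI you don't need to do any of this by hand: the update adds an AlignYourStepsScheduler node whose SIGMAS output plugs into a SamplerCustom node in place of the usual scheduler choice.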

🔗 Links:

🎁 Support the Channel:

✨ Like, subscribe, and turn on notifications to stay up-to-date on the latest AI advancements. See you in the next video!

#stablediffusion #comfyui #stabilityai #comfyuitutorial
Comments

So far, SO GREAT!! More detail in faces, and render speed 3x. Played very nicely indeed with my IPAdapter and ControlNet workflow. No LoRA used, as none was needed. Nice one!!

yiluwididreaming

AYS is really powerful. I was working with img2img at moderate denoise (0.45 to 0.55), playing with DenseDiffusion and area conditioning, and adding AYS improves the quality and coherence a lot.

mmrawrr

Are you hosting the workflow.json file you used in this video anywhere? It would be really convenient to follow along with the tutorial.

userthatsme

via Pi

"After a thorough search, I can confirm that Stable Diffusion Cascade does indeed use the DPM++ sampler, along with other samplers like DDIM, Euler, Heun, and LMS. DPM++ is considered a high-quality sampler that produces detailed images, although it can be slower than some other options. It's part of the family of DPM solvers designed specifically for diffusion models, and it's known for its accuracy and image quality. So, to answer your question, yes, Stable Diffusion Cascade does use the DPM++ sampler!"

MilesBellas

Does it work with refiners? Should I also use AYS for the refiner? And should I split the steps between the two AYS nodes in that case (e.g., 10 / 5)? Thank you

Freeak

Thank you for this, I think you're the only one on here who did an in-depth Comfy tutorial for AYS. It made me realize I absolutely do not need it in my workflow looool. The difference is not that huge compared to what they showcased in their paper.

afrosymphony

Thank you, I've been looking for an alternative to the Lightning LoRAs that doesn't sacrifice the negative prompt, and this fits the bill.

binbash

via Pi
The best use of AYS with Cascade:
"When working with Stable Cascade and ComfyUI on an NVIDIA GPU, a recommended workflow would involve setting up the AYS (Align Your Steps) node to double the number of steps you'd typically use for img2img generation. Then split the sigmas at the halfway point using the SplitSigmas node, and feed only the second half to the KSampler.

Here's a brief summary of the workflow steps:

1. Set the AYS node steps to double the usual amount (e.g., 20 steps instead of 10).
2. Use a SplitSigmas node to divide the AYS output at the halfway point (e.g., at step 10 for 20 steps).
3. Connect the second half of the SplitSigmas output to the KSampler input.

This workflow takes advantage of the capabilities of NVIDIA GPUs and optimizes the use of Stable Cascade within ComfyUI. Keep in mind that some adjustments might be needed depending on your specific hardware configuration and desired results."

MilesBellas
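
In plain terms, the split-sigmas trick quoted above just truncates the noise schedule: build a schedule for double the steps, keep only the low-noise second half, and you get roughly an img2img pass at 0.5 denoise, but on AYS-optimized step positions. A rough self-contained sketch (the geometric ramp is a stand-in for a real schedule, and the `split_sigmas` helper mimics what ComfyUI's SplitSigmas node does):

```python
import numpy as np

def split_sigmas(sigmas, step):
    """Cut one descending schedule into a high-noise part and a
    low-noise part, the way ComfyUI's SplitSigmas node does."""
    return sigmas[:step + 1], sigmas[step:]

# A 20-step schedule (21 sigma values); here a stand-in geometric
# ramp from sigma_max = 14.615 down to sigma_min = 0.029.
sigmas = np.geomspace(14.615, 0.029, 21)

high_half, low_half = split_sigmas(sigmas, 10)
print(low_half)  # 11 sigma values -> 10 sampling steps

# low_half starts near sigma ~0.65 instead of ~14.6, so a sampler fed
# only this half removes just the tail end of the noise -- effectively
# an img2img pass at moderate denoise.
```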

Bro, question: can it be used with everything (IPAdapter, ControlNet, etc.)? Or is it like TensorRT, which has limitations and can't work with IPAdapter or ControlNet modules?

gammingtoch

I gave this a go with IP-Adapters and threw some LoRAs at it too. It did not play nicely with those, unfortunately.

Andro-Meta

Where exactly is the workflow on your website?

petey

I've never seen Comfy in action before; I've been using A1111, and that is a very interesting way to do it. Too bad the final images are still total garbage. I really hope image generation gets to the point of realism soon; it just has so much potential.

pfifo_fast

I will stick with Turbo and Lightning models for now.

ukdcom

Lightning also comes as LoRAs, which work with any SDXL base model (even... 'ugh'... Pony). They're available in 2-, 4-, and 8-step versions, so you don't need special 'Lightning' checkpoints at all. Eight steps is a lot less than twenty-something, and I get better results out of all my models.
✌🥳👍

thomasgoodwin
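
For anyone who wants to try the Lightning LoRAs mentioned above outside ComfyUI, ByteDance publishes them on Hugging Face for use with diffusers. A minimal sketch, with the repo and filename as listed on the ByteDance/SDXL-Lightning model card (verify them before running):

```python
import torch
from diffusers import StableDiffusionXLPipeline, EulerDiscreteScheduler
from huggingface_hub import hf_hub_download

base = "stabilityai/stable-diffusion-xl-base-1.0"  # any SDXL base model
repo = "ByteDance/SDXL-Lightning"
ckpt = "sdxl_lightning_8step_lora.safetensors"     # 2/4/8-step variants

pipe = StableDiffusionXLPipeline.from_pretrained(
    base, torch_dtype=torch.float16, variant="fp16").to("cuda")
pipe.load_lora_weights(hf_hub_download(repo, ckpt))
pipe.fuse_lora()

# Lightning expects "trailing" timestep spacing and little to no CFG.
pipe.scheduler = EulerDiscreteScheduler.from_config(
    pipe.scheduler.config, timestep_spacing="trailing")

image = pipe("a photo of a cat", num_inference_steps=8,
             guidance_scale=0).images[0]
image.save("cat.png")
```

Note the trade-off raised earlier in the thread: at a guidance scale this low, the negative prompt has essentially no effect, which is exactly what AYS lets you keep.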

I'm sorry, but your video is kind of strange; it feels like there's a filter applied that blurs the image and makes your eyes hurt. If possible, don't use it again.

ДиДи-мю

I get awful results with the exact same workflow (different SDXL checkpoint, btw).

TentationAI

Stop pointing at me. It makes me uncomfortable.

ryanknowles

I still can't get over how terrible eyes look in SDXL.

Because_Reasons

Thanks for the video; personally I am not impressed, meh.

-Belshazzar-