A look at the NEW ComfyUI Samplers & Schedulers!

A whole bunch of updates went into ComfyUI recently, and with them we get a selection of new samplers such as EulerCFG++ and DEIS, as well as the new GITS scheduler. See them all in action, then try it yourself at home!
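For context, a "scheduler" just decides which noise levels (sigmas) the sampler visits on its way from pure noise to a clean image. A minimal sketch of the well-known Karras schedule (the math behind the "karras" option); the sigma_min/sigma_max defaults here are the usual SD1.5-style values, shown for illustration:

```python
import numpy as np

def karras_sigmas(n: int, sigma_min: float = 0.0292,
                  sigma_max: float = 14.6146, rho: float = 7.0) -> np.ndarray:
    """Karras et al. (2022) schedule: interpolate in sigma^(1/rho) space."""
    ramp = np.linspace(0, 1, n)
    min_inv_rho = sigma_min ** (1 / rho)
    max_inv_rho = sigma_max ** (1 / rho)
    sigmas = (max_inv_rho + ramp * (min_inv_rho - max_inv_rho)) ** rho
    return np.append(sigmas, 0.0)  # samplers expect a trailing zero

print(karras_sigmas(10))  # 10 steps, front-loaded toward high noise
```

Schedulers like GITS differ only in how they pick those sigma values; the sampler (Euler, DEIS, ...) decides how to step between them.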

Want to support the channel?

== Learn More Stuff! ==
Comments

Great resource, however I think the samplers would have been best tested with real faces, as everyone could get a better gauge of how well they work on images with proportions everyone is familiar with.

scottownbey

I don't know why, but the algorithm has been hating on you lately... I haven't been recommended one of your videos in "a rat's age". That's like "a dog's age", but nerdier. xP

mordokai

Interesting. Which samplers are better? I'm a bit of a newbie. I've been using DPM++ 2M Karras and Euler/Euler Ancestral in ComfyUI with the regular KSampler node. Should I switch to these new samplers and schedulers, and what would the benefits be? Any speed or quality improvements? I couldn't tell from the video. I'm mostly doing img2img stuff with ControlNets and IP-Adapters rather than generating from scratch. Would these benefit that use case?

bgtubber
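Regarding the question above: sampler_name and scheduler are just string inputs on the KSampler node, so A/B testing the new options on an existing img2img workflow is cheap. A minimal sketch, assuming a workflow exported via "Save (API Format)" and that your build exposes the "deis" and "gits" names (check your KSampler dropdowns):

```python
import json

# "workflow_api.json" is a placeholder for your own exported workflow.
with open("workflow_api.json") as f:
    wf = json.load(f)

# Swap the sampler/scheduler strings on every KSampler node.
for node in wf.values():
    if node.get("class_type") == "KSampler":
        node["inputs"]["sampler_name"] = "deis"  # assumed option name
        node["inputs"]["scheduler"] = "gits"     # assumed option name

with open("workflow_api_deis_gits.json", "w") as f:
    json.dump(wf, f, indent=2)
```

Whether they help an img2img + ControlNet pipeline is mostly an empirical question; same seed, same denoise, different sampler strings is the quickest way to find out.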

Another exquisite and much-needed piece of content to enjoy!! 😊

swannschilling

After the last Comfy update I can't find the extra schedulers; all I found is a new one called beta. How can I get yours? Should I install some custom nodes?

makadi
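The samplers and schedulers shown in the video ship with core ComfyUI rather than a custom node pack, so updating ComfyUI itself (git pull, or the update script/Manager button) should surface them. A quick sketch to see what your install exposes, assuming you run it from the ComfyUI root with its Python environment active and that the KSampler class attributes are laid out as in recent builds:

```python
# Lists the sampler/scheduler names your local ComfyUI build registers.
import comfy.samplers

print("samplers:  ", comfy.samplers.KSampler.SAMPLERS)
print("schedulers:", comfy.samplers.KSampler.SCHEDULERS)
```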

Thanks for the new video. As usual, gold af.
EDIT: rofl, nice outro

ArtfulStory

I generally use AYS, but for unsampling, GITS can get away with 2x6 steps (6 unsampling, 6 sampling) at 1080x1920 without any diffusion LoRA, and the image is nearly identical to a 20-30 step run with a regular scheduler or 10 steps with AYS.

Ethan_Fel
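For readers new to the idea: "unsampling" runs the schedule in reverse (image to noise) and then samples forward again, so a 2x6 setup costs 12 model calls versus 20-30 for a single regular pass. A toy sketch of the schedule bookkeeping only; the real thing in ComfyUI is wired with custom-sampling nodes such as FlipSigmas, and the geometric schedule here is a stand-in:

```python
import numpy as np

# Any 6-step schedule works for the bookkeeping; a simple geometric one here.
sigmas = np.append(np.geomspace(14.6, 0.03, 6), 0.0)  # 7 boundaries = 6 steps

unsample_sigmas = sigmas[::-1]  # reversed: image -> noise (roughly what FlipSigmas does)
resample_sigmas = sigmas        # forward again: noise -> image

# 6 unsampling + 6 sampling = 12 model evaluations total.
total_calls = (len(unsample_sigmas) - 1) + (len(resample_sigmas) - 1)
print(total_calls)  # 12
```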

Thanks for the great, detailed video - but I think we are getting overwhelmed with choice that doesn't seem to offer a substantial 'reward'. In other words, not much difference between them really, for all that extra time fiddling about with different settings :) I can see the use cases in video generation for extra speed - but far less so for static image generation.

Artp

Thank you. I have a question: is it possible to add HighRes-Fix Script to the Custom Sampler? I know you can connect a second KSampler and then HighRes-Fix Script, but I'd like to be able to do it directly.

michail_
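One way to get the same effect without the HighRes-Fix Script node is to wire the hires-fix pattern by hand: a full-denoise pass at base resolution, a latent upscale, then a second pass with denoise < 1. A sketch in ComfyUI's API format; node IDs and the upstream links ("4", "5", "6", "7") are placeholders for your own model/latent/conditioning nodes, and the second stage could just as well be a SamplerCustom chain:

```python
# Hand-rolled hires fix: base pass -> latent upscale -> low-denoise refine pass.
hires_fix = {
    "10": {"class_type": "KSampler", "inputs": {
        "model": ["4", 0], "positive": ["6", 0], "negative": ["7", 0],
        "latent_image": ["5", 0], "seed": 42, "steps": 20, "cfg": 7.0,
        "sampler_name": "dpmpp_2m", "scheduler": "karras", "denoise": 1.0}},
    "11": {"class_type": "LatentUpscaleBy", "inputs": {
        "samples": ["10", 0], "upscale_method": "nearest-exact", "scale_by": 1.5}},
    "12": {"class_type": "KSampler", "inputs": {
        "model": ["4", 0], "positive": ["6", 0], "negative": ["7", 0],
        "latent_image": ["11", 0], "seed": 42, "steps": 20, "cfg": 7.0,
        "sampler_name": "dpmpp_2m", "scheduler": "karras", "denoise": 0.5}},
}
```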

Nerdy, a question: I'm using Comfy through Stability Matrix and I keep getting import errors (one example is Dreamtalk). Is the import error due to using Stability Matrix, or is it the software itself?

Avalon

How do you get your noodle colors so nice?

superlucky

What about video generation? Nothing faster than LCM for now, is there?

andrejlopuchov
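For reference, the usual LCM speed setup in ComfyUI is an LCM LoRA on the model, ModelSamplingDiscrete switched to "lcm", the "lcm" sampler, few steps, and low CFG. A sketch in API format; node IDs, upstream links, and the LoRA filename are placeholders, and the exact option names should be checked against your build:

```python
lcm_setup = {
    "20": {"class_type": "LoraLoaderModelOnly", "inputs": {
        "model": ["4", 0],                          # your checkpoint loader
        "lora_name": "lcm-lora-sdv15.safetensors",  # hypothetical filename
        "strength_model": 1.0}},
    "21": {"class_type": "ModelSamplingDiscrete", "inputs": {
        "model": ["20", 0], "sampling": "lcm", "zsnr": False}},
    "22": {"class_type": "KSampler", "inputs": {
        "model": ["21", 0], "positive": ["6", 0], "negative": ["7", 0],
        "latent_image": ["5", 0], "seed": 0, "steps": 6, "cfg": 1.5,
        "sampler_name": "lcm", "scheduler": "sgm_uniform", "denoise": 1.0}},
}
```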

Maybe add a link to what you show at 0:20? It's reachable through the 2nd link you have in the description.

FusionDeveloper

Hoping that when you said it went up to 11, you were giving a nod to Spinal Tap.

lokitsar

A Samplers & Schedulers Xmas!!!! Noice 🙌👌

flisbonwlove

You're probably sick of answering this, Nerdy, but what is your Linux setup? It would be cool to see a tutorial.

OskarBravo

Nice, but at the same time I'm scared, because these are just new options to explore by trial and error. So let's spend another half a year generating images with different parameters, then another half a year previewing them and picking. I'm afraid some concept is lost in this nightmare: we needed a tool to make images quickly based on what we think, and AI text interpretation and image generation were supposed to do that. But observing all the communication, tools, videos, and discussions about it, I see a countless number of hours spent worldwide on trying to deal with its weaknesses.

Of course the AI/ML direction is desirable, but I believe the future is moving it into the 3D domain, because that reflects our world: some deep integration of AI with a 3D engine, physics, collision detection, etc. Instead of spending hundreds of hours trying to fix AI artifacts, maybe it's better to spend them on literally manual creation of some part of a 3D model for Blender, and then let AI do the arrangement of models in the scene, etc. Combine 3D "thinking" with the 2D we have now in AI generators for backgrounds and textures. Take the "woman on grass" example from SD3 Medium: if there were just a customizable, parametrizable 3D model to be posed and placed in the surrounding environment by the AI, following rules of physics rather than analyzing billions of parameters from 2D images that indirectly try to reflect a mapping from the 3D world to a 2D image, then I believe we could avoid "body horrors" and many other artifacts.

mptest