Stable Cascade Comfy Checkpoint Update

Note: the previous releases still work, but they use a more complex CLIP/UNET/VAE method; the new release is just two checkpoints, which go inside the /models/checkpoints/ folder alongside your other checkpoints.
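
If you want to confirm the files ended up in the right place, a quick check like the sketch below can help. The folder layout follows the note above; the checkpoint filenames and the ComfyUI install path are assumptions and may differ from what you actually downloaded.

```python
from pathlib import Path

COMFY_ROOT = Path("ComfyUI")  # adjust to wherever ComfyUI is installed
CHECKPOINT_DIR = COMFY_ROOT / "models" / "checkpoints"

# Assumed filenames -- use whatever the downloaded files are actually called.
expected = [
    "stable_cascade_stage_b.safetensors",  # Stage B (decoder) checkpoint
    "stable_cascade_stage_c.safetensors",  # Stage C (prior) checkpoint
]

for name in expected:
    path = CHECKPOINT_DIR / name
    print(("found:   " if path.is_file() else "MISSING: ") + str(path))
```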

We have four new workflows for ComfyUI:
txt2img, img2img, img2vision and img2remix

- Workflow Packs:

- SDXL LoRAs

- Introducing series (music/video)

- Checkpoint Merging

- cosXL / cosXL-edit conversion

- 3D Generation

- New Diffusion Models (April '24)
Stable Cascade:
SDXS-512:
cosXL & cosXL-edit:

- Stable Cascade series:

- Image Model Training

- Music with Audacity

- DJZ custom nodes (aspectsize node)

stable diffusion cascade
stable diffusion lora training
comfyui nodes explained
comfyui video generation
comfyui tutorial 2024
best comfyui workflows
comfyui image to image
comfyui checkpoints
civitai stable diffusion tutorial
Comments

I would love to look at that multi-workflow. Great stuff!

garrettdonahue

Really cool! It's nice that you keep chasing the newer versions, I appreciate that! It seems the compression value is related to quality and resolution: 64 seems good for really large images, while 42 seems better for 1024 images.

WanerRodrigues
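
For anyone wanting a feel for the trade-off described in the comment above: Stage C works on a latent grid that is roughly the image size divided by the compression value, so higher compression means a coarser grid at the same resolution. The helper below is back-of-the-envelope arithmetic only, not the exact math of the Stage C encode node.

```python
def approx_stage_c_latent_side(image_side_px: int, compression: int) -> int:
    """Approximate number of Stage C latent cells along one side of the image."""
    return image_side_px // compression

for side, comp in [(1024, 42), (1024, 64), (2048, 64)]:
    cells = approx_stage_c_latent_side(side, comp)
    print(f"{side}px at compression {comp} -> ~{cells} latent cells per side")
```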

Very much appreciated. Works nicely with the new Comfy models.. very impressive actually on 4GB VRAM and 20? steps total? Wow. Game changer, I hope.

sprinteroptions

If there is anything I've learned, it's that patience pays off and it's not always an advantage to be an early adopter!

The new ComfyUI checkpoints are very efficient memory-wise.
The largest amount of VRAM used while loading and running the workflow was 8.4GB for the models alone, not counting any other system overhead in VRAM (other apps such as YT, etc.).
Comfy used 16GB of system RAM to cache the models and other data.
So overall these changes are much more efficient.
Those with 8GB cards should be able to run these workflows, but they'll get a bit of "swapping" where Comfy needs to access system RAM for some of the model data; it shouldn't crash, though.
Not sure what will happen with much smaller VRAM cards like 6GB, which might bog down with memory transfers between the GPU and system RAM.

glenyoung
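
If you want to reproduce measurements like the ones in the comment above from your own script, PyTorch can report the peak VRAM it allocated in the current process. It won't see other applications' usage, which matches the "models only" caveat. A minimal sketch:

```python
import torch

if torch.cuda.is_available():
    torch.cuda.reset_peak_memory_stats()

    # ... load the checkpoints and run the workflow / inference here ...

    peak_gb = torch.cuda.max_memory_allocated() / 1024**3
    print(f"Peak VRAM allocated by this process: {peak_gb:.1f} GB")
else:
    print("No CUDA device available.")
```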

Great work. I've been looking at an extension for Forge, but it takes 5 minutes per image versus 20 seconds in ComfyUI, and the Comfy images are way bigger.

gunsinger

Great stuff, but I'm having issues getting the Reactor node installed; Comfy just doesn't seem to like it...

MarkNelson-zolr

You're the best! Please add a ControlNet node.

djivanoff

I am on AMD and always have memory problems with models bigger than SD1.5 (recently solved it by always using lowvram with SDXL), and Cascade seemed to work without any problems at all with the previous 3-file method. Now that I see the two models are big (one of them 9 GB), there's no way I could run them successfully without massive slowdowns, etc. Do we have to switch to these? Are there any advantages to using these over the 3 separate files?

HasanAslan

I loaded the checkpoints correctly, but I get the following error: "Error occurred when executing CLIPTextEncode: 'NoneType' object has no attribute 'tokenize'". My GPU is a 2070 Super. Not sure what's causing this error?

DonNardi
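
A note on the error in the comment above: "'NoneType' object has no attribute 'tokenize'" is what appears when the CLIP object handed to CLIPTextEncode is None, i.e. the loaded file contained no text encoder (for example, if one of the older UNET-only files is loaded through the plain checkpoint loader). That cause is a guess from the message alone; the minimal sketch below just shows how the message arises.

```python
# The CLIP slot ends up as None when the loaded file carries no text-encoder
# weights; CLIPTextEncode then fails on its very first call.
clip = None
try:
    clip.tokenize("a photograph of an astronaut")
except AttributeError as err:
    print(err)  # 'NoneType' object has no attribute 'tokenize'
```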

I noticed the file sizes of the checkpoints are smaller than the UNet models. Are the checkpoints lower quality than the previous UNet models?

JalenJohnson
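
On the file-size question above: a likely explanation (not confirmed here) is weight precision rather than quality; storing the same weights in fp16/bf16 instead of fp32 roughly halves the file size. The arithmetic below uses Stage C's published 3.6B parameter count as an illustration and ignores the extra components (text encoder, VAE) bundled into a full checkpoint.

```python
def approx_size_gb(num_params: float, bytes_per_param: int) -> float:
    """Rough checkpoint size from parameter count and storage precision."""
    return num_params * bytes_per_param / 1024**3

stage_c_params = 3.6e9  # published Stage C (large) parameter count
print(f"fp32:      ~{approx_size_gb(stage_c_params, 4):.1f} GB")
print(f"fp16/bf16: ~{approx_size_gb(stage_c_params, 2):.1f} GB")
```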

This error comes up in all 4 workflows.

yklandares