AuraFlow in ComfyUI - A First Look at this Truly Open Source Model!

A Stable Diffusion-style image model with a truly open source license? Excellent! Count me in :) AuraFlow claims to be exactly that, and it's supported directly in ComfyUI... but what are the image generations like in this 0.1 beta-test release?

Want to support the channel?

Patreon post for this video:

== More Stable Diffusion Stuff! ==
Comments

For a 0.1 version this model is great. It's still not fully trained and it's already better than SD3 in many regards while being truly open source.

Dave-rdsp

"Can it do hands?"
"Can it do muddy red Wellies?"
Yes. Yes, it can. Welp, passed all my highest priority tests.

DoorknobHead

Oh, Nerdy Rodent, he really makes my day; showing us AI, in a really British way.

juanjesusligero

I can actually confirm it even runs with 6GB VRAM but only very slowly. "Very slowly" as in "It takes 10 minutes or more to generate a single 1024x1024 image".

chaotichuman

It seems to be at least better than the bare SD 1.5 base model - and look what the community has made out of that. So a few tweaks and finetunes down the line, and we'll have an interesting SD competitor. Keep us updated!

MrMsschwing

For what it's worth, it runs fine on my mobile 4090 with 16GB of VRAM, albeit a bit slowly. I was even able to run a batch size of 4 at 832x1216.

richgates

It's always good to have different actors on the scene! A bit of competition is always nice, and it's only in beta, so I guess we'll see more from them!

pon

Not a bad first impression.
Hopefully it's good enough for the community to update their tools for it.
A competitor to SD3 is sorely needed right now.

viddarkking

Comfy still looks way more complicated to me compared to A1111, so I haven't taken the plunge yet, but it's still interesting to see new things via Comfy.

PS Nerdy, I like your short theme music at the end. Reminds me of early Stranglers. Would be appropriate if you actually have Rattus Norvegicus in your LP collection.🐀

buttersstotch

It's a nice showcase of what seems to be an early-access version of the final model. Right now it's pretty slow with uni_pc (1.4 s/it on a 3090 at 1024x1024) and produces nice results, but nothing groundbreaking.

We also got basically no guidance on how to use this thing efficiently, or which CFG values and schedulers to pick. Still, I'm very hopeful about the future of this model compared to what Stability AI has been making! :)
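The 1.4 s/it figure quoted above translates directly into per-image time once you pick a step count. A quick back-of-the-envelope sketch (the 25-step count is an assumption, a common sampler default, not something stated in the comment):

```python
def estimated_time(seconds_per_it: float, steps: int) -> float:
    """Rough wall-clock seconds for one image: steps * seconds-per-iteration.

    Ignores model load, VAE decode, and other per-run overhead.
    """
    return seconds_per_it * steps

# 1.4 s/it on a 3090 (from the comment above), assuming 25 sampling steps:
print(round(estimated_time(1.4, 25), 1))  # -> 35.0 seconds per image
```

That lines up loosely with the ~33 s per image another commenter reports below on a 4070 Ti.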

balanse

TBH this looks like the next big thing to me; cloneofsimo brought us LoRAs, and this model can only get better given its license.

knoopx

Shows a lot of promise for an early beta, hope to see this come to InvokeAI soon!

sammcj

I have a feeling it doesn't have training data with labeled styles at all - most likely the data was bulk-captioned by a vision model.

MyAmazingUsername

It looks very promising. Unfortunately, the 24 GB VRAM requirement is going to be a hard limit on how widely the community adopts it.

pn

I am using an RTX 3060 with 12GB VRAM and it works fine, only a little slow: about 2 minutes 10 seconds per image on average.

Sebucan

I am using an RTX 4070 Ti with 16GB VRAM and it works great: about 33 seconds per image.

Cyberdjay

It obviously needs work, and ecosystem staples like ControlNet, but open source providing an escape route away from SD's enshittification is a massive W.

silvermushroom-gamifyevery

Amazing content as always! I'm hoping for a new model or workflow that could automate the creation of 3D or 2D game and animation assets.

fdimb

thanks this was indeed very interesting 🙂

amortalbeing

What does this model do differently? And how many fewer iterations does it take to create something good?

VaibhavShewale