ComfyUI - FreeU: You NEED This! Upgrade any model, no additional time, training, or cost!

I want to introduce a brand-new node that Comfy added to his Stable Diffusion system just this morning: it's called FreeU. The concept is that you can change some of the underlying contribution mechanisms of the U-Net, which is the core of Stable Diffusion. The results tend to be much better, and it doesn't slow us down or add any GPU load! #freeu #stablediffusion #comfy #comfyui #aiart
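For anyone who wants to try the same four knobs outside ComfyUI: Hugging Face diffusers exposes FreeU through enable_freeu. A minimal sketch, assuming diffusers >= 0.21 and an SD 1.5 checkpoint; the values shown are just common starting points, not a recommendation:

```python
import torch
from diffusers import StableDiffusionPipeline

# Load an SD 1.5 pipeline (checkpoint name is illustrative).
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# b1/b2 amplify the U-Net backbone features; s1/s2 damp the skip connections.
pipe.enable_freeu(s1=0.9, s2=0.2, b1=1.1, b2=1.2)

image = pipe("a fantasy castle at sunset", num_inference_steps=30).images[0]
image.save("freeu_on.png")

# Call pipe.disable_freeu() to compare against the unmodified model.
```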

Interested in the finished graph and in supporting the channel? Come on over to the dark side! :-)
Comments

2:42 I found a copy-paste trick: Ctrl+C to copy a node, then Ctrl+Shift+V to paste that node with all of its connections, so you don't have to reconnect them again and again. You can do this for multiple nodes too. Thank you for this useful video.

LeKhang

Repost from Reddit:
I focused mostly on context so far. E.g. I was trying to get my model to drive a fantasy scene, but as a photorealistic model it was struggling; I was seeing concrete, glass, etc. in the mid- and background. To try to balance this out I:
- Increased the weight of B1, which controls higher concepts like "fantasy"; reduced S1 (i.e. the B1 skip factor); increased S2 (i.e. the B2 skip factor)
- My settings: B1: 1.3, B2: 1, S1: 0.2, S2: 3.3 <-- I still need to upscale etc. and don't know how damaging the early 3.3 will be to that process.
- My interpretation: I'm trying to tell B2 to step off (I felt it was taking B1's fantasy context and adding modern details to it). I'm assuming B2 gives way to B3, B4, etc. to do their thing.

Inner vs. Outer Blocks
- Outer blocks: focus more on the high-level concepts of your image
- Inner blocks: subsequently focus more on the details

Definitions (see the sketch after this list for how they're applied):
1. B1 = weight applied to Block 1 (the outermost U-Net block)
2. B2 = weight applied to Block 2 (the next block after B1, but I think still considered an outer block relative to B3, B4, etc.)
3. S1 = skip factor applied to B1 (logically, higher skip factors will emphasize B2 more than B1, so an inverse relationship, as someone noted below)
4. S2 = skip factor applied to B2
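A rough sketch of how these weights enter the network, modeled on the published FreeU reference code: each B value scales part of the backbone feature channels at its block, and each S value scales down the low-frequency band of the matching skip connection via an FFT. The half-channel split and threshold below are simplifications; the real implementation picks specific channel counts per block.

```python
import torch
import torch.fft as fft

def fourier_filter(x: torch.Tensor, threshold: int, scale: float) -> torch.Tensor:
    # Scale the low-frequency band of a skip feature map by `scale` (the S value).
    x_freq = fft.fftshift(fft.fftn(x.float(), dim=(-2, -1)), dim=(-2, -1))
    _, _, H, W = x_freq.shape
    mask = torch.ones_like(x_freq.real)
    crow, ccol = H // 2, W // 2
    mask[..., crow - threshold:crow + threshold,
              ccol - threshold:ccol + threshold] = scale
    x_freq = fft.ifftshift(x_freq * mask, dim=(-2, -1))
    return fft.ifftn(x_freq, dim=(-2, -1)).real.to(x.dtype)

def free_u(backbone: torch.Tensor, skip: torch.Tensor,
           b: float, s: float) -> tuple[torch.Tensor, torch.Tensor]:
    # B value: amplify a slice of the backbone feature channels.
    half = backbone.shape[1] // 2
    backbone = backbone.clone()
    backbone[:, :half] *= b
    # S value: suppress the low frequencies of the matching skip connection.
    return backbone, fourier_filter(skip, threshold=1, scale=s)
```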

Some observations:
- Very low B1/concept values turn the image into something very abstract. I think SD starts to interpret concepts (e.g. human, background) very loosely, after which the B2 details can't recover.

roktecha

Thanks for your great videos.
It seems the B values control the style and image quality.
With values around 1 for both, I get very good results.
The S values control the quality/count of the objects in the image (e.g. the number of legs on an animal); here I get the best results with values around 1 and higher for S1, and with low values between 0-0.3 for S2. When the distance between S1 and S2 becomes smaller, more and more objects are generated.

murphylanga

Thank you, as always, for the best tips 💜💜💜

edsonjr-dev

I like it a lot. Not sure what it does, but it does something, and I love some variation.

marjolein_pas

This gets super funky if you run some SD Turbo / LCM combo and have near-realtime feedback... super nice to see the generation change.

TR-

Thank you very much for the great videos. Watching your videos, I can now build node graphs to generate images, and I switched from A1111. This node makes me feel like I'm using a different checkpoint. With S1/S2 fixed at 0.9/0.2, B1/B2 at 1.1/1.3 makes the image very colour-saturated and extremely detail-accentuated (i.e. abs are more visible). Skins look more like they're painted; a 3D checkpoint becomes like a 2.8D one. Lowering to B1/B2 1.0/1.1 is still the same, but less accentuated. However, 0.9/0.9 makes images rather less sharp in my case (this probably differs between checkpoints). I'm gonna try B1/B2 0.9/1.0 to see what's up. I'm generating portrait images only.
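A quick way to reproduce this kind of comparison is to sweep the B pair with everything else pinned, so any visual change comes from FreeU alone. A sketch using the diffusers API; the checkpoint name, prompt, and seed are placeholders, with S1/S2 held at 0.9/0.2 as in the comment:

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# The B1/B2 pairs tried above, plus the one the commenter plans to test.
for b1, b2 in [(1.1, 1.3), (1.0, 1.1), (0.9, 0.9), (0.9, 1.0)]:
    pipe.enable_freeu(s1=0.9, s2=0.2, b1=b1, b2=b2)
    image = pipe(
        "studio portrait photo of a person",
        generator=torch.Generator("cuda").manual_seed(42),  # fixed seed
        num_inference_steps=30,
    ).images[0]
    image.save(f"portrait_b1-{b1}_b2-{b2}.png")
```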

JpMidnight

With 1.5, I have good results with these: 0.9, 0.9, 0.9, 0.2.
It adds some contrast (which makes the images visually sharper) and a tad more saturation... but not too much.
With some seeds, the difference between with and without FreeU is bigger.
It makes euler_a more "realistic" too, in my opinion; less plastic.

TransformXRED

I was wondering, is there a way to easily generate tiling images in Comfy? I quite miss this feature after switching from A1111.

tsutsen

Does this go before or after LoRAs?

My first results: FreeU radically changed the style of my results from pastel and painterly to crisp and dramatic (not what I was looking for).

burghardvonkarger

By the way, why are the width and height 4096 on the CLIP text encoders?

Vestu

Nice video! But I'm a bit confused because I've skipped a few videos. Which video are you referencing that added the ComfyMath node?

appolonius

Seems to be showing a blurrier background and a better-detailed foreground with the settings you have here. However, it also comes with a "happy Kodak color" look, which is both good and bad. I think the background greenery is too green, though.

prattner

Is that math node only for XL, or also for 1.5? I've been wanting something like that.

DoozyyTV

Do you prefer the output from this node to using the refiner? Can the refiner be used in addition to FreeU?

alwilson

Seems to be greatly improving the consistency of text and lines in generations, and it makes results more coherent. Try it with text and vector designs; I am blown away (backbone: 1.1, skip forwarding: 1.1).

CMakr

Using your settings on an SD 1.5 model, it improved the background, added some detail, and the subject is looking at the viewer instead of mindlessly looking elsewhere.

jdsguam

Have you used SAG (Self-Attention Guidance) on A1111? I'm wondering if there's a SAG implementation for ComfyUI. Also wondering what it would do if both were coupled together.

pedrogorilla

Really interesting node. I was looking for something like this.
Another easy way: put only the FreeU node between the checkpoint's model output and the KSampler's model input, and you get something like A1111's extras but with more power. Perfect for keeping the global composition and making little variations. *** best values removed because it depends on... many things :) ***
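For anyone scripting that wiring rather than clicking it together, here is a sketch of the minimal graph in ComfyUI's API prompt format, sent to the default local server. Node class and input names follow the stock nodes as I understand them; the checkpoint filename, prompts, and values are placeholders:

```python
import json, urllib.request

graph = {
    # CheckpointLoaderSimple -> FreeU -> KSampler: FreeU patches only the MODEL stream.
    "1": {"class_type": "CheckpointLoaderSimple",
          "inputs": {"ckpt_name": "sd_xl_base_1.0.safetensors"}},
    "2": {"class_type": "FreeU",
          "inputs": {"model": ["1", 0], "b1": 1.1, "b2": 1.2, "s1": 0.9, "s2": 0.2}},
    "3": {"class_type": "CLIPTextEncode",
          "inputs": {"text": "a scenic landscape", "clip": ["1", 1]}},
    "4": {"class_type": "CLIPTextEncode",
          "inputs": {"text": "", "clip": ["1", 1]}},
    "5": {"class_type": "EmptyLatentImage",
          "inputs": {"width": 1024, "height": 1024, "batch_size": 1}},
    "6": {"class_type": "KSampler",
          "inputs": {"model": ["2", 0],  # <- the FreeU-patched model
                     "seed": 42, "steps": 25, "cfg": 7.0,
                     "sampler_name": "euler", "scheduler": "normal",
                     "positive": ["3", 0], "negative": ["4", 0],
                     "latent_image": ["5", 0], "denoise": 1.0}},
    "7": {"class_type": "VAEDecode",
          "inputs": {"samples": ["6", 0], "vae": ["1", 2]}},
    "8": {"class_type": "SaveImage",
          "inputs": {"images": ["7", 0], "filename_prefix": "freeu"}},
}

req = urllib.request.Request(
    "http://127.0.0.1:8188/prompt",
    data=json.dumps({"prompt": graph}).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
urllib.request.urlopen(req)
```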

AI.ImaGen

Hi Scott, I noticed my SDXL template workflow, which I created from your previous tutorial, is different, and I'm trying to match it. The problem is that I can't find the Batch Size node anywhere. I tried dragging the batch_size and it doesn't show up in the menu. Thanks

ysy