New Flux IMG2IMG Trick, More Upscaling & Prompt Ideas In ComfyUI

Fancy a "discount ControlNet"? Maybe you'd like to know more about various upscaling options, or perhaps some interesting prompt ideas would be more your thing? Well, they're all here in an extra-long video that took far too long because of all the testing, but at least your upscales can now be banding-free :)

Links:

Pre-made workflows used in the video (and others) available from -

== Beginners Guides! ==

== More Flux.1 ==

Contents
0:00 Introduction & ComfyUI Flux Prerequisites
1:11 Flux IMG2IMG trick
5:20 Flux Upscale testing
18:52 The Best Vision LLM + Prompt Ideas!
Comments

Great list of upscaling methods. I have also tried tiled diffusion and interpolating the upscaled latent with the Unsampler; these two were the best for me. Tiled diffusion is like Ultimate SD Upscale but without any seam problems, even at high denoise (0.7), while interpolation is complex and I don't really get it, but it's the process that has given me all my best generations with Flux yet.

PaoloCaracciolo

Thank you as always for your amazing videos.

moviecartoonworld

Amazing content! Keep up the great work!

LIMBICNATIONARTIST

Enjoyed the video! 🐁🐭
Actually, I use Flux img2img like this: denoise always stays between 0.1 and 0.18, base_shift stays at 0.5, and max_shift can vary anywhere from 2 to 5 = the amount of change.
This way the output both gets the colour influence and the LoRA can add itself to the original, since max_shift effectively acts like denoise without being denoise. Makes sensei?
Thought that was the trick... Cheers!
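The max_shift/base_shift trick above can be put in numbers. Here is a minimal sketch of the interpolation that ComfyUI's ModelSamplingFlux node performs as I read its source (treat the exact constants as an assumption): the node maps image size to a schedule shift mu between base_shift and max_shift, and a larger mu pushes every sigma in the schedule higher, which is why raising max_shift feels like raising denoise without touching the denoise slider.

```python
import math

def flux_shift(width: int, height: int,
               base_shift: float = 0.5, max_shift: float = 1.15) -> float:
    """Interpolate the schedule shift (mu) from image size, the way the
    ModelSamplingFlux node does: 256 latent tokens -> base_shift,
    4096 tokens (a 1024x1024 image) -> max_shift."""
    tokens = (width // 16) * (height // 16)  # Flux packs 2x2 latent patches
    slope = (max_shift - base_shift) / (4096 - 256)
    return tokens * slope + (base_shift - slope * 256)

def shifted_sigma(mu: float, t: float) -> float:
    """Flux time shift: remap a linear timestep t in (0, 1] onto the
    shifted noise schedule. Larger mu -> higher sigma at the same t."""
    return math.exp(mu) / (math.exp(mu) + (1.0 / t - 1.0))

# At the midpoint of the schedule, max_shift = 3 keeps far more noise
# in play than the default 1.15 does:
mid_default = shifted_sigma(flux_shift(1024, 1024), 0.5)
mid_cranked = shifted_sigma(flux_shift(1024, 1024, max_shift=3.0), 0.5)
```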

electrolab

I can only afford control net with 2 limbs but I use the "mirror" option in MSPaint to make a fully formed character. Appreciate you helping us solve this maze

wakegary

img2img is pretty easy with Flux. I prefer fluxunchained with the Flux sampler parameters from Essentials, paired with Florence and a promptgen model. Drop denoise to 0.80 and you get an image with the same basic composition; drop it to 0.40 and it's very, very similar. 24 steps with a Q4 model, around 11 GB VRAM for a 1024x1024, takes around 45 seconds on a 3090. There are also Q5 and Q8 variants of the model.

weirdscix

Thanks for the video. Could you consider showing the node graphs in a less compact, more readable format? It's pretty much impossible to quickly read the flow of a workflow with this kind of layout. I understand the purpose may be to make it fit on screen.

devnull_

Thanks so much! Question: the Denoise node you have up there, the one that ends in a Float output. Which custom node pack is that from? I can't seem to find it.

ToddDouglas

From my experience with Flux and SD Upscale, I think a denoising strength of 0.3 to 0.35 is the best choice. It still adds some detail, but in 95% of cases no funny stuff happens to the image.

equilibrium

Custom sampler, I see, I see. The XLabs one is kinda shitty, ngl.
Also, their IPAdapter is either underdeveloped or heavily censored compared to SDXL.

I will try your method now with i2i.
Also, how did you get Prompts Everywhere working? For me it snaps to the negative, and the positive is missing.

aeit

I miss the speaking avatars. Great video again though

stereotyp

Can't stand node flows, I'd rather do the same with code!
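For those who would rather script than wire nodes: ComfyUI exposes a small HTTP API, so a workflow exported in API format can be queued from plain Python. A minimal sketch, assuming a default local server at port 8188; the two-node fragment and its node IDs are illustrative, not a complete graph:

```python
import json
import urllib.request

COMFY_URL = "http://127.0.0.1:8188"  # default local ComfyUI server (assumption)

def build_payload(workflow: dict, client_id: str = "script") -> bytes:
    """Wrap an API-format workflow dict the way the /prompt endpoint expects."""
    return json.dumps({"prompt": workflow, "client_id": client_id}).encode("utf-8")

def queue_prompt(workflow: dict) -> dict:
    """POST the workflow to a running ComfyUI instance and return its reply."""
    req = urllib.request.Request(
        COMFY_URL + "/prompt",
        data=build_payload(workflow),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())

# Illustrative fragment: load an image, then upscale it 2x with lanczos.
workflow = {
    "1": {"class_type": "LoadImage",
          "inputs": {"image": "input.png"}},
    "2": {"class_type": "ImageScaleBy",
          "inputs": {"image": ["1", 0],        # wire: node 1, output slot 0
                     "upscale_method": "lanczos",
                     "scale_by": 2.0}},
}
# queue_prompt(workflow)  # uncomment with a ComfyUI server running
```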

JNET_Reloaded

I didn't see anything good in this Flux video; I have a simple upscaler that does better than any of the upscalers I've been testing.

romanioamd