8-Step Turbo-Powered Flux Image Editing & Inpainting in ComfyUI!

Three new things this week, with the release of two models from AliMama along with some Rectified Flow Inversion (unsampling via RF Inversion)!

Still using more than 8 steps and waiting minutes for your results? Slap the power of Turbo in for extra-fast image generation using Flux, AND it works with RF Inversion for amazing image edits! What are you waiting for? Not Turbo, that’s for sure 😉

Quickly and easily style any image. Turn anime images into realistic ones, or realistic images into cartoons… or 3D models, or wood, or… RF Inversion is super versatile!

N.B. With recent ComfyUI updates, Centre no longer centres the view and has instead become Fit to View, but hey 🫤

Want to help support the channel?

== Beginners Guides! ==

== More Flux.1 ==
Comments

I love this YouTube channel. every time I can't wait for new videos to come out. I literally hang on this guy's every word to try all the new stuff.

p_p

The rectified flow unsampling blew my mind. Thank you very much for sharing your videos and nerdy wisdom! :D

juanjesusligero

Thank you! You’re a gentleman and a scholar!

SouthbayCreations

5:28 The bug is in the rectified sampler node's Python script. In short, the random generator needs to be re-initialised with the seed before every use, not just the one time it currently is.
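For anyone curious, the principle behind the fix this comment describes can be sketched in plain Python. This is a minimal, hypothetical illustration (the function names and structure are assumptions, not the actual node's code): re-seeding before each use makes every draw deterministic regardless of what ran before, which is what a sampler needs for reproducible noise.

```python
import random

# Hypothetical sketch of the seeding bug described above; names are
# assumptions, not the actual rectified sampler node's code.

def noise_seeded_once(seed, n_draws):
    rng = random.Random(seed)      # seeded a single time...
    # ...so the generator's state drifts with every draw, and results
    # depend on how many draws happened before this one.
    return [rng.random() for _ in range(n_draws)]

def noise_reseeded_each_use(seed, n_draws):
    draws = []
    for _ in range(n_draws):
        rng = random.Random(seed)  # re-initialise the seed before every use
        draws.append(rng.random())
    return draws
```

With seeding done only once, the draws differ and depend on call order; re-seeded before each use, every draw is identical and reproducible from run to run.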

Main

Oh, Nerdy Rodent, 🐭🎵
he really makes my day, ☀😊
showing us AI, 💻🤖
in a really British way. ☕🎶

juanjesusligero

How does this way of inpainting compare to just inpainting with Differential Diffusion? Is it better and/or faster? Thank you.

DanielPartzsch

10:30 I looked at Nerdy Rodent's workflow, but it's hard to follow the network in its spaghetti state. Why not use an Anything Everywhere node or something to clean up the network? 🤔

岩田邦夫-vk

Why is there no workflow for the Flux Turbo Unsampler in the description?

radekuralmosa

Would that inpainting work for e-commerce stuff as well? Like backgrounds of product photos, for example. Normally I'd use Photoshop's AI for this.

fbnxo

So if I understand correctly, this unsampling workflow can turn cartoon images into realistic and vice versa?

bgtubber

This is awesome... so it's possible to integrate ControlNet Depth with RF Inversion to transfer style?

spiritform

I always enjoy and learn a lot from your wonderful lectures. In the lecture, I'm a bit confused about where the conditioning should be connected in the RF Unsampler prompt. Could you please clarify?

moviecartoonworld

Will there be a guide to setting this up in their own workflows, instead of a one-size-sort-of-fits-all workflow?

DaveTheAIMad

Two things that would make image generation models and ComfyUI more friendly for "creative" user types:

1. Standard naming conventions for the files of the various models, model types, checkpoints, LoRAs, LyCORIS, whatever else they call them, with the loader to use in the name: GGUF, NF4, Flux 1, Hyper, Turbo, Lightning, Pony, SD 1.5, SDXL 1.0, SD3, whatever else they call them, and whether they're multi-model like Flux + SDXL.

2. The ability to direct the flow of your workflow. Rather than pushing a rock over a cliff and watching the destruction, a switch or button, just like a railroad uses to direct a train to its desired stops along a route. Then I could try different settings without rewiring a CRAY switchboard.

I know neither will happen, but while I admire the work others have accomplished, I'll tinker around connecting random things to random files and might eventually get lucky.

marshallodom

Hey brother, I have a low-end PC, an RTX 3050 laptop with 4 GB VRAM and a Ryzen 4800H, so I think it can't run this locally. Can you please tell me how I can use Rectified Flow Inversion online for free? Please!

devilgamingyt

Wow! I think Flux unsampling is my new addiction! XD

MrSporf

This is truly amazing! You've got yourself a new patron! I have two questions: will this inpainting workflow work if I bring in an image as a converted mask? Right now I'm receiving an error that the AliMama ControlNet is missing the image and mask. Appreciate these videos, I've been following along since the first AnimateDiff setup!

kaelrose

I just subscribed! Great content, btw. Do you have a way I can set up the same workflow you're using in the video? 😊

MettameowChannel