Exploring Image-to-Image with Flux.1 Schnell: A Deep Dive into the Latest Update in ComfyUI!

Discover the new image-to-image capabilities of Flux.1 Schnell in ComfyUI.
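If you want to try the same image-to-image idea outside of ComfyUI, here is a minimal sketch using the Hugging Face diffusers library, assuming diffusers and PyTorch are installed; the model ID, file names, prompt, and parameter values are illustrative assumptions rather than the exact settings from the video.

# Sketch: Flux Schnell image-to-image with diffusers (assumed setup, not the ComfyUI graph)
import torch
from diffusers import FluxImg2ImgPipeline
from diffusers.utils import load_image

pipe = FluxImg2ImgPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-schnell", torch_dtype=torch.bfloat16
)
pipe.enable_model_cpu_offload()  # helps on GPUs with limited VRAM

init_image = load_image("input.png")  # hypothetical input file
result = pipe(
    prompt="anime style portrait",  # example prompt
    image=init_image,
    strength=0.7,           # plays the role of the denoise factor: lower keeps more of the original
    num_inference_steps=4,
    guidance_scale=0.0,     # Schnell is a distilled model, so guidance is not used
).images[0]
result.save("output.png")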

If you found this video helpful, please give it a like and subscribe to the channel so you don't miss out on future tutorials. You can also support the channel by joining my Patreon or becoming a free member.

[SUPPORT THE CHANNEL]

[RESOURCES]

[SOCIAL MEDIA]

[BUSINESS INQUIRIES]
For professional inquiries and collaborations, please contact me via email:
(Use this email for business-related matters only)

[LAST TEN VIDEOS]

[TIMESTAMPS]
00:00 Introduction
00:30 What is Flux?
00:54 Workflow
03:09 Dev model
04:08 Conclusion
04:15 Time taken
04:47 Outro

Thank you for watching!

[TAGS]
comfyui, Code Crafters Corner, CodeCraftersCorner, Image Editing, Creative Generation, AI Image Processing, Image Stylization, Workflow Automation, Comfy UI, Stable Diffusion,
Flux 0.1 Schnell, image to image, ComfyUI, AI model update, tech tutorial, AI art, digital art, image processing, AI workflow, AI technology, denoising factor, VAE encode, anime style transformation, Camenduru, FP8, text prompt, advanced sampler, tech review, AI updates, deep learning, neural networks, digital creativity, system requirements.

[HASHTAGS]
#StableDiffusion #ComfyUI #CodeCraftersCorner #ImageEditing #CreativeGeneration #aiprocessing #ImageStylization #WorkflowAutomation #TechReview #YouTubeTech #StableDiffusion #FluxUpdate #ImageToImage #ComfyUI #AIArt #TechUpdate
[COMMENTS]

Thanks for all these super helpful videos you are sharing, Sharvin 🙏🏼

SebAnt

You are a kind man. With every video you post, I learn something new. You explain so many complex topics in a simple and easy-to-understand way, without wasting a single word. Keep up the good work. I hope to see your videos more often. Thank you.

captainpike

The Flux model is so amazing. I tried it on mimic, which can generate sub-pictures based on original pictures, and it maintained high quality and texture.

Bpmf-gu

Very nice video and a very useful workflow! At 3:00: the Schnell model isn't limited to 4 steps; you can increase the number of steps well beyond that. It's simply optimised to generate nearly optimal images (txt2img) with only 4 steps, which is what makes it faster to use, of course.

pokerandphilosophy
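That 4-step sweet spot is easy to check outside ComfyUI as well; a minimal sketch with the diffusers FluxPipeline (model ID and prompt are assumptions) generates the same prompt at a few different step counts for comparison.

# Sketch: compare Schnell output at different step counts (assumed diffusers setup)
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-schnell", torch_dtype=torch.bfloat16
)
pipe.enable_model_cpu_offload()

for steps in (1, 4, 8):  # 4 is the distilled sweet spot, but more steps are allowed
    image = pipe(
        "a watercolor fox in a misty forest",  # example prompt
        num_inference_steps=steps,
        guidance_scale=0.0,
        max_sequence_length=256,  # Schnell's prompt-length limit in diffusers
    ).images[0]
    image.save(f"schnell_{steps}_steps.png")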

32 GB of RAM and 8 GB of video memory are enough even for a resolution of 2000x2600 (FP16 models). The quality is excellent; you don't even need to upscale. That said, generation with the dev model takes a very long time.

tyeoupg

@CodeCraftersCorner An odd question: with img2img, can the AI be forced to basically reproduce the original image? If so, which setting would achieve this?

contrarian

Your videos are always so useful. Thank you.
My regards to my beloved South Africa!

CharisTsevis

Thanks, as always, for such short and up-to-date information.

sunlightlove

Hi! What about turning a stylized character into a realistic one? Can it be done in the same workflow but in reverse, adjusting the prompt accordingly?

inanis_exe

You should try the FP8 safetensors models; they are a lot smaller and run faster 👍

MrSmooX

I'm running Flux Schnell FP16 on 4 GB of VRAM with no issues, but I have 32 GB of RAM.

erikta

I've been running Dev. I tried Schnell but deleted it, as it's demonstrably worse quality, and at the same 24 GB model size there's no point keeping it. I've got a 3090 and upgraded to 64 GB of RAM earlier today. I can run FP16 and haven't had any errors with it. One thing I have noticed: if I set the weight dtype to default, it takes over 500 s to generate an image. If I change that to fp8_e4m3fn, it drops to around 28 s. I still have the CLIP loader set to the T5 FP16 version, but I'm not sure if the weight dtype is overriding that and setting it to FP8, and whether that's why the time is so different.

runebinder
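For context on the numbers above: fp8_e4m3fn stores one byte per weight versus two bytes for FP16, which halves the model's memory footprint; the large speed difference is most likely the model fitting entirely in VRAM instead of being offloaded, though that is an assumption. A tiny plain-PyTorch sketch illustrates the storage difference.

# Sketch: fp8_e4m3fn vs fp16 storage (requires PyTorch 2.1+; nothing ComfyUI-specific)
import torch

w_fp16 = torch.randn(4096, 4096, dtype=torch.float16)
w_fp8 = w_fp16.to(torch.float8_e4m3fn)  # lossy cast, 1 byte per value instead of 2

print(w_fp16.element_size(), "byte(s)/value,", w_fp16.numel() * w_fp16.element_size() // 2**20, "MiB total")
print(w_fp8.element_size(), "byte(s)/value,", w_fp8.numel() * w_fp8.element_size() // 2**20, "MiB total")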

Just a constructive suggestion: try not to dance in front of the camera. Thanks for your videos and the timeliness with which you post.

AntonioSorrentini

11 minutes for 1 image?

And I see 1729 seconds in your screenshot, so almost 29 minutes for 1 img2img?

Is that correct, and what was your setup and GPU for these tests?

Cheers!

geraldfabrot