Stable Diffusion ComfyUI Text2img + Img2img Mega Workflow | Part 1 | Latent Hi-Res Fix | ComfyUI

#stablediffusionart #stablediffusion #stablediffusionai

In this video I explain a Text2img + Img2img workflow in ComfyUI with a latent hi-res fix and an upscaler.
ComfyUI is a node-based user interface for Stable Diffusion: you build image-generation workflows by wiring nodes together.
This is part 1 of the mega workflow I am planning to create. Enjoy the detailed walkthrough of the workflow in Stable Diffusion ComfyUI.
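For readers who want the gist in code: below is a minimal sketch of the same hi-res-fix idea using the Hugging Face diffusers library rather than ComfyUI nodes. The model id, prompt, and 0.5 denoise strength are illustrative assumptions, and ComfyUI's latent hi-res fix upscales the latent tensor directly, while this sketch uses the simpler pixel-space variant.

```python
import torch
from diffusers import StableDiffusionPipeline, StableDiffusionImg2ImgPipeline

model_id = "runwayml/stable-diffusion-v1-5"  # assumption: any SD 1.5 checkpoint works here
device = "cuda" if torch.cuda.is_available() else "cpu"
dtype = torch.float16 if device == "cuda" else torch.float32

# Pass 1 (text2img): generate at the model's native 512x512 resolution.
txt2img = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=dtype).to(device)
prompt = "a warrior in ornate armor, cinematic lighting, highly detailed"
base = txt2img(prompt, width=512, height=512, num_inference_steps=25).images[0]

# Upscale step: resize to the target size. ComfyUI's LatentUpscale node does
# this on the latent tensor; resizing the decoded image is the simpler
# pixel-space stand-in used in this sketch.
big = base.resize((1024, 1024))

# Pass 2 (img2img): a partial denoise over the upscaled image re-adds detail
# at the higher resolution - the essence of the hi-res fix.
img2img = StableDiffusionImg2ImgPipeline(**txt2img.components).to(device)
fixed = img2img(prompt, image=big, strength=0.5, num_inference_steps=25).images[0]
fixed.save("hires_fix.png")
```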

I hope you guys like this video.
There is still a lot to explore in Stable Diffusion.
Let me know in the comment section below what Stable Diffusion videos I should make next.

Follow Me On
Discord: ZooFromAI#0737

Links To Downloads And Videos:

Download ComfyUI From Here:

_Music In this Video_
Upbeat Corporate Podcast by Infraction [No Copyright Music] / Marketing:
Comments

Most excellent workflow and great videos on your channel!! ❤️🇲🇽❤️

WhySoBroke

You can check out this workflow on my Discord server.
Just browse to the comfyui workflow channel - Simple Workflow by Coronado -
and download the warrior PNG.

CHILDISHYTofficial

ComfyUI looks very intuitive, but can you use most of the Automatic1111 plug-ins?

xclavo

Great workflow! Thanks for sharing your knowledge! Question: have you seen a workflow that includes “img2img - in-painting - only masked” in ComfyUI?

ysy

👏👏👏 Great video, congrats! Is there a way to drag the warrior.png into ComfyUI and get the workflow automatically? I tried, but it just loads the image without the workflow.

SandroGiambra
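For anyone hitting the same problem: ComfyUI only restores a graph from a dragged-in PNG when the file still carries the workflow JSON in its PNG metadata, and re-saving or converting the image usually strips it. A minimal sketch to check a file with Pillow (the filename is just the warrior example from above):

```python
import json
from PIL import Image

img = Image.open("warrior.png")
meta = getattr(img, "text", {})  # PNG tEXt/iTXt chunks, empty if none survive

if "workflow" in meta:
    workflow = json.loads(meta["workflow"])
    print(f"embedded workflow with {len(workflow.get('nodes', []))} nodes")
else:
    print("no workflow metadata - drag-and-drop will load only the pixels")
```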

Thank you, bro. Another masterclass from Childish YT. Could you please explain why you have a VAEEncode then a VAEDecode in the UltraSharp upscale flow?

deviance
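For readers with the same question: pixel-space upscale models such as 4x-UltraSharp operate on decoded images, while KSampler operates on latents, so an upscale pass has to round-trip through the VAE - VAEDecode, upscale, VAEEncode. A shape-only sketch of that round-trip, with stand-in functions rather than real models:

```python
import torch
import torch.nn.functional as F

def vae_decode(z):   # latent -> pixels; SD's VAE has an 8x spatial factor
    return torch.rand(z.shape[0], 3, z.shape[2] * 8, z.shape[3] * 8)

def vae_encode(x):   # pixels -> latent
    return torch.rand(x.shape[0], 4, x.shape[2] // 8, x.shape[3] // 8)

def ultrasharp(x):   # stand-in: the real model is a learned 4x upscaler
    return F.interpolate(x, scale_factor=4, mode="bicubic")

latent = torch.rand(1, 4, 64, 64)     # a 512x512 image in latent space
pixels = vae_decode(latent)           # VAEDecode: 1x3x512x512
pixels_4x = ultrasharp(pixels)        # pixel-space upscale: 1x3x2048x2048
latent_4x = vae_encode(pixels_4x)     # VAEEncode: 1x4x256x256, ready for the next KSampler
print(latent.shape, pixels_4x.shape, latent_4x.shape)
```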

To be honest, I don't see the point of switching from Automatic1111 to ComfyUI yet. Nevertheless, I have installed ComfyUI. Thanks for the video. What if ComfyUI could combine two or three images - that is, one model generates the background and another generates the character? Models tend to be good at generating either architecture or a person's face, even though they can generate other things too. This can be done with inpainting, but then the background sometimes changes. It would be nice if the developers added three or four inputs to the Utils Reroute node - right now it has only two connections: an input and an output.

michail_

I want to ask you to make a video on how to install ComfyUI and use it with the same models as my existing Stable Diffusion installation.
If anyone knows, please help me.

cmeooo
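For this question: ComfyUI ships an extra_model_paths.yaml.example file in its root folder for exactly this case. Copying it to extra_model_paths.yaml and filling in the a111 section lets ComfyUI load models from an existing Automatic1111-style install without duplicating them. The paths below are examples, and the exact keys may vary by ComfyUI version - check the shipped example file.

```yaml
a111:
    base_path: /path/to/stable-diffusion-webui/   # point this at the existing install
    checkpoints: models/Stable-diffusion
    vae: models/VAE
    loras: models/Lora
    upscale_models: models/ESRGAN
    embeddings: embeddings
```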