ComfyUI Img2Img Workflow With Latent Hires | Lora + Vae Workflow | ComfyUI Workflows

#stablediffusionart #stablediffusion #stablediffusionai

In this video I explain the basic img2img workflow in ComfyUI in detail.
ComfyUI is a node-based user interface for Stable Diffusion, where you build workflows by connecting nodes.
I walk through, step by step, how to build an img2img workflow in ComfyUI from scratch.
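For reference, here is a minimal sketch of such an img2img graph in ComfyUI's API (JSON) format, written as a Python dict. The node class names and input names are the stock ComfyUI ones; the checkpoint name, image file, prompts, and sampler settings below are placeholders you would replace with your own:

# Minimal img2img graph in ComfyUI API format (a sketch; file names,
# prompts and sampler settings are placeholders).
import json
import urllib.request

workflow = {
    # Load a checkpoint; outputs: 0 = MODEL, 1 = CLIP, 2 = VAE.
    "1": {"class_type": "CheckpointLoaderSimple",
          "inputs": {"ckpt_name": "sd_v1-5.safetensors"}},   # placeholder
    # Load the source image from ComfyUI's input folder.
    "2": {"class_type": "LoadImage",
          "inputs": {"image": "source.png"}},                # placeholder
    # Encode the source image into latent space with the checkpoint's VAE.
    "3": {"class_type": "VAEEncode",
          "inputs": {"pixels": ["2", 0], "vae": ["1", 2]}},
    # Positive and negative prompts.
    "4": {"class_type": "CLIPTextEncode",
          "inputs": {"text": "a detailed portrait, best quality", "clip": ["1", 1]}},
    "5": {"class_type": "CLIPTextEncode",
          "inputs": {"text": "blurry, low quality", "clip": ["1", 1]}},
    # Sampling; denoise < 1.0 is what makes this img2img rather than txt2img.
    "6": {"class_type": "KSampler",
          "inputs": {"model": ["1", 0], "positive": ["4", 0], "negative": ["5", 0],
                     "latent_image": ["3", 0], "seed": 42, "steps": 20, "cfg": 7.0,
                     "sampler_name": "euler", "scheduler": "normal", "denoise": 0.6}},
    # Decode back to pixels and save.
    "7": {"class_type": "VAEDecode",
          "inputs": {"samples": ["6", 0], "vae": ["1", 2]}},
    "8": {"class_type": "SaveImage",
          "inputs": {"images": ["7", 0], "filename_prefix": "img2img"}},
}

# Queue it on a locally running ComfyUI instance (default port 8188).
req = urllib.request.Request(
    "http://127.0.0.1:8188/prompt",
    data=json.dumps({"prompt": workflow}).encode("utf-8"),
    headers={"Content-Type": "application/json"})
urllib.request.urlopen(req)

The same graph can of course be built by dragging and connecting nodes in the UI; the dict above is just the serialized form that ComfyUI's /prompt endpoint accepts.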

I hope you like this video.
There is still a lot left to explore in Stable Diffusion.
Let me know in the comment section below what Stable Diffusion videos I should make next.

Follow Me On
Discord: ZooFromAI# 0737

Links to downloads and videos:

Download ComfyUI from here:

You can check ComfyUI examples here:

_Music in this video_
Upbeat Corporate Podcast by Infraction [No Copyright Music] / Marketing:

Comments

I don't have a question, I have a suggestion. In this interface you can have several prompt windows, so we could try using one positive-prompt window to describe the image background and a second positive-prompt window to describe the character, or whatever you want. Then you could leave the background as it originally was and keep adding other characters or objects in different places, without limit. If that works, it would be great. I tried it today, but I could not work out whether the VAE can be connected via "Reroute"; unfortunately it has only one input and one output, so it is not clear what "Reroute" is for. And you can only connect one thing to "SaveImage", unfortunately. Although GEN-2 video generation by Runway is now underway, so things are getting even more interesting.
Thanks for the video.

michail_
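On the two-positive-prompt suggestion above: stock ComfyUI can encode two prompts separately and merge them before the sampler, for example with a ConditioningCombine node (or ConditioningSetArea if the second prompt should be pinned to a region). A rough sketch in the same API format, assuming node "1" is the checkpoint loader from the earlier example and the prompt text is just a placeholder:

# Two positive prompts merged into one conditioning (sketch; node IDs and
# prompt text are placeholders, "1" is assumed to be a CheckpointLoaderSimple).
extra_nodes = {
    "10": {"class_type": "CLIPTextEncode",   # background prompt
           "inputs": {"text": "a misty forest at dawn", "clip": ["1", 1]}},
    "11": {"class_type": "CLIPTextEncode",   # subject prompt
           "inputs": {"text": "a red fox sitting on a rock", "clip": ["1", 1]}},
    # Merge both conditionings; the result feeds the KSampler's "positive" input.
    "12": {"class_type": "ConditioningCombine",
           "inputs": {"conditioning_1": ["10", 0], "conditioning_2": ["11", 0]}},
}

As for Reroute, it is only a pass-through node for tidying long connections (one input, one output, any type), which is why it does not appear to do anything on its own.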

I wonder if it's possible to display the image you are generating from within the workflow space, to make comparison easier?

ateafan

What does the clip strength do on the LoRA node?

coreyhughes
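On the clip strength question: the stock LoraLoader node patches both the diffusion model and the text encoder, with separate weights: strength_model scales how strongly the LoRA modifies the UNet, and strength_clip scales how strongly it modifies the CLIP text encoder. A sketch in the same API format (node "1" is again assumed to be the checkpoint loader; the LoRA file name and values are placeholders):

# LoraLoader patches both MODEL and CLIP; the two strengths are independent
# (sketch; the LoRA file name and strength values are placeholders).
lora_node = {
    "20": {"class_type": "LoraLoader",
           "inputs": {"model": ["1", 0],             # MODEL from the checkpoint
                      "clip": ["1", 1],              # CLIP from the checkpoint
                      "lora_name": "my_style.safetensors",
                      "strength_model": 0.8,         # how strongly the UNet is patched
                      "strength_clip": 0.8}},        # how strongly the text encoder is patched
}
# Downstream nodes should then take MODEL from ["20", 0] and CLIP from ["20", 1]
# instead of directly from the checkpoint loader.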

Thank you for the informative video. Here are some (hopefully) constructive remarks:
- Actually do the things you are talking about in the video and don't try to hide mistakes with cuts. The missing VAE connection in the latent upscaler part that got fixed miraculously is just one example from this video...
- The unrelated image in the VAEDecode node in the latent upscaler is suspicious - how did you achieve that?
- You can see the image in the LoadImage node all the time - just make the node bigger so the image has room to be rendered in...

hornyj
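For context on the latent upscaler part mentioned above, the hires pass is usually wired as: first KSampler, then LatentUpscale, then a second KSampler at a lower denoise, then VAEDecode and SaveImage, and the VAEDecode must be fed the checkpoint's VAE output (the connection the comment refers to). A sketch in the same API format, reusing placeholder node IDs from the earlier examples:

# Latent hires pass (sketch; node "1" = checkpoint loader, "4"/"5" = prompts,
# "6" = first KSampler from the earlier example; sizes and settings are placeholders).
hires_nodes = {
    "30": {"class_type": "LatentUpscale",
           "inputs": {"samples": ["6", 0], "upscale_method": "nearest-exact",
                      "width": 1024, "height": 1024, "crop": "disabled"}},
    # Second sampling pass over the upscaled latent; lower denoise keeps the composition.
    "31": {"class_type": "KSampler",
           "inputs": {"model": ["1", 0], "positive": ["4", 0], "negative": ["5", 0],
                      "latent_image": ["30", 0], "seed": 42, "steps": 20, "cfg": 7.0,
                      "sampler_name": "euler", "scheduler": "normal", "denoise": 0.5}},
    # The VAE input here must come from the checkpoint (or a separate VAELoader);
    # leaving it unconnected is the mistake mentioned above.
    "32": {"class_type": "VAEDecode",
           "inputs": {"samples": ["31", 0], "vae": ["1", 2]}},
    "33": {"class_type": "SaveImage",
           "inputs": {"images": ["32", 0], "filename_prefix": "img2img_hires"}},
}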

Thanks for this video! For some reason the image that results from img2img is a very poor-quality version. He tried to fix it with upscaling, but it didn't work; it just generates an image with larger dimensions but still poor quality.

fito_

The eyes lost a lot of detail in the conversion process, IMO. It's a very interesting process though. I myself have a lot of trouble keeping good eye detail in the output.

zimnelredoran

You never mentioned LoRA. Is LoRA the VAE?

rsunghun

I only understand the part "as well".

nickchalion

Moving from automatic1111 to ComfyUI today

RongmeiEntertainment

Still feels like A1111 generates way better results

atanekoatan