SDXL ComfyUI img2img - A simple workflow for image-to-image (img2img) with the SDXL diffusion model

In this quick episode we build a simple workflow where we upload an image into our SDXL graph inside of ComfyUI and add noise to produce an altered image. I show you how to drop it into a standard workflow, as well as how to adjust it to get as much difference as we would like. This is a simple graph and can be used as a point of departure for your AI art or other Stable Diffusion projects. Note that I don't go overboard with the noise here; you can really push this into another realm if you add noise over 50%, so feel free to experiment!
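The video builds this as a ComfyUI node graph rather than as code, but the underlying idea (re-noise an existing image and let SDXL denoise it, where the amount of added noise controls how far the result drifts from the input) can be sketched with the Hugging Face diffusers library. This is a minimal sketch, not the video's workflow; the model ID, file names, and strength value below are illustrative assumptions:

```python
# Minimal SDXL img2img sketch with diffusers (an alternative to the ComfyUI graph).
# Assumptions: the SDXL base checkpoint from Hugging Face, a local "input.png",
# and a CUDA GPU. Adjust paths and values to your own setup.
import torch
from diffusers import StableDiffusionXLImg2ImgPipeline
from diffusers.utils import load_image

pipe = StableDiffusionXLImg2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
).to("cuda")

init_image = load_image("input.png").resize((1024, 1024))

# "strength" plays the same role as the added noise in the video: low values stay
# close to the input image, values above ~0.5 depart from it much more sharply.
result = pipe(
    prompt="a watercolor painting of the same scene",
    image=init_image,
    strength=0.4,              # push past 0.5 to experiment, as the video suggests
    num_inference_steps=30,
).images[0]

result.save("output.png")
```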

#stablediffusion #sdxl #comfyui #img2img

Grab the SDXL model from here (OFFICIAL): (bonus LoRA also here)

The refiner is also available here (OFFICIAL):

Additional VAE (only needed if you plan not to use the built-in version)
Comments

I just want to say something else that maybe others have missed regarding your excellent tutorials. It's not JUST that they're technically good, it's also that your voice is SO easy to listen to. You could teach all day and it wouldn't be exhausting for the student, I think.

nickuuk

I love that you build your workflows from scratch and explain each node!

TailspinMedia

Thankful that you are repeating the first steps in these videos. It makes it a lot easier to remember.

MTHMN

You are SIMPLY THE BEST!!! Fluent, effortless, snappy, concise, to the point, crystal clear... you name it. Man, you are a godsend 😘😘

Simn

This worked great!

Now I think I know enough to tie multiple previous tutorials together, to get several advantages. I don't know how to disable and reroute certain things easily, so it looks like I'll be maintaining multiple workflows.

spiralofhope

omg I've been working for hours to try to figure out how to integrate this into the refinement workflow from earlier in the playlist.

spiralofhope

Thanks again for your video series here - I think it's very important for those picking up ComfyUI - it has helped me understand workflows I've loaded from the example workflow site. And I really appreciate it when you zoom into a node you're doing something interesting with - too far out and it becomes a compressed blur.

RufusTheRuse

Your tutorials are really great. They convey so much useful info in an easy to digest way. Thanks!

ThoughtFission

I think this is still one of the best and easiest workflows that gives quick results. Thank you!

AlekseyMarin

I would love a tutorial explaining how to create 2 independent prompts and run them together for composition. Method (1): using position & area numbers (such as starting from x0-x300, y0-y200 for prompt 1, etc.). Method (2): allowing a mouse-drawn mask area to be fed with the prompt -- please include previews of the masks being drawn and a preview of the mask combination. I'd really appreciate it. ComfyUI allows for greater composition control, and understanding how to do it will really lean into its strength.

royjones

I just got into image generation, and thank you for your vids. I truly appreciate you sharing your knowledge 👍

kiretan

I would like to see a second part of this img2img video adding a KSampler refiner pass to the image to improve the composition. Thanks for the video and good work!

leogamer

Amazing to see you are back to regular uploads😊

vVinchi

Your tutorials are amazing and incredibly informative. Really hoping you cover SDXL LoRA training and maybe even animation creation using ComfyUI at some point?

Troyificus

Thank you! Really love working this way with SD. You can really fine-tune the process to the graphics card and squeeze every drop out of the hardware.

seancollett

Wow! I love your tutorials; you are the best resource on the ComfyUI topic. I'll grab that "Manager" stuff to get custom nodes.

hleet

It was nice spending the weekend with you, boss. Well, with your videos, anyway. You've got me psyched on ComfyUI.

Two things I hope to see you cover, eventually:

1.) You mentioned in another video that ComfyUI might be used for model training? Yes, please; and

2.) RE: the .yaml tweak for sharing a model folder touches on a larger issue. Play around with AI for a few days, and you end up with Oobabooga, TavernAI, Docker, Jupyter, the WSL, half a terabyte's worth of LLMs, et al., AND competing versions of Python (for X you need 3.10, for Y you need 3.11)... then there are the Condas and CUDAs and-- you get the picture. As methodical and organized as you are, I'm guessing your hard drive is set up logically and efficiently. A peek into the nuts and bolts of your setup would be, as the kids say, amaze-balls...

Your pal,

-jjg

jjgravelle

Great video! Can you share the JSON file for this workflow?

pedroavex

Excellent tutorial, I can't wait to see how far you can go with ComfyUI!

lakislambrianides

Can you help with using ControlNet in ComfyUI?

monbritt