ComfyUI x Fooocus Inpainting & Outpainting (SDXL)


Github

Models

Papers

My Other Videos:

In this video, we dive into the fascinating world of inpainting and outpainting using ComfyUI, showcasing a straightforward and effective approach.

Discover how we leverage the powerful capabilities of the Fooocus patch to transform any SDXL model into a high-performing inpaint model, delivering impressive results for inpainting tasks.

But that's not all! We also take a deep dive into various image preprocessing techniques, exploring their unique applications and understanding how they enhance different aspects of the process.
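To make the patching idea concrete: a Fooocus-style inpaint patch can be thought of as a set of weight offsets merged into a base SDXL checkpoint at load time, rather than a full second model. The sketch below is purely conceptual (it is NOT the actual comfyui-inpaint-nodes code, and the layer name is made up), but it shows the merge operation at its simplest:

```python
# Conceptual sketch (not the real comfyui-inpaint-nodes implementation):
# a Fooocus-style inpaint patch stores weight offsets that are added to
# a base checkpoint's weights, turning it into an inpaint-capable model
# without shipping a full second checkpoint.
import numpy as np

def apply_inpaint_patch(base_weights, patch_offsets):
    """Merge patch offsets into a copy of the base state dict."""
    patched = dict(base_weights)
    for name, offset in patch_offsets.items():
        if name in patched:
            patched[name] = patched[name] + offset  # element-wise add
    return patched

# Toy example with a single hypothetical "layer".
base = {"unet.conv_in.weight": np.ones((2, 2))}
patch = {"unet.conv_in.weight": np.full((2, 2), 0.5)}
merged = apply_inpaint_patch(base, patch)
print(merged["unet.conv_in.weight"][0, 0])  # 1.5
```

Because the base weights are untouched until merge time, the same patch can be applied to any SDXL checkpoint, which is exactly what makes the approach in the video so flexible.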

📈 Timestamps:
0:00 - Introduction
0:36 - Requirements
2:12 - Add Object
3:21 - Fooocus Patch
4:14 - Remove Object
7:20 - Outpainting
8:52 - IPAdapter
10:00 - Change Background
11:15 - End-to-End Example
14:07 - Bonus
16:25 - Outro

My Hardware Specs
GPU: 24GB VRAM
RAM: 32GB
CPU: 24 Core

👍 If this video helped you, do hit that like button, subscribe for more content, and let us know your thoughts in the comments below.
Comments

Thank you, this is what was stopping me from switching from fooocus to comfy.

drksgm

I didn't know about SAMDetector! You legend!!

josephparry

I had a really hard time replicating the effectiveness of Fooocus outpainting within ComfyUI with an SDXL checkpoint. On backgrounds it does almost decently, with some errors, but every time I try to outpaint a character with an SDXL checkpoint it just creates a huge mess: multiple body parts, clothes mixed together, etc. I tried multiple times and I'm not sure what I'm doing wrong, or whether it's a problem within ComfyUI, which seems to have a really hard time outpainting SDXL characters. With another outpainting workflow I used a 1.5 model and the outpainting worked very well; then I just changed the checkpoint to an SDXL one and got disastrous results.
In Fooocus, by contrast, I barely had to do anything: I just loaded my image, selected the side I wanted to expand, and ran the generation without any prompt or any changes at all, and got a really good output except for some hands that weren't well generated, but hands are always hard to get right on the first try. They must use dark magic to make it work so effectively on the first shot without any adjustments whatsoever.

phenix

Hi, very well explained. I have a question about ImageCompositeMasked.
I used it for inpainting, where I masked an area of a picture with the MaskEditor. Unfortunately the mask edges are more visible with the ImageCompositeMasked node than without it.

My problem is that I'm working with an image of myself, and I've realized that while I inpaint a corner of the picture, the eyes change, even though I haven't masked the eyes at all and they are far away from the masked area.
So, like you said, the unmasked region is affected slightly, and I don't know how to fix this problem.

kikoking
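The unchanged-eyes problem above is exactly what a masked composite is meant to solve: outside the mask, the original pixels are copied back verbatim, so unmasked regions cannot drift. The snippet below is a conceptual numpy sketch of that operation (not the node's actual source); a blurred, soft-edged mask makes the seam less visible than a hard 0/1 edge:

```python
# Sketch of what a masked composite does conceptually: outside the mask
# the original pixels are kept exactly, so unmasked regions (e.g. the
# eyes) cannot change. A blurred (soft) mask feathers the seam so the
# hard mask lines described above become less visible.
import numpy as np

def composite_masked(original, inpainted, mask):
    """mask is 0..1; 1 = take the inpainted pixel, 0 = keep original."""
    mask = mask[..., None]  # broadcast the 2-D mask over RGB channels
    return mask * inpainted + (1.0 - mask) * original

orig = np.zeros((4, 4, 3))     # stand-in for the original photo
gen = np.ones((4, 4, 3))       # stand-in for the sampler's output
mask = np.zeros((4, 4))
mask[1:3, 1:3] = 1.0           # only the centre is inpainted
out = composite_masked(orig, gen, mask)
print(out[0, 0, 0], out[1, 1, 0])  # 0.0 1.0
```

If the whole image (eyes included) is still changing, the composite is likely missing or placed before the save node, so the sampler's full output is what gets saved.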

Anyone else having issues with the Load Fooocus Inpaint node, where it displays 'pickle data was truncated'?

Jonchii

Hello, I don't know why, but in "Load Fooocus Inpaint" the Fooocus inpaint patch is not detected, only the Fooocus inpaint head.

Juaktus

Is it possible to add LoRAs? And where do they go?

panonesia

Very interesting, thank you... I have to try it out for better removal of a figure from the background, because I often had the problem you described, where I couldn't remove the figure completely.
Do you have an idea how to do it the other way round? I'm looking for a workflow in which I change backgrounds until I like one, then keep it and go on sampling new figures into that same background. Ideally, instead of using a hand-painted mask to tell the model where to put my figure, I'd let the model choose a good place to integrate the figure (usually a girl).

bobbyboe

For some reason, when I apply the LaMa model for object removal, nothing happens, even though I have the same setup as you.

WhatsThisStickyStuff

Thanks for your video. I was able to get the basic flow working by following the picture on the comfyui-inpaint-nodes GitHub page. I was wondering how to replicate the Fooocus feature to improve a face. I tried reducing the denoise, but that doesn't seem to have the desired effect. If you know how to do this, maybe it's an idea for another video.

michaelbayes

Thanks. Can't I use an image prompt (CPDS, FaceSwap, PyraCanny) and inpaint at the same time in Fooocus?

lilillllii

As others have reported, I'm getting an error on the inpainting mask fill... a size problem. At 7:46 in the video you also got that error. How did you resolve it?

DerekShenk

Using the workflow in the description, I get this error: "Error occurred when executing KSampler:

Allocation on device 0 would exceed allowed memory. (out of memory)
Currently allocated : 5.77 GiB
Requested : 1.22 GiB
Device limit : 8.00 GiB
Free (according to CUDA): 0 bytes
PyTorch limit (set by user-supplied memory fraction)
: 17179869184.00 GiB"

marxdrive
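The out-of-memory error above is unsurprising on an 8 GB card: SDXL is a large model, and the back-of-envelope arithmetic below (parameter count is an approximate, assumed figure, not a measured one) shows that the UNet weights alone eat most of that budget in half precision, before activations, VAE, and text encoders are counted. ComfyUI's --lowvram launch flag, or generating at a smaller resolution, may help in such cases.

```python
# Rough arithmetic on why SDXL inpainting is tight on an 8 GB card.
# The parameter count is an assumed ballpark figure for illustration.
def gib(n_bytes):
    """Convert a byte count to GiB."""
    return n_bytes / 1024**3

unet_params = 2.6e9          # SDXL UNet, approximate
fp16_bytes = 2               # bytes per parameter in half precision
weights = gib(unet_params * fp16_bytes)
print(f"UNet weights alone: ~{weights:.1f} GiB in fp16")  # ~4.8 GiB
```

With ~4.8 GiB of an 8 GiB limit already allocated for weights, a 1.22 GiB activation request failing once other buffers are resident matches the error message closely.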

Is this better than the Fooocus implementation, or the same?

hfoxhaxfox

I need help: I can't find the model (places_512_FullData.pth) in Load Fooocus Inpaint, nor any other model there.
I did download them and placed them in Models_Inpaint.

Why?

Kikoking-yb

I downloaded all the models and nodes, and it gives me this error:
Error occurred when executing INPAINT_LoadFooocusInpaint:
invalid load key, '<'.

magic_number = pickle_module.load(f, **pickle_load_args)


anyone else have this problem?

sppie
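Both "invalid load key, '<'" and the "pickle data was truncated" error mentioned earlier usually mean the .pth file on disk is not a real checkpoint: a leading '<' suggests an HTML error page was saved in place of the model, and truncation suggests an interrupted download. A quick way to tell is to peek at the first bytes of the file; the snippet below is a generic diagnostic sketch (the path you pass it would be your own downloaded model file):

```python
# Diagnose a .pth that fails to unpickle: peek at the first bytes.
# b"<"  -> probably an HTML error page saved instead of the model.
# b"PK" -> zip-based torch checkpoint container, probably intact.
import os

def diagnose_pth(path):
    size = os.path.getsize(path)
    with open(path, "rb") as f:
        head = f.read(4)
    if head[:1] == b"<":
        return "looks like HTML - re-download the model file"
    if head[:2] == b"PK":
        return f"zip-based torch checkpoint, {size} bytes - likely intact"
    return f"unknown header {head!r}, {size} bytes"
```

If it reports HTML or an unexpectedly tiny size, deleting the file and re-downloading it from the model page (rather than saving the link target) typically fixes the error.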

Would it be possible to have your workflow?

ismael

Hi, just wondering, do you have 1-on-1 sessions?

DDBM

In outpainting I get this error: Error occurred when executing INPAINT_MaskedFill:

OpenCV(4.7.0) error: (-209:Sizes of input arguments do not match) All the input and output images must have the same size in function 'icvInpaint'


File "C:\Users\Luna\Documents\ComfyUI_windows_portable\ComfyUI\execution.py", line 152, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)

File "C:\Users\Luna\Documents\ComfyUI_windows_portable\ComfyUI\execution.py", line 82, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)

File "C:\Users\Luna\Documents\ComfyUI_windows_portable\ComfyUI\execution.py", line 75, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))

File "C:\Users\Luna\Documents\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui-inpaint-nodes\nodes.py", line 245, in fill
filled_np = cv2.inpaint(

eltalismandelafe
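The traceback above ends in cv2.inpaint, which requires the image and the mask to have exactly the same height and width; the "Sizes of input arguments do not match" error typically appears when outpainting pads the image but the fill mask keeps the old size. The sketch below shows the shape mismatch and one way to align the mask (a numpy-only illustration, with made-up sizes, not the node's actual fix):

```python
# cv2.inpaint demands image and mask of identical height/width. If
# outpainting padded the image but not the mask, their shapes diverge.
# One fix is to zero-pad the mask up to the image's size.
import numpy as np

def pad_mask_to_image(mask, image_hw):
    """Zero-pad a smaller mask to the image's (height, width)."""
    h, w = image_hw
    out = np.zeros((h, w), dtype=mask.dtype)
    out[:mask.shape[0], :mask.shape[1]] = mask
    return out

mask = np.ones((512, 512), dtype=np.uint8)     # pre-outpaint mask
aligned = pad_mask_to_image(mask, (512, 640))  # image grew by 128 px
print(aligned.shape)  # (512, 640)
```

In the ComfyUI graph the equivalent is making sure the mask fed to the fill node comes from the same pad/outpaint step as the image, so both have the new dimensions.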