Kasucast #22 - Stable Diffusion: High-resolution advanced inpainting ComfyUI (rgthree, IP-Adapter)
#sdxl #comfyui #inpainting #sdxlturbo #stablediffusion #rgthree
I am joining StabilityAI in April 2024. Thanks for all the channel support!
This video covers high-resolution advanced inpainting in ComfyUI. First, I show the differences between low-resolution and high-resolution inpainting. Afterwards, I integrate the Acly inpaint nodes as well as the Fooocus inpaint model patch for SDXL. I also show alternative methods for pre-processing the masked area of the image that don't rely on Acly's inpaint nodes.
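For anyone who wants the underlying idea outside of ComfyUI, here is a minimal Python/Pillow sketch of the crop-and-composite principle behind the high-resolution pass. The inpaint_fn callback is a hypothetical stand-in for whatever inpainting backend you use (for example, the ComfyUI workflow built in this video), not a real API.

from PIL import Image
import numpy as np

def highres_inpaint(image, mask, inpaint_fn, target=1024, pad=32):
    # Bounding box of the masked (non-zero) area, expanded by `pad` pixels.
    ys, xs = np.nonzero(np.array(mask.convert("L")))
    x0, y0 = max(int(xs.min()) - pad, 0), max(int(ys.min()) - pad, 0)
    x1, y1 = min(int(xs.max()) + pad, image.width), min(int(ys.max()) + pad, image.height)

    crop = image.crop((x0, y0, x1, y1))
    crop_mask = mask.convert("L").crop((x0, y0, x1, y1))

    # Upscale the crop so the sampler works near its native resolution.
    scale = target / max(crop.size)
    hi_size = (round(crop.width * scale), round(crop.height * scale))
    result = inpaint_fn(crop.resize(hi_size, Image.LANCZOS),
                        crop_mask.resize(hi_size, Image.LANCZOS))

    # Downscale the result and paste it back only where the mask is set.
    result = result.resize(crop.size, Image.LANCZOS)
    out = image.copy()
    out.paste(result, (x0, y0), crop_mask)
    return out

Compositing back through the mask, rather than pasting the whole crop, is what keeps everything outside the masked area untouched; that is the same reason the video blends the KSampler crop back onto the original image instead of replacing it wholesale.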
Then, I show how to use IP-Adapter attention masking along with the high-resolution mask to transfer clothing. Interspersed throughout the video are rgthree node integrations for workflow debugging and context switching.
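As a rough mental model of why two masks are needed (the attention mask plus the normal inpaint mask), here is a conceptual numpy sketch, not the real IP-Adapter implementation. IP-Adapter adds an extra cross-attention term driven by the reference image, and the attention mask zeroes that term outside the target region; the inpaint mask separately decides which pixels the sampler is allowed to change at all.

import numpy as np

def masked_ip_attention(text_attn_out, image_attn_out, attn_mask, weight=1.0):
    # text_attn_out:  (tokens, dim) result of the usual text cross-attention
    # image_attn_out: (tokens, dim) extra term from the reference-image adapter
    # attn_mask:      (tokens,) 0..1 region mask flattened to the attention resolution
    # Outside the mask the reference image contributes nothing; inside, it is
    # added with the usual IP-Adapter weight.
    return text_attn_out + weight * attn_mask[:, None] * image_attn_out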
Resources:
Time stamps
00:00 Introduction
03:24 Resizing the image
05:04 Checking image size
07:00 Default mask
08:12 Preview bridge
09:23 Canvas tab
11:35 Naive inpainting
13:48 Issues with naive inpainting
15:06 Mask to region
16:34 Cut by mask
19:04 Resizing the crop image
19:47 Resizing the crop mask
20:45 Encoding the masked crop
21:48 Comparing naive and high-resolution inpainting
22:44 Compositing ksampler crop back onto the original image
25:40 Checking the robustness of the high-resolution inpainting workflow
27:32 Image blend by mask
27:56 Image downsampling on the composite
29:30 Acly's ComfyUI inpaint nodes (overview)
30:56 Acly's pre-process workflow
33:20 Fill masked options
34:38 Blur masked area
36:16 Fast inpaint
36:50 Outpainting with SDXL
41:44 Fill masked area (setup) with high-resolution inpainting
43:04 Rgthree nodes introduction
46:04 Fill masked area (visualize)
46:52 Integrating Fooocus patch
49:00 Tensor size mismatch error
50:16 Adding ControlNet depth
51:58 Rgthree bypass node on ControlNet depth
53:00 Adding ControlNet depth to the Fooocus patch workflow
54:06 Fill masked area integration
55:06 Comparing results with Acly pre-process
56:10 Image to mask error + fix
59:02 Fill masked area (blur)
59:36 Alternative blur method
01:00:34 Image composite masked
01:02:08 Fast inpaint
01:03:12 IP-Adapter overview
01:04:12 Removing redundant nodes
01:06:14 Series workflow (fill masked area)
01:07:40 Series workflow (blur masked area)
01:08:22 Group bypass/muter
01:09:12 Series workflow (fast inpaint model)
01:09:42 IP-Adapter crash course
01:11:20 Bypassing the IP-Adapter
01:12:20 Using high-resolution fast inpaint to remove objects
01:13:16 Applying reference image to high-resolution inpainting workflow
01:15:48 Integrating attention masking to the IP-Adapter inpainting
01:16:36 Why do we need double masking (attention + normal)?
01:18:06 Using IP-Adapter with multiple reference images
01:19:32 Addressing the opacity blend issue
01:20:48 Applying ControlNet to IP-Adapter
01:21:22 Adding pre-processing methods to IP-Adapter
01:22:29 Using multiple reference images for IP-Adapter
01:23:44 Compositing images inside ComfyUI (Canvas Tab)
01:26:48 Switch nodes (Comfy Impact)
01:29:16 Rgthree bookmarks
01:30:28 Pad image for outpainting
01:31:12 Integrating image padding into high-resolution inpainting workflow pt.1
01:31:44 pt.2
01:32:56 Outpainting from a bust-up image
01:34:20 Context nodes (rgthree)
01:35:52 Context switch (rgthree)
01:38:32 Replacing switch any with context switch
01:39:52 Toggle control system basics
01:41:40 Toggle between txt-2-img and high-resolution inpainting
01:43:12 Linking state across groups with relay node
01:44:20 Fixing the one-way relay issue
🎉 Social Media:
Images/processes may be fabricated and therefore not real. I am unaware of any illegal activities. Documentation will not be taken as admission of guilt.