ComfyUI IPAdapter V2 style transfer workflow automation #comfyui #controlnet #faceswap #reactor

ComfyUI IPAdapter update
Huge thanks to @latentvision (Matteo), the creator!

In this video we will see how to transfer a style from one image to other images with the help of IPAdapter V2. Since we want the style mainly on the background, we will also work with masks. In addition, we will see how the entire workflow can be automated.
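As an aside on the automation part: the video automates things inside ComfyUI itself, but the same workflow can also be driven from a script through ComfyUI's local HTTP API. The sketch below only illustrates that approach: it assumes a server on the default port 8188, a workflow exported with "Save (API Format)" as workflow_api.json, and a made-up node id "12" for the Load Image node you want to swap per run.

# Minimal sketch: queue an exported ComfyUI workflow over the local HTTP API.
# Assumes ComfyUI runs on 127.0.0.1:8188 and that workflow_api.json was
# exported with "Save (API Format)"; node id "12" is a placeholder.
import json
import urllib.request

def queue_workflow(workflow, server="http://127.0.0.1:8188"):
    data = json.dumps({"prompt": workflow}).encode("utf-8")
    req = urllib.request.Request(server + "/prompt", data=data,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())

with open("workflow_api.json", "r", encoding="utf-8") as f:
    workflow = json.load(f)

# Loop over several style images by rewriting one input of the workflow;
# adjust the node id and input name to match your own export.
for style in ["style_01.png", "style_02.png", "style_03.png"]:
    workflow["12"]["inputs"]["image"] = style
    print(queue_workflow(workflow))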

#comfyui #stablediffusion #ipadapter #mask #automation

workflow

Juggernaut model

IPAdapter GitHub
Comments

Short and to the point, really awesome tutorial. Thanks for sharing!

AlistairKarim

Thank you! Super helpful and just what I was looking for.

ronnykhalil

Great workflow idea, thanks.
I had the same BatchCLIPSeg error others mentioned.
I found that it can be replaced with the CLIPSeg Masking node (WAS Node Suite) plus the ToBinaryMask node (ImpactPack); that seems to function the same.

ptok

Thanks for the informative video! I am getting an error while executing BatchCLIPSeg:

Input and output must have the same number of spatial dimensions, but got input with spatial dimensions of [1, 352, 352] and output size of (832, 1216). Please provide input tensor in (N, C, d1, d2, ..., dK) format and output size in (o1, o2, ..., oK) format.

Can you tell me what I need to fix?

FeyaElena

Fantastic workflow! But... where can I find the BatchCLIPSeg node? The Manager doesn't find it.

dpixelhouse

Another question, what does "Stop_at_clip_layer -1" do?

AlexDisciple

Thanks for this. I get the same error with BatchCLIPSeg: "Input and output must have the same number of spatial dimensions, but got input with spatial dimensions of [352] and output size of [832, 1216]. Please provide input tensor in (N, C, d1, d2, ..., dK) format and output size in (o1, o2, ..., oK) format."

AlexDisciple

At 1:52, where do I get this 1.5 model XL.safetensor and where do I put it? When I have the Load CLIP Vision node open and use the pull-down menu, there is no model XL.

ZergRadio

"Error occurred when executing IPAdapterTiled:

Error(s) in loading state_dict for Resampler:
size mismatch for proj_in.weight: copying a param with shape torch.Size([1280, 1280]) from checkpoint, the shape in current model is torch.Size([1280, 1024])."

:( I've been fighting with this for a week now, can you please help me? I've gotten it all the way to the end of the workflow, but I get that error at the "IPAdapter Tiled" node.

RuinDweller
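Not an official fix, just context on what this message means: a "size mismatch" from load_state_dict says the checkpoint being loaded was saved for a layer with a different shape than the one in the model currently being built, which with IPAdapter usually comes from pairing an IPAdapter model with the wrong CLIP Vision model (e.g. an SDXL adapter with an SD1.5 vision encoder, or the other way round). A toy reproduction with a made-up Resampler class, just to show the mechanics:

# Toy reproduction of the "size mismatch for proj_in.weight" error;
# this is NOT the IPAdapter code, only an illustration of load_state_dict.
import torch.nn as nn

class Resampler(nn.Module):
    def __init__(self, dim_in):
        super().__init__()
        self.proj_in = nn.Linear(dim_in, 1280)

saved = Resampler(dim_in=1280).state_dict()   # weight shaped [1280, 1280]
model = Resampler(dim_in=1024)                # expects weight [1280, 1024]
model.load_state_dict(saved)                  # RuntimeError: size mismatch for proj_in.weight

So checking that the CLIP Vision model and the IPAdapter model in the workflow belong to the same family is usually the first thing to try.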

In the "IPAdapter Tiled" node I don't have the weight option for "Style Transfer (SDXL)" but i have "style transfer" and "style transfer strong." how do i get the "Style Transfer (SDXL)" option?

JoyHub_TV

Please make a video on a background colour matching workflow ❤

Gifttv

Thanks for this amazing tutorial!

I am running into an error. Following your workflow, once I run it I get this error on BatchCLIPSeg:

Error occurred when executing BatchCLIPSeg:

Input and output must have the same number of spatial dimensions, but got input with spatial dimensions of [1, 352, 352] and output size of (832, 1216). Please provide input tensor in (N, C, d1, d2, ..., dK) format and output size in (o1, o2, ..., oK) format.

Any idea how to fix?

ideasinspiration

Unfortunately I got the same error ("Please provide input tensor in (N, C, d1, d2, ..., dK)..."). Any updates?

naderreda

Problem:

Error occurred when executing BatchCLIPSeg:

Input and output must have the same number of spatial dimensions, but got input with spatial dimensions of [1, 352, 352] and output size of (832, 1216). Please provide input tensor in (N, C, d1, d2, ..., dK) format and output size in (o1, o2, ..., oK) format.

File "F:\ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\execution.py", line 151, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)

File "F:\ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\execution.py", line 81, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)

File "F:\ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\execution.py", line 74, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))

File "F:\ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-KJNodes\nodes.py", line 2257, in segment_image
resized_tensor = F.interpolate(tensor, size=(height, width), mode='bilinear', align_corners=False)

File "F:\ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\functional.py", line 3934, in interpolate
raise ValueError(

pedroquintanilla
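For anyone debugging the BatchCLIPSeg error above: the traceback ends in torch.nn.functional.interpolate, which requires the input tensor to carry batch and channel dimensions plus exactly as many spatial dimensions as the requested output size. Spatial dimensions reported as [1, 352, 352] mean the tensor has one dimension too many for a 2-D target like (832, 1216), so either the mask tensor needs its extra dimension squeezed out before resizing or the custom node needs updating. A toy reproduction with random tensors, not the node's actual code:

# Toy reproduction of the interpolate error; not the BatchCLIPSeg code itself.
import torch
import torch.nn.functional as F

good = torch.rand(1, 1, 352, 352)     # (N, C, H, W): two spatial dims
out = F.interpolate(good, size=(832, 1216), mode="bilinear", align_corners=False)
print(out.shape)                      # torch.Size([1, 1, 832, 1216])

bad = torch.rand(1, 1, 1, 352, 352)   # (N, C, D, H, W): three spatial dims
F.interpolate(bad, size=(832, 1216), mode="bilinear", align_corners=False)
# ValueError: Input and output must have the same number of spatial dimensions ...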