ComfyUI: Image to Line Art Workflow Tutorial

This is a comprehensive and robust workflow tutorial on how to set up ComfyUI to convert any style of image into line art for conceptual design or further processing. The workflow uses some unique methodology, including ControlNet LoRA, IP-Adapter, BLIP, combined prompting, and more. It supports batch processing, background removal, upscaling, and post-processing effects such as color removal, gray-shading removal, line thickness, and edge enhancement.
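As a rough illustration of the gray-shading removal step, binarizing pixel values does the core of the work. This is a minimal pure-Python sketch, not the workflow's actual nodes; in a real pipeline the pixel values would come from an image library such as Pillow:

```python
def remove_gray_shading(pixels, threshold=128):
    """Binarize grayscale pixel values (0-255): anything darker than the
    threshold becomes pure black (0), everything else pure white (255).
    This drops mid-gray shading and keeps only the dark line work."""
    return [[0 if p < threshold else 255 for p in row] for row in pixels]

# A 1x3 "image": dark line pixel, light background pixel, mid-gray pixel.
# The mid-gray value (128) is pushed to white, removing the shading.
print(remove_gray_shading([[30, 200, 128]]))  # [[0, 255, 255]]
```

Raising the threshold keeps more of the faint shading as black lines; lowering it produces a cleaner but sparser outline.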

------------------------

Relevant Links:

------------------------

TimeStamps:

0:00 Intro.
1:02 Requirements.
5:45 Nodes Setup Part 1.
12:41 Nodes Setup Part 2.
16:17 Connecting the Nodes.
29:30 Editing Blip, Upscaling.
35:29 Removing Color & Grayscale.
38:56 Randomize Batch Process.
39:57 Fine-Tuning Example.
41:22 Background Removal.
Comments

One of the Channel members requested Batch Processing. We have included two extra workflows which allow you to batch process all images in a folder and hit queue prompt only once. Check out the Channel Members Post for the workflow links.

@ 3:44: Note that the models listing has changed after the latest ComfyUI / Manager update. Download both the ViT-H and ViT-bigG models from "Comfy Manager - Install Models - Search clipvision". Here is the chart of each IP-Adapter model with its compatible ClipVision model.

ip-adapter_sd15 - ViT-H
ip-adapter_sd15_light - ViT-H
ip-adapter-plus_sd15 - ViT-H
ip-adapter-plus-face_sd15 - ViT-H
ip-adapter-full-face_sd15 - ViT-H
ip-adapter_sd15_vit-G - ViT-bigG
ip-adapter_sdxl - ViT-bigG
ip-adapter_sdxl_vit-h - ViT-H
ip-adapter-plus_sdxl_vit-h - ViT-H
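The chart can also be written as a simple lookup table. This is an illustrative sketch in Python; `clipvision_for` is a hypothetical helper, not part of ComfyUI, and the mapping just restates the chart above:

```python
# Map each IP-Adapter checkpoint (per the chart above) to the
# ClipVision encoder it expects.
IPADAPTER_CLIPVISION = {
    "ip-adapter_sd15": "ViT-H",
    "ip-adapter_sd15_light": "ViT-H",
    "ip-adapter-plus_sd15": "ViT-H",
    "ip-adapter-plus-face_sd15": "ViT-H",
    "ip-adapter-full-face_sd15": "ViT-H",
    "ip-adapter_sd15_vit-G": "ViT-bigG",
    "ip-adapter_sdxl": "ViT-bigG",
    "ip-adapter_sdxl_vit-h": "ViT-H",
    "ip-adapter-plus_sdxl_vit-h": "ViT-H",
}

def clipvision_for(adapter_name: str) -> str:
    """Return the ClipVision model a given IP-Adapter checkpoint needs."""
    try:
        return IPADAPTER_CLIPVISION[adapter_name]
    except KeyError:
        raise ValueError(f"Unknown IP-Adapter checkpoint: {adapter_name}")

print(clipvision_for("ip-adapter_sdxl"))  # ViT-bigG
```

Pairing a checkpoint with the wrong encoder is a common cause of the tensor size-mismatch errors reported below.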

controlaltai

Hey, just wanted to say thanks for putting these videos out! I've checked out a bunch of tutorial channels, and yours is honestly the best. No clickbait, just straight-up fun and interesting stuff. Really appreciate it – keep up the awesome work! Can't wait to see what's next.

deniansouza

I had the "The size of tensor a (3) must match the size of tensor b (9) at non-singleton dimension 0" error too; after reinstalling the BLIP checkpoint everything works now, thank you very much. Awesome workflow!

freakern

Thank you very much for your work. In my opinion, you have the best tutorials! Clear, competent, without unnecessary detail, yet everything is easy to follow! Thank you!

VovaYus

Quite an elegant workflow. Thank you, I'll keep it in mind📝

danilsi

It is a very smartly designed workflow and was very helpful, thanks!

hakandurgut

I hope the translation is correct: thank you very much for this workflow, I've been looking for it on the net for a while. Many thanks. I subscribed to your YouTube channel. Continue...

wolftot

This is such a powerful workflow, I really appreciate it. And what amazing support you give to your members! I couldn't get the workflow to work, but you really stuck with me and gave me a solution within just a few hours of my reporting it. Amazing!

timfox

Can you make a tutorial on how to turn a photo into a flat vector illustration? :)

marcososa

Great lesson! If the final image still has some black-filled areas (not gray), how can I make it outline only? Thanks.

EricYang-ck

This works really well. However, I only have a GTX 1660 Super with 6GB VRAM and one render takes 70 minutes! I put the Sampler Steps down as low as 6 using DDIM/DDIM Uniform and got the time down to around 27 minutes. Still get a very usable result at 6 steps. Not even gonna try an upscaler in this workflow.

joeduffy

Hi, thanks for the video. I'm getting the following error:

Error occurred when executing Zoe-DepthMapPreprocessor:

CUDA error: operation not supported
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.

I have the device set to cpu on BLIP Model Loader. I run ComfyUI with ZLUDA on an AMD 6800XT. Any help is appreciated!

lepontRL

I got this error: "The operator is not currently implemented for the MPS device." It stopped on Zoe Depth Map :(

marcososa

This was AMAZING! Thank you VERY much!!!

garystanding

I'm getting an error: "Error occurred when executing IPAdapterAdvanced: The size of tensor a (257) must match the size of tensor b (577) at non-singleton dimension 1"... any ideas/suggestions on how to get this running properly? I've followed all the advice on updating IPAdapter to IPAdapterPlus and have everything configured (as far as I know) correctly...

Jason-c-ig

Thank you for making this. I've been looking for ages for something like this for turning photos into engravings. But I am about to give up and can't figure out what is wrong.
I googled and searched around but found no fix. It gives an IPAdapter error. Suggestions?

Error occurred when executing IPAdapterApply:
Error(s) in loading state_dict for ImageProjModel:
size mismatch for proj.weight: copying a param with shape torch.Size([8192, 1280]) from checkpoint, the shape in current model is torch.Size([8192, 768]).
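A size mismatch like this usually means the IP-Adapter checkpoint was trained for a different base model or ClipVision encoder than the one currently loaded, so its projection weights cannot be copied in. The check that fails boils down to a shape comparison; this is an illustrative sketch, with `shapes_compatible` a hypothetical helper and the shapes taken from the error message above:

```python
def shapes_compatible(checkpoint_shape, model_shape):
    """Return True when a checkpoint tensor can be copied into a model
    parameter, i.e. when every dimension matches exactly."""
    return tuple(checkpoint_shape) == tuple(model_shape)

# The error above: the checkpoint's proj.weight is (8192, 1280), but the
# currently loaded model expects (8192, 768), so loading the state dict
# fails with a size-mismatch error. Loading a checkpoint that matches the
# base model (see the compatibility chart in the pinned comment) avoids it.
print(shapes_compatible((8192, 1280), (8192, 768)))  # False
```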

titanitis

I've got this working reasonably well on my computer. Thanks! Is there a way to use other models, such as Flux, with this workflow, or would that break all of the other nodes?

LanceT.

I am facing this error, RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cpu and cuda:0!
(when checking argument for argument mat1 in method wrapper_CUDA_addmm)

How should I solve this problem?

amirk

Hi, thanks for the great workflow.

For me the workflow uses a lot of resources and Google Colab keeps kicking me out even though I bought credits; unfortunately I only have 12 GB of RAM on Colab, and the workflow always exceeds it.

What hardware specification do you use to run this workflow without problems?

tedteddy

I really like your line-art generation and many other clips. You are awesome.
Is there a way ComfyUI can convert a realistic portrait photo into a painterly, watercolor, or anime image without distorting the person's facial features? Could you build such a workflow, please? I wonder how beautiful a painting ComfyUI can really make from a portrait photo.

suthamchindaudom