LLM in ComfyUI Tutorial



Comments

Sebastian, you are the gold standard for AI creators. Top notch. IDK how, but you keep getting exponentially better with each upload.

michaelkircher

Thanks, this works really well with wildcard processor to feed the text into it.

lordlucifer

It's funny how I think of things that would be good or helpful in the AI world, and then BOOM, you have a new tutorial video on exactly that thing!! I've been thinking about how to do this for a while... perfect that it's bolted right into ComfyUI!! Great video! Up and running immediately... It's kind of a pain that the text can't really be edited without cutting and pasting it into a regular prompt window, but that's not on you, friend! 5 by 5! You earned FiDolla!! Thank you!

GenoG

Thanks. I already use Ollama and Florence in ComfyUI. This LLM is a nice resource-efficient alternative.

matze

Thanks, appreciate the local and cloud option recommendations for those without the fancy hardware!

Cu-gpfy

Can Searge LLM be used in img2img for Flux? I want an LLM model that can read my input image and generate a prompt for img2img.

VaiTag

Hello Sebastian, is there an alternative method to incorporate a positive prompt (clip text encoder) into this workflow to enhance the visual output?

RodrigoAGJ

How did you add the height/width INT node in purple with "control_after_generate"? Is that a special node that you need to install from the ComfyUI Manager? I keep seeing it in samples but cannot find it.

kritikusi-

I wonder if you ran into the llama.dll error and how you resolved it. There is no resolution or fix on the GitHub page for that node.

bahethelmy

You can do more with the Long-CLIP node for ComfyUI; it extends the token length from 77 to a maximum of 248.

alg

Now I just hope this makes its way into Forge.

ElSarcastro

Will you be doing a video on animation in Flux using ComfyUI? Most of the tutorials I've seen are using external websites, rather than a local machine.

jonathanzeppa

Tutorial on how to create the thumbnail pic? It's gorgeous!

MustRunTonyo

Does Fooocus do something similar when expanding your prompts?

eduardmart

Flux loves long prompts? I am always cutting my prompts shorter and shorter until I stop getting this weird error: "RuntimeError: stack expects each tensor to be equal size, but got ..." I can't figure out what it means, but shortening the prompt a little usually fixes it; if not, shortening it some more does.

hsuan

What app did you use? ComfyUI? Why doesn't my ComfyUI look like yours?

SyamsQbattar

These two Searge nodes are a great addition. I integrated them into a workflow with one Flux LoRA + flux1-dev-Q8_0.gguf + t5-v1_1-xxl-encoder-Q8_0.gguf, and it runs at 5 s/it, about 1.25 min to generate. Thank you.

xyy

Lovely, thanks for sharing! BTW, how'd you get that pretty little workflow icon in the sidebar?

ronnykhalil

So I'm still missing this... CheckpointLoaderNF4 - where is this?

SimpleTechAI

You should do another Seb Ross Discord weekly challenge video, but this time with Flux. I really enjoyed those.

Alchete