Supercharging ComfyUI: Integrating LLMs with Ollama for Advanced and Enhanced Prompts

Unlock the full potential of your ComfyUI prompts by seamlessly integrating Large Language Models (LLMs) using Ollama DIRECTLY inside your workflow!

LLMs are powerful tools that can completely change the way you work with prompts.
In this video we will walk through the process step-by-step, showing how to set up Ollama easily and choose the right LLM for your needs.
We will see how to define a good instruction for the LLM so it can create exactly what you are looking for.
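Under the hood, an Ollama node in a ComfyUI workflow essentially posts the instruction and the rough prompt to Ollama's local REST API and gets the enhanced prompt back. A minimal sketch of that round trip, assuming a locally running Ollama server on its default port (the model name and instruction below are placeholders, not the exact ones from the video):

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint

def build_payload(model, instruction, user_prompt):
    """Bundle the system instruction and the user's rough prompt
    into the body Ollama's /api/generate endpoint expects."""
    return {
        "model": model,
        "system": instruction,   # how the LLM should behave
        "prompt": user_prompt,   # the rough idea to expand
        "stream": False,         # return one complete response
    }

def enhance_prompt(model, instruction, user_prompt):
    """Send the payload to a locally running Ollama server and
    return the generated (enhanced) prompt text."""
    data = json.dumps(build_payload(model, instruction, user_prompt)).encode()
    req = urllib.request.Request(OLLAMA_URL, data=data,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

Calling `enhance_prompt("llama3", "You expand short ideas into detailed image prompts.", "a cat in space")` requires `ollama serve` to be running and the model already pulled (e.g. `ollama pull llama3`).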

P.S. Don't forget to like, subscribe, and share the good vibes ❤️

00:00 Intro
00:25 Setting up Ollama
01:55 Choosing LLM models
03:28 It's alive... alive!
06:05 The Ollama nodes
10:10 Instructing the LLM
11:50 Plugging in the first prompt
13:47 Advanced instructions
17:10 Use LLM to define an LLM persona
20:00 It can see!
22:37 Final thoughts

#generativeart #ai #aiart #texttoimage #theaIart #the-ai-art #comfyui #stablediffusion #generativeai #art #image #compare
#tutorial #training #checkpoint #face-detailer #face #ollama #llm #flux
Comments

Thank you so much. This was easy to follow and worked perfectly for me. A very cool addition to my workflow!

Schnoidz

WOW!! So fascinating, I’ve liked and subscribed immediately.

SebAnt

Excellent and clear explanation. Thank you.

aitor

Heyy, since the very first days of LLMs and Midjourney, I have been using a very long, specific instruction set to create my prompts. With the latest developments in models like Flux and a workflow like the one you demonstrated, I will be able to create my design bundles automatically as open-source art generators come just a bit closer to Midjourney =)

Thank you for the video mate, keep it up

Best

atahanacik

A really nice tutorial, not too much info and not lacking either. As far as I can tell my directions were similar to yours, but my generated prompts frequently include instructions to the prompter instead of a nice clean prompt. E.g., "Title:" followed by the title, "Prompt:" followed by the prompt, "Recommendations:" followed by recommendations, etc. Maybe I've been running Ollama too long; I was running Ollama on other projects prior to ComfyUI. I'll see how it goes tomorrow with a fresh start.
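One hedged workaround for the labeled-output problem this comment describes: post-process the LLM's reply and keep only the body of the "Prompt:" section, discarding "Title:" and "Recommendations:" sections. A minimal sketch, assuming the labels are the ones mentioned above (extend the pattern for any others your model emits):

```python
import re

def extract_prompt(reply: str) -> str:
    """Strip 'Title:'/'Recommendations:' style sections from an LLM reply,
    keeping only the body of the 'Prompt:' section (or the whole reply
    if no labels are present)."""
    # Split the reply into alternating label/body chunks, case-insensitively.
    sections = re.split(r"(?mi)^(Title|Prompt|Recommendations)\s*:\s*", reply)
    if len(sections) == 1:   # no labels at all: the reply is already clean
        return reply.strip()
    # re.split keeps captured labels at odd indices; pair them with their bodies,
    # normalizing label case so lookups are stable.
    pairs = {label.capitalize(): body
             for label, body in zip(sections[1::2], sections[2::2])}
    # Prefer the Prompt section; otherwise fall back to the longest body.
    body = pairs.get("Prompt") or max(pairs.values(), key=len)
    return body.strip()
```

Wiring this between the Ollama node's output and the text encoder (e.g. via a small custom node or script) keeps stray headings out of the conditioning text.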

bwheldale

Liked, subscribed. Awesome stuff. Thanks. 😄

evolv_

Thank you! Gonna play with this one and see what works; the render process is really fast!!!

yiluwididreaming

Try Joy Caption; it's far superior to anything you were demonstrating for the image captioning part.

CharlesLijt

Hi. I've been using ComfyUI for a while now; maybe you could record the process of repairing old photos? Settings, etc.? If you already have such a video (I haven't checked), maybe a link to it? I know there are such videos, but I would honestly rely on your way of explaining the processes.
Thanks

pawelthe

Thanks for the video, it's great! I see at minute 8:16 that you use the llama3-70b model. All models work for me except the 70b; do you know why that could be? Do you have to do something special, any extra configuration, to make them work in ComfyUI?

eltalismandelafe

I made a similar node, and integrated Llava as well, you can check the ollama one by the author Fairy-Root

fairyroot

Hi, where do the models get saved? I don't see them in the ComfyUI/models path; are they somewhere else?

aneelramanath

Sadly, with FLUX on a 4090 you get the Ollama prompt and then must shut down Ollama as you go over 24GB, because Ollama keeps the model in memory. Everything is fp8 in Comfy and there's just not enough VRAM.
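If the VRAM pressure comes from Ollama keeping the model resident, one option worth trying before shutting Ollama down entirely is the `keep_alive` field in Ollama's generate API, which controls how long the model stays loaded after a request; `0` asks Ollama to unload it immediately. A hedged sketch (endpoint and field as documented in Ollama's API; the model name is only an example):

```python
import json
import urllib.request

def build_unload_payload(model, prompt):
    """Request body for /api/generate that asks Ollama to free the
    model from memory as soon as the response is returned."""
    return {
        "model": model,
        "prompt": prompt,
        "stream": False,
        "keep_alive": 0,   # 0 = unload immediately after responding
    }

def generate_and_unload(model, prompt,
                        url="http://localhost:11434/api/generate"):
    """One completion from a local Ollama server, after which the model
    is unloaded, freeing VRAM for the diffusion model."""
    data = json.dumps(build_unload_payload(model, prompt)).encode()
    req = urllib.request.Request(url, data=data,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

The trade-off is that every prompt pays the model-load time again, but that may beat running out of VRAM mid-generation.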

generalawareness

8:20 No matter what I did, the model downloaded correctly but is not shown here. What is the solution?

mr.entezaee

Wondering if this will work with the GGUF nodes for Flux.

INVICTUSSOLIS

I have Python according to the Manager but am unable to find the Show Text node. Any suggestions?

maindokontorora

6+ minutes with Flux to generate one image... Without the LLM it takes about 40s (RTX 4080).

vasilybodnar

You don't need Ollama anymore; there's already a node that calls out to an LLM. Exciting stuff!

BabylonBaller

Hello, I'm Tess from Digiarty Software. Interested in a collab?

shirleywang

Thanks for the video, interested in more LLM content within ComfyUI.

sven