Using LLMs in ComfyUI with Ollama

We show you how you can use LLMs (Large Language Models) in ComfyUI workflows.

Download the workflows

TXT2IMG

IMG2IMG

Sources
IF_AI Git:

Ollama:
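
For anyone curious what the LLM step is actually doing, here is a minimal sketch (outside ComfyUI) of asking a local Ollama server to turn a short idea into a detailed Stable Diffusion prompt. The port 11434 and the model name "llama3" are assumptions; the IF_AI nodes used in the workflows wrap a similar request inside ComfyUI.

```python
# Minimal sketch: expand a short idea into a detailed Stable Diffusion prompt
# with a local Ollama server. Assumptions: Ollama is running on its default
# port 11434 and a model named "llama3" has been pulled.
import requests

OLLAMA_URL = "http://localhost:11434/api/generate"

def expand_prompt(idea: str) -> str:
    """Ask the local LLM to rewrite a short idea as a detailed SD prompt."""
    payload = {
        "model": "llama3",   # assumed model name; use whichever model you pulled
        "prompt": (
            "Rewrite the following idea as a single, detailed Stable Diffusion "
            f"prompt and output only the prompt itself: {idea}"
        ),
        "stream": False,     # return one complete JSON response instead of a stream
    }
    response = requests.post(OLLAMA_URL, json=payload, timeout=120)
    response.raise_for_status()
    return response.json()["response"].strip()

if __name__ == "__main__":
    print(expand_prompt("a cozy cabin in a snowy forest at dusk"))
```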

Join and Support me
Support me on Patreon: / aifuzz
Let’s be Instagram friends: / aifuzz1
Discord: Coming soon
Comments

Really great video and topics and explanation. Great job. Was hesitating to get Ollama but now I have a solid use case :)

alecubudulecu

Very complete tutorial, comprehensive, with the workflow provided, thanks

frankclifestyle

Nice wf - really cool results from something so simple! Thank you. Helping me to wrangle the newly released SD3!

JackTorcello

Thank you, great tutorial, very well explained. On the image-to-prompt, if you leave it empty it automatically creates SD prompts, but you can also type a question to ask something about the picture. ❤

impactframes

The presentation is a bit "all over the place", but hey, this is new technology. So who better to learn from than a young person? Keep it coming!

goodieshoes

"That's what she said" LOL 😂

AlexanderGarzon

May I know where to get the search bar for nodes? Do I need to install anything? Also, what do c+ and c- do? Any tips?

frankclifestyle

Great exposition. Tested with Llama 3; sometimes it puts too much poetry into the prompts fed to the SD model, which seems to produce very soft, blurred, dreamy images (perhaps preferable for some), depending on the profile selected. The best results in my case (impressionist painting style output) come from selecting Cortana or Yuka as the profile, and choosing painting and impressionist (obviously) for embellish and style respectively. Thanks a lot for the tutorial.

heiferTV

You sound like an AI that was trained to speak with mistakes to sound natural lol

handsomelyhung

A big thank you for this tutorial.
In general it gives excellent results, but unfortunately on my NVIDIA 3070 it is extremely slow: 20 seconds for one 768×1024 image with the LCM model.
For image-to-prompt, the rendering time is 18 seconds, but when I try to do it a second time it freezes.

philippeheritier
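
Following up on the image-to-prompt tip in the comments: below is a minimal sketch of the same idea done directly against Ollama's HTTP API. The port 11434, the "llava" model name, and the fallback instruction are assumptions for illustration, not the exact behaviour of the IF_AI node.

```python
# Minimal sketch: image-to-prompt via Ollama's HTTP API. Assumptions: Ollama is
# running on port 11434 and a vision-capable model such as "llava" has been
# pulled; the default instruction below is illustrative, not the node's exact text.
import base64
import requests

OLLAMA_URL = "http://localhost:11434/api/generate"

def describe_image(image_path: str, question: str = "") -> str:
    """If `question` is empty, ask for an SD-style prompt; otherwise ask the question."""
    with open(image_path, "rb") as f:
        image_b64 = base64.b64encode(f.read()).decode("utf-8")

    prompt = question or (
        "Describe this image as a detailed Stable Diffusion prompt, covering "
        "subject, style, lighting and composition."
    )
    payload = {
        "model": "llava",        # assumed vision model name
        "prompt": prompt,
        "images": [image_b64],   # Ollama accepts base64-encoded images for vision models
        "stream": False,
    }
    response = requests.post(OLLAMA_URL, json=payload, timeout=180)
    response.raise_for_status()
    return response.json()["response"].strip()

# Leaving the question empty mimics the "automatic SD prompt" behaviour
# mentioned in the comments above; pass a question to ask about the picture.
print(describe_image("photo.png", question="What colours dominate this image?"))
```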