Stable Diffusion ComfyUI Married With Ollama LLM - A Streamlined Prompting Workflow

In this video, we're excited to introduce you to a revolutionary AI image and video creation process using Ollama and ComfyUI. With the power of large language models and Stable Diffusion, you can now bring your ideas to life like never before.

If you like tutorials like this, you can support our work on Patreon:

Say goodbye to complicated steps and multiple tools. Ollama streamlines the workflow, allowing you to generate stunning visuals, immersive animations, and engaging stories all in one place.

In this video, we'll walk you through the process of setting up Ollama on your local machine, downloading large language models, and using the custom node "IF Prompt To Prompt" from "ComfyUI IF AI Tools" to generate prompts for Stable Diffusion image generation.
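If you just want to sanity-check the local setup, here is a minimal sketch (not the exact script from the video) that asks the Ollama server which models are already downloaded. It assumes Ollama's default local endpoint at http://localhost:11434; the models themselves are downloaded from the command line with ollama pull.

import json
import urllib.request

OLLAMA_URL = "http://localhost:11434"  # Ollama's default local endpoint

def installed_models():
    # Ask the local Ollama server which models are already downloaded (GET /api/tags).
    with urllib.request.urlopen(f"{OLLAMA_URL}/api/tags") as resp:
        data = json.load(resp)
    return [m["name"] for m in data.get("models", [])]

try:
    print("Ollama is running. Installed models:", installed_models())
except OSError:
    print("Could not reach Ollama - start it with `ollama serve` or the desktop app.")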

We'll show you how to connect the custom node with Ollama, create workflow templates, and even generate different styles of images using Stable Diffusion and CLIP Vision. Plus, we'll compare the IPAdapter approach with the Stable Diffusion method to help you understand the differences and choose the right approach for your projects.
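To give a rough idea of what the prompt node is doing behind the scenes, here is a simplified sketch (not the node's actual code; the model name and instruction wording are illustrative): it sends a short idea to the local Ollama API and gets back an expanded, Stable Diffusion style prompt. Inside ComfyUI, the custom node wires this up for you, along with styles and image input.

import json
import urllib.request

OLLAMA_URL = "http://localhost:11434"  # default local Ollama endpoint

def expand_prompt(idea, model="llama3"):
    # POST /api/generate: ask the model to rewrite a short idea as a detailed SD prompt.
    payload = {
        "model": model,  # illustrative; use whichever model you pulled
        "prompt": f"Rewrite this idea as a detailed Stable Diffusion prompt: {idea}",
        "stream": False,  # return the whole response at once instead of streaming
    }
    req = urllib.request.Request(
        f"{OLLAMA_URL}/api/generate",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["response"]

print(expand_prompt("a flying car over a neon city at night"))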

Whether you're a beginner or an experienced creator, this video is packed with valuable insights and practical tips to enhance your AI image and video creation process.

Don't miss out on this opportunity to unlock your creativity and take your projects to the next level with Ollama and ComfyUI. Watch the video now and start creating breathtaking visuals today!
Comments

😀 Hi, thank you so much for making this video and letting more people know about the IF_AI_tools custom node. It is really well explained; I put a link to this video on the repo. Thank you, and please stay tuned, I will be adding more features soon.

impactframes

This is super cool! Just tested it out and it's working. I'm amazed!

igor_timofeev

Great tutorial, thanks! I'd love to see more new ideas like this for combining image generation with LLMs.

kalakala

Great video! I haven't used ComfyUI, but this might get me to try it; that looks amazing!

unknownuser

Can you do a video about using LLaVA nodes to batch-output image captions for training, while nudging the model to look at specific aesthetic elements in the dataset images, like lighting or photography style?

___x__x_r___xa__x_____f______

Hello, I followed your tutorial closely, but I received this error message at the step where the backend is started with the ollama serve command.

Error: listen tcp bind: Only one usage of each socket address (protocol/network address/port) is normally permitted.

Can you tell me what the problem is?

Dehsf

It is interesting. How does it compare to using BLIP and WD14 captioning? Those have been working pretty well for me.

xcom

We went from "you can still do art, just type the prompt" to "it writes the prompt for you" real fucking quick.

toothpastesushi

Blender/Maya with Ollama and ComfyUI = pipeline?

MilesBellas

Hi, thank you for sharing this video. I'm intrigued by the idea of customizing an Ollama model to use my own vocabulary. I wonder if it's feasible to refine the model to generate images based solely on the words I provide. Do you think this is achievable?

Mayssus-qpjy

There is a problem: the Load Images (Path) node does not connect to the IF Image to Prompt node. Is there any way to solve this?

EvgenyCh-thdc

Is this the same as IF_prompt_MKR for Automatic1111, Forge, etc.? If so, will you do a tutorial on it?

TheColonelJJ

I use ComfyUI on Lightning AI. Is it possible to connect it with a local Ollama?

luckypenguin

How do you install the models? This is the third video I've watched, and people skip how to install the model.

shareeftaylor

How can I save the generated prompt for later use, and avoid the reload time, given that Ollama is reloaded every time an image is generated?

laoAA-eskg

I wonder if you can incorporate LM Studio instead... I imagine it's just a path and a port?

GrantLylick

Question:

I got Ollama installed and running with the script I grabbed off your Patreon page, and it all seems to work (thank you for that). I've downloaded the models, but I'm not getting the same results when it analyzes the image. I put in a picture of a plane parked on a runway with a stormy sky in the background, and the analysis of the image consistently isn't even in the zone; it keeps thinking it's a skyscraper, or a lake setting, or a circuit board, etc., pretty much everything other than what's in the image. Any ideas why?

teealso

Just curious, how was the video with the flying cars achieved? I've been unable to make anything like that.

teealso

How long did it take to make those videos? Were they made in one process?

datrighttv