CrewAI: AI-Powered Blogging Agents using LM Studio, Ollama, JanAI & TextGen

🌟 Welcome to an exciting journey into the world of AI-powered blogging! 🌟

In today's video, I take you through a comprehensive tutorial on using CrewAI to create a multi-agent system for blog post creation, all running locally on your computer. We explore integrating various open-source large language models through tools like LM Studio, Jan AI, Ollama, and Text Generation Web UI, showcasing their power in content generation. 🤖💡
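
All of these tools can expose an OpenAI-compatible server on localhost, which is how CrewAI reaches them in the video. Below is a minimal sketch of that wiring, with the caveat that exact imports and parameters vary by CrewAI version; the base URL, model name, and agent/task details are placeholder assumptions you would swap for your own setup.

```python
# A minimal sketch of the wiring discussed in the video, NOT the exact code from it.
# Assumptions: a CrewAI version that accepts a LangChain ChatOpenAI client, and
# LM Studio's local server running at its default address.
from crewai import Agent, Task, Crew
from langchain_openai import ChatOpenAI

# Any OpenAI-compatible local server works here (LM Studio, Ollama, Jan AI,
# Text Generation Web UI); swap base_url and model for your own setup.
local_llm = ChatOpenAI(
    base_url="http://localhost:1234/v1",  # LM Studio's default endpoint
    api_key="not-needed",                 # local servers generally ignore the key
    model="local-model",                  # placeholder model identifier
)

researcher = Agent(
    role="Researcher",
    goal="Collect the key points for a blog post about local AI agents",
    backstory="You research topics and summarize the essentials.",
    llm=local_llm,
)

writer = Agent(
    role="Writer",
    goal="Turn the research notes into a readable blog post",
    backstory="You write clear, engaging technical blog posts.",
    llm=local_llm,
)

research_task = Task(
    description="Gather the main points about running AI agents locally.",
    expected_output="A bullet list of key points.",
    agent=researcher,
)

write_task = Task(
    description="Write a short blog post based on the research notes.",
    expected_output="A blog post of a few paragraphs.",
    agent=writer,
)

# Run the agents in sequence and print the final result.
crew = Crew(agents=[researcher, writer], tasks=[research_task, write_task])
print(crew.kickoff())
```

To use Ollama, Jan AI, or Text Generation Web UI instead, point base_url at whichever local server you are running and set model to the name it reports.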

Key Highlights:
Introduction to CrewAI and its capabilities.
Step-by-step guide on integrating multiple open-source language models.
Detailed instructions on setting up and using tools like LM Studio, Jan AI, Ollama, and more.
Insights into creating and managing a group of AI agents for efficient blog post creation.
Tips and tricks for optimizing your AI blogging experience.

Timestamps:
0:00 - Introduction to CrewAI for Blogging
0:33 - Setting Up the Environment
1:06 - Integrating LM Studio and Other Tools
3:00 - Starting the Multi-Agent System
4:19 - Assigning Tasks to Agents
5:02 - Running the Crew
6:00 - Debugging and Fine-Tuning
7:01 - Comparing Open Source and OpenAI Models
8:00 - Final Thoughts and Tips

👨‍💻 Subscribe for more AI-related content, and don't forget to hit the like button to support the channel. Your engagement helps make videos like this available to a wider audience!

📣 Stay Connected:
Follow me on social media for more updates and AI insights.
Join our community forum for discussions and exclusive content.

🔗 Useful Links:

#crewai #local #agents #ai #aiagents #autonomousagents #aiagent #autogenlocal #autoai #crew #opensource #opensourceagents #opensourcellmagents #llmagents #crewailmstudio #crewaiollama #crewaitextgenwebui #crewaitextgenerationwebui #lmstudio #janai #crewaijanai #jan #crewailocal #crewprivate #crewaiprivate #crewailocalllm
Comments

For anyone watching this video now: Ollama also has a concurrency mode, so you can run different LLMs at the same time.

Automan-AI

I've been meaning to get back to your videos for a while; they're super great, you give a lot in a short time. Thank you very much :) We are going to continue testing open models and good prompts because there is no money for closed models :(

SonGoku-pcjl

What a video - it is like 7 videos in one 🤯
I learned so much just from this single tutorial, Mervin.

HyperUpscale

Hi, and thanks for sharing this video.

Could you consider making a video that demonstrates a setup using LLaVA and local Stable Diffusion SDXL through an API, for running a local LLM with CrewAI/AutoGen applications?

joxxen

More AutoGen examples, please! Especially when the next upgrade to AutoGen Studio drops! Exclamation marks, that means thank you!

ronnetgrazer

Hi Mervin, is it possible to integrate an LLM and Stable Diffusion through the ComfyUI API as well?

benjaminlaw

I think LM Studio can use LLaVA now, right? Can you give an example with agents?

nufh

Great video. Is there a way to train/fine-tune the next agent on the output of the previous LLM so it does not forget the previous answers, or would that take too long? Could you store each output in a vector database and create embeddings that each LLM can access? Would that stop the output data from getting shortened?

-un

Hi, I'm a big fan.
Could you add examples to your videos using APIs from Together AI, Mistral, Hugging Face, or any other open-source LLM API provider?

DobleaArias

I guess it is possible to run Ollama in a Colab notebook. Have you tried Mixtral, for example, to generate blog posts?

basicvisual

Hey man, I was wondering what the minimum PC specs are to run this environment locally?

yandelyano

Just a question, Mervin:
Wouldn't it be easier to do this using AutoGen Studio by creating different agents? I'm fairly new to this so I'm most probably wrong 😅

surajthakkar

Thank you for your great content!
What do you think of using Gemini Pro for this crew?

omountassir

Hi, I am getting an "APIConnectionError: Connection error" while running the final kickoff command. Does anyone know a quick fix for this?

harshkondkar

Which LLM resembles OpenAI's functionality the most? Meaning I won't get as many BS answers compared to the other models. I'm on an M1 Mac (if the architecture makes a difference). I've been hearing about Mistral.

I’m at a point where the OpenAI bills will be too high. I’m investing in locally run LLMs nowadays.

autonate_ai

I wonder what 7B-parameter LLM would work well for this use case. Maybe OpenHermes, Starling, or SOLAR 10.7B?

figs

I seem to be running out of memory on my M1 Mac. Do you think running one of the models, let's say Jan AI, on a different machine would work?

rgm

Aren't Ollama and LM Studio replacements for each other? Why have both?

wryltxw

Mervin - great videos. Have you found any tools that enable agents to access a full codebase?

javi_park

Thank you! Can you now put a Gradio interface on this, please? Show us how.

NetZeroEarth