Coding Local AI Agent in Python (Ollama + LangGraph)

In this video we build our own custom local AI agent in Python using Ollama and LangGraph.

Timestamps:
(0:00) Intro
(2:21) Preview of AI Agent
(7:46) Environment Setup
(14:50) Coding AI Agent
(43:21) Fixing Bugs
(48:45) Result
(49:56) Outro
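The agent pattern the video builds (an LLM node that may request a tool call, with the graph looping tool results back to the model until it produces a final answer) can be sketched without the real libraries. The snippet below is a minimal stand-in, not the video's code: `fake_llm` replaces the Ollama model, and the `while` loop plays the role of LangGraph's conditional edge between the model node and the tool node. All names are illustrative.

```python
# Minimal stand-in for a LangGraph-style tool-calling loop.
# `fake_llm` is a placeholder for a real Ollama chat model.

def calculator(expression: str) -> str:
    """A toy tool the agent can call."""
    return str(eval(expression, {"__builtins__": {}}))

TOOLS = {"calculator": calculator}

def fake_llm(messages):
    """Stand-in model: requests the calculator once, then answers."""
    if not any(m["role"] == "tool" for m in messages):
        return {"tool": "calculator", "args": "2 + 3"}   # request a tool call
    result = [m for m in messages if m["role"] == "tool"][-1]["content"]
    return {"answer": f"The result is {result}"}          # final answer

def run_agent(question: str) -> str:
    """Loop: call the model; if it asks for a tool, run it and feed the result back."""
    messages = [{"role": "user", "content": question}]
    while True:
        reply = fake_llm(messages)
        if "answer" in reply:
            return reply["answer"]
        output = TOOLS[reply["tool"]](reply["args"])
        messages.append({"role": "tool", "content": output})
```

With the real stack, `fake_llm` would be a `ChatOllama` call and the loop would be a compiled LangGraph graph, but the control flow is the same.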
Comments

Amazing video! I was wondering whether you could do a video like this but with a more advanced flow, maybe using an MCP server instead of traditional tool calling.
I've been thinking for a while about how to leverage these small models to build complex systems without needing an API (just open-source models on consumer hardware). Do you have any ideas on how to achieve that?
One approach I have in mind:
the first LLM just creates a list of suggestions for what could be done (it does the whole <think>...</think> process),
then the flow continues as in the video (an LLM does the actual work of calling the tools, etc.).

This might produce fewer errors.
I'd like to know your take on that.

mulham.
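The two-stage flow suggested in the comment above (a "planner" model that only proposes steps, feeding an "executor" model that actually calls tools) can be sketched as follows. Both models are placeholder functions here; in practice each would be a separate Ollama chat call, and the step-matching logic is purely illustrative.

```python
# Sketch of a planner/executor split: one model proposes steps,
# another carries them out with tools. Both models are stubs here.

def planner(question: str) -> list[str]:
    """Stand-in for a reasoning model: emits a step list instead of acting."""
    return [f"look up data for: {question}", "summarize the findings"]

def executor(step: str, tools: dict) -> str:
    """Stand-in for a tool-calling model: runs the first tool whose name matches."""
    for name, fn in tools.items():
        if name in step:
            return fn(step)
    return f"done: {step}"

def run_pipeline(question: str, tools: dict) -> list[str]:
    """Planner output drives the executor, one step at a time."""
    return [executor(step, tools) for step in planner(question)]

results = run_pipeline("local AI agents", {"look up": lambda s: f"notes on {s}"})
```

One possible upside of this split is that the planner's output is inspectable before any tool runs, which may catch bad plans early; the cost is a second model invocation per request.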

Thanks. Why/how is this different from MCP, and when would you use one over the other?

danielcoles

Great stuff!
Trying this with my Gmail account, but there's an auth failure; not yet sure how to fix it.

bald_ai_dev

Does anyone know if I'll be able to do this on an M1 MacBook Air with 8 GB of RAM?

salmanmaarouf

Thanks for your content! Let me know: what IDE are you using?

andreyg

Web devs who use AI are amateurs at best 🤣🤪😂💯👍👍👍

tibimutasunta