How I Build Local AI Agents with LangGraph & Ollama

Chapters
Introduction: 00:00
Integrating Ollama (Theory): 01:00
Python Code Walkthrough: 05:31
vLLM (my preferred method): 13:35
Thoughts on Ollama for Agents: 16:19
Comments

Bro, your tutorials are GOLD. You are literally the only one I've seen on the platform breaking it down like this. You fking ROCK 🤘

NoCodeFilmmaker

The way you handle software engineering principles is absolutely amazing!

ramishelh

Hi! I just wanted to say a huge thank you for the incredible work you’re doing and the knowledge you’re sharing. Your videos are full of inspiring content and valuable information that really help with personal development. I appreciate your effort and dedication. Thanks a lot, and I can’t wait for more great content!

tommayhew

I have massive ADHD and for some reason your minimal chill communication style is easy to listen to and your visuals are super helpful. Thanks for putting these together!

carktok

Your method is leaps and bounds better than most. I enjoy your tutorials very much.

unveilingtheweird

Amazing tutorial. I just tested your app using my RTX 4090 and Llama3.1:8b. The results were impressive, and latency was OK considering it's running locally. I also tried Llama3.1:70b and it worked great, but it was too slow running locally. Llama 3.1 looks like a game changer for local LLM apps.

RazorCXTechnologies

Just wanted to say again, great content, mate. As a self-taught/teaching AI engineer/programmer/content creator, your content is an incredible resource and inspiration. Keep it coming!

ZacMagee

Awesome. I would like to see how your unique approach works when incorporating an Ollama embedding model + vector store.

CUCGC
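
For anyone who wants to experiment with that combination, here is a minimal sketch of pairing an Ollama embedding model with a local vector store. It assumes `langchain-community` and `faiss-cpu` are installed and that `nomic-embed-text` has been pulled in Ollama; the sample documents are placeholders.

```python
# Minimal sketch: Ollama embeddings + a local FAISS vector store.
# Assumes `pip install langchain-community faiss-cpu` and
# `ollama pull nomic-embed-text` have been run.
from langchain_community.embeddings import OllamaEmbeddings
from langchain_community.vectorstores import FAISS

embeddings = OllamaEmbeddings(model="nomic-embed-text")

# Index a few placeholder documents, entirely locally.
store = FAISS.from_texts(
    ["LangGraph builds agent workflows as graphs.",
     "Ollama serves local LLMs over a simple HTTP API.",
     "vLLM is an alternative high-throughput inference server."],
    embedding=embeddings,
)

# Retrieve the closest chunks for a query.
for doc in store.similarity_search("How do I run a local LLM?", k=2):
    print(doc.page_content)
```

Since LangChain vector stores share a common interface, swapping FAISS for another store is a small change.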

How is this approach different from just using the ChatOllama instance from LangChain? Doesn't that handle everything on the backend?

IkshanBhardwaj
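
For context, here is roughly what the ChatOllama route the question mentions looks like, as a minimal sketch (assuming `langchain-community` is installed and llama3.1:8b has been pulled locally):

```python
# Minimal sketch of the ChatOllama route asked about above.
# Assumes `pip install langchain-community` and a running Ollama
# server with llama3.1:8b pulled.
from langchain_community.chat_models import ChatOllama

llm = ChatOllama(model="llama3.1:8b", temperature=0)

# ChatOllama handles the HTTP call to Ollama's backend for you.
response = llm.invoke("In one sentence, what is LangGraph?")
print(response.content)
```

The wrapper does handle the backend; the difference comes down to how much control you want over the raw request.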

Fantastic and inspiring. At the end of your video you also answered a question I had regarding smaller LLMs and hardware restrictions.

Active-AI

The only AI channel that is actually helpful.

lLvupKitchen

I really like the content you produce, man! Keep it up! Cheers!

akmlatc

I love your implementation. I’ll modify my pull request to use your Ollama implementation and resubmit for the SearXNG feature. I’ll try and follow your style to select between SearXNG and Serper.

ManjaroBlack

This makes so much sense... I designed a small workflow (no agents involved) to parse some tabular text data and do some reasoning on each row. I used Llama 3 8B. It worked OK, but every few rows the response would not come back in the correct format. Sometimes one of the main headers would come back with a typo. The solution I found was to catch the errors and re-run the function when they occurred. Not ideal, of course, but it did the trick since it was a small job. Now I understand it may just be that these smaller models are not reliable when you need to work with structured responses...

vispinet
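
The catch-and-retry workaround described above is a common pattern with small local models. Here is an illustrative sketch; the `generate_row_summary` helper, the `llm` object, and the JSON keys are all hypothetical, and it assumes a LangChain-style chat model whose `.invoke()` returns a message with `.content`:

```python
# Sketch of the catch-and-retry pattern: validate that a small local
# model returned parseable JSON, and re-run on failure.
import json

def generate_row_summary(llm, row: str, max_retries: int = 3) -> dict:
    prompt = (
        "Summarize this row as JSON with keys 'header' and 'reasoning'. "
        f"Row: {row}"
    )
    for _ in range(max_retries):
        raw = llm.invoke(prompt).content
        try:
            return json.loads(raw)  # succeeds only if the format is valid
        except json.JSONDecodeError:
            continue  # malformed output from the small model: re-run
    raise ValueError(f"No valid JSON after {max_retries} attempts")
```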

Hey, just curious: why not use the LangChain wrappers for the Serper API and the Ollama API?

landob
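
For reference, the Serper wrapper the question mentions can be used as below; a minimal sketch, assuming `langchain-community` is installed and a Serper API key is available (the key value here is a placeholder):

```python
# Minimal sketch of LangChain's Serper wrapper.
# Assumes `pip install langchain-community` and a valid Serper API key.
import os
from langchain_community.utilities import GoogleSerperAPIWrapper

os.environ.setdefault("SERPER_API_KEY", "your-key-here")  # placeholder

search = GoogleSerperAPIWrapper()
print(search.run("latest LangGraph release"))
```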

If you're still struggling like I do as a non-coder, upload the entire video and code to Gemini 1.5 Pro and ask it for what you want, like how to integrate OpenRouter; it will do everything, explain it more simply, and update the code.

jarad

Great content and presentation, thank you. I would really like to see a workflow that uses local models to generate components of the output and then any one of the non-local models to synthesize the final output. A Neo4j knowledge graph for shared memory between agents would be an amazing next step.

Haiyugin
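
The first half of that idea can be sketched without the knowledge-graph memory, which would be a larger build. The draft/synthesis split, the aspect list, and the model choices below are all illustrative assumptions, not anything shown in the video:

```python
# Rough sketch of the hybrid idea: local models draft components,
# a hosted model synthesizes the final answer.
from langchain_community.chat_models import ChatOllama
from langchain_openai import ChatOpenAI  # requires OPENAI_API_KEY

local = ChatOllama(model="llama3.1:8b")
remote = ChatOpenAI(model="gpt-4o")

question = "Compare vLLM and Ollama for serving agents."

# Stage 1: cheap local drafts for each component of the answer.
drafts = [
    local.invoke(f"Write brief notes on '{aspect}' for: {question}").content
    for aspect in ("throughput", "ease of setup", "tool support")
]

# Stage 2: one remote call to synthesize the final output.
final = remote.invoke(
    "Synthesize these notes into a single answer:\n\n" + "\n\n".join(drafts)
)
print(final.content)
```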

Honestly this was a little over my head and I didn't fully grasp everything you said.

I've only been programming for a total of 3 months, and Python for less than a month. As a beginner programmer, what are your thoughts on just writing Python scripts and using plain Python logic to try to pass responses and prompts between Ollama endpoints?

Like I said, I'm a beginner, so maybe I'm missing something. Do I need to have LangChain involved to just mess around like that?

banalMinuta
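
To the question above: LangChain is not required just to mess around. Ollama exposes a plain HTTP API, so prompts and responses can be chained with nothing but the `requests` library. A minimal sketch, where the model name and endpoint assume a stock local Ollama install:

```python
# You don't need LangChain to experiment: Ollama exposes a plain HTTP API.
import requests

def ask_ollama(prompt: str, model: str = "llama3.1:8b") -> str:
    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": model, "prompt": prompt, "stream": False},
        timeout=120,
    )
    resp.raise_for_status()
    return resp.json()["response"]

# Chaining with plain Python: feed one response into the next prompt.
summary = ask_ollama("Summarize what a vector store does in one sentence.")
follow_up = ask_ollama(f"Given this summary: {summary}\nName one use case.")
print(follow_up)
```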

Hats off, my friend 👏🙏🎩 We would like you to dedicate a video to using CrewAI within LangGraph ❤

free_thinker

Hi! For me, the best Ollama model for structured output so far has been `codestral` (22B; if it matters, it has a non-commercial license). I agree, we are not there yet with those SLMs. Maybe later, Nov-Dec this year.

mihaitanita
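
On structured output specifically: Ollama's generate endpoint accepts a `"format": "json"` option that constrains the model to emit valid JSON, which helps smaller models like the one recommended above. A minimal sketch using `codestral`; the prompt and keys are illustrative:

```python
# Sketch of coaxing structured output from an Ollama model.
# Ollama's /api/generate accepts "format": "json" to constrain output.
import json
import requests

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "codestral",  # the model the comment recommends
        "prompt": "Return a JSON object with keys 'name' and 'purpose' "
                  "describing LangGraph.",
        "format": "json",  # ask Ollama to emit valid JSON only
        "stream": False,
    },
    timeout=120,
)
data = json.loads(resp.json()["response"])
print(data)
```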