Python Advanced AI Agent Tutorial - LlamaIndex, Ollama and Multi-LLM!

Interested in AI development? Then you are in the right place! Today I'm going to be showing you how to develop an advanced AI agent that uses multiple LLMs.

🎞 Video Resources 🎞

⏳ Timestamps ⏳
00:00 | Video Overview
00:42 | Project Demo
03:49 | Agents & Projects
05:44 | Installation/Setup
09:26 | Ollama Setup
14:18 | Loading PDF Data
21:16 | Using LlamaParse
26:20 | Creating Tools & Agents
32:31 | The Code Reader Tool
38:50 | Output Parser & Second LLM
48:20 | Retry Handle
50:20 | Saving To A File

Hashtags
#techwithtim
#machinelearning
#aiagents
Comments

You are one of the best explainers I've encountered in 50 years of listening to thousands of people explain thousands of things. Also, it's raining and thundering outside while I'm creating this monster; I feel like Dr. Frankenstein.

.MHz

I wanted to express my gratitude for the Python Advanced AI Agent Tutorial - LlamaIndex, Ollama and Multi-LLM! This tutorial has been incredibly helpful in my journey to learn and apply advanced AI techniques in my projects. The clear explanations and step-by-step examples have made it easy for me to understand and implement these powerful tools. Thank you for sharing your knowledge and expertise!

bajerra

If you keep getting timeout errors and happen to be using a somewhat lackluster computer like me, changing `request_timeout` in these lines

from llama_index.llms.ollama import Ollama  # import path may differ in older llama-index versions

llm = Ollama(model="mistral", request_timeout=3600.0)
...
code_llm = Ollama(model="codellama", request_timeout=3600.0)

to a larger number helped me out (3600.0 seconds is one hour; a response usually takes only about ten minutes, but the headroom prevents the timeout). Thanks for the tutorial!

AlexKraken

Some helpful things when going through this:
- Your Python version needs to be <3.12. I had to downgrade mine to 3.11.
- I'm on a Mac, so I needed to open Xcode to accept the terms and conditions, then reset xcode-select with "sudo xcode-select -r" to get the llama_cpp_python wheel to build.
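The version constraint above can be checked up front, before any installs fail. A minimal sketch (the function name is mine, and the 3.8 lower bound is my assumption, not from the comment):

```python
import sys


def tutorial_compatible(version=None):
    """Return True if the interpreter is in the range the tutorial's
    dependencies (notably llama-cpp-python wheels at the time) expect:
    at least 3.8 (assumed floor) and strictly below 3.12."""
    major, minor = (version or sys.version_info)[:2]
    return (3, 8) <= (major, minor) < (3, 12)


if __name__ == "__main__":
    if not tutorial_compatible():
        print(f"Python {sys.version_info[0]}.{sys.version_info[1]} detected; "
              "consider downgrading to 3.11 before installing.")
```

Running this once at the top of the project script saves a confusing wheel-build failure later.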

briancoalson

You are by far my favorite tech educator on this platform. Feels like you fill in every gap left by my curriculum and inspire me to go further with my own projects. Thanks for everything!

samliske

Error 404 Not Found (localhost /api/chat) [FIX]
If anyone else gets an error like that when trying to run the codellama agent, just run the codellama model in a terminal to download it; it did not download automatically for me, at least, as he does around 29:11.

So, similar to what he showed at the start with Mistral:
ollama run mistral

You can run this in a new terminal to download codellama:
ollama run codellama

_HodBuri_

I have never found anyone that explains code and concepts as well as you. Thank you for everything you do, it really means a lot♥♥

valesanchez

Great video. Would really like to see methods that didn't involve reaching out to the cloud but keeping everything local.

ftjemc

Excellent demo! I liked seeing it built in VS Code with loops, unlike many demos that are done in Jupyter notebooks and can't run this way.
Regarding more demos like this… yes!! I could most definitely learn a lot from more, and more advanced, LlamaIndex agent demos. It would be great to see a demo that uses their chat agent and maintains chat state for follow-up questions. Even more advanced and awesome would be an example where the agent asks a follow-up question if it needs more information to complete a task.

seanbergman

No idea what’s going on but I love falling asleep to these videos 😊

beautybarconn

Just used your code with llama 3, and made the code generator a function tool, and it was fvcking awesome. Thanks for sharing👍🏻

techgiantt

I was really looking forward to learning this. Thanks for the video!

Batselot

Thank you for this very informative video. I really like the capabilities of LlamaIndex with PDFs.
I used it to process several of my own medium-sized PDFs and it was quick and accurate.
It would be great to have another video on how to save and reuse the VectorStore for queries
against PDFs that have already been processed. To me this is even more important than the code generation.
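For anyone wanting this before a follow-up video exists: recent llama_index releases expose persistence via `index.storage_context.persist(persist_dir=...)` and `load_index_from_storage(...)` (check the docs for your installed version, since import paths have moved between releases). As a self-contained illustration of the same save-once, query-many pattern, here is a toy vector store persisted to JSON; all names and the cosine scoring are mine, not from the video:

```python
import json
import math
import tempfile
from pathlib import Path


def save_index(vectors, path):
    """Persist a {doc_id: embedding} mapping to disk as JSON."""
    Path(path).write_text(json.dumps(vectors))


def load_index(path):
    """Reload a previously persisted index without re-embedding anything."""
    return json.loads(Path(path).read_text())


def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0


def query(index, query_vec, top_k=1):
    """Return the top_k doc ids ranked by cosine similarity to query_vec."""
    ranked = sorted(index.items(),
                    key=lambda kv: cosine(kv[1], query_vec),
                    reverse=True)
    return [doc_id for doc_id, _ in ranked[:top_k]]


if __name__ == "__main__":
    store = {"intro.pdf": [1.0, 0.0], "methods.pdf": [0.0, 1.0]}
    with tempfile.TemporaryDirectory() as tmp:
        path = Path(tmp) / "index.json"
        save_index(store, path)      # done once, after parsing the PDFs
        reloaded = load_index(path)  # done on every later run
        print(query(reloaded, [0.9, 0.1]))  # -> ['intro.pdf']
```

With the real library, the persist/load calls replace the save/load functions here, and the embeddings come from your embedding model instead of being hand-written.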

davidtindell

Amazing as always, Tim. Thanks for spending the time to walk through this great set of tools. I'm looking forward to trying this out with data tables and PDF articles on parsing these particular data sets to see what comes out the other side. If you want to take this in a different direction, I'd love to see how you would take PDFs on how different parts of a system work and their troubleshooting methodology and then throw functional data at the LLM with errors you might see. I suspect (like other paid LLMs) it could draw some solid conclusions. Cheers!

ChadHuffman

Great work Tim, you hit the nail on the head: what puts people off is the downloading. Putting everything into a requirements file is a great idea.

martin-xqte

Great vid. The only issue is that the parsing is done externally; for RAGs ingesting sensitive data this would be a major issue.

vaughanjackson

This was fascinating, I'm definitely going to be giving it a whirl! I'd love to learn how something like this could be adapted to write articles using information from our own files.

garybpt

Wow, this is absolutely mind-blowing, thanks Tim.

ravi

"If I fix these up." My god, Tim. You know that won't scale.

equious

This is very clear and very instructive, so much valuable information! Thanks for your work

jorgitozor