Building and deploying multi-agent RAG systems in 2024 with LlamaIndex

This is an introduction to LlamaIndex and in particular its agentic capabilities, covering:
* What LlamaIndex is, including:
  * The open-source frameworks in Python and TypeScript
  * The LlamaParse service for parsing complex documents for LLMs
  * The LlamaCloud service for end-to-end enterprise RAG
  * LlamaHub, a free library of data connectors, LLM adapters, embedding models, vector stores, agent tools and more
* Why you should use it, including how it helps you:
  * Build faster
  * Eliminate boilerplate
  * Avoid early pitfalls
  * Get into production
  * Deliver business value
* What it can do, with code examples (sketched below, after this list) of how to build:
  * Retrieval-augmented generation (RAG)
  * World-class parsing
  * Agents and multi-agent systems
Comments

No matter how I set it up, creating a LlamaIndex app doesn't work for me. So many errors, and all the tutorials are outdated. Is there a Discord or something I can go to for help? I'm starting to feel like this build needs an update to function.

HM-gmkn

I don't understand why you would need a message queue between agents. Agents should not be microservices. It makes the system slower and harder to debug.

cagdasucar

Are you reading your entire presentation from slides? That's ridiculous.

mihaelacostea