Ep67: RAG Basics (Part 1): Why It Fails with LLMs

AI models keep hallucinating—but why? In this episode of Machine Learning Made Simple, we break down why RAG (Retrieval-Augmented Generation) often fails and what the latest research says about fixing it.

We’ll cover:

✅ The limitations of naïve RAG models
✅ Why Contriever & Dense Retrieval improved things
✅ How RePlug & RAG Fusion enhance retrieval
✅ The problems with vector databases
✅ The future of RAG and how AI retrieval is evolving
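To make the dense-retrieval idea behind Contriever-style models concrete, here is a minimal sketch: embed the query and the passages as vectors, then rank passages by cosine similarity. The 3-d vectors below are toy stand-ins for a real encoder's output, and `retrieve` is a hypothetical helper, not an actual Contriever API.

```python
import numpy as np

def cosine_sim(a, b):
    # Cosine similarity between two vectors.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def retrieve(query_vec, passage_vecs, k=2):
    # Rank passages by similarity to the query embedding; return top-k indices.
    scores = [cosine_sim(query_vec, p) for p in passage_vecs]
    return sorted(range(len(passage_vecs)), key=lambda i: scores[i], reverse=True)[:k]

# Toy 3-d "embeddings" standing in for a real encoder's output.
query = np.array([1.0, 0.2, 0.0])
passages = [np.array([0.9, 0.1, 0.0]),   # semantically close to the query
            np.array([0.0, 1.0, 0.5]),   # unrelated
            np.array([1.0, 0.3, 0.1])]   # also close
print(retrieve(query, passages))  # → [0, 2]
```

This same nearest-neighbor-by-similarity step is what a vector database performs at scale, which is also where its limits (covered later in the episode) come from.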

If you’ve ever wondered why LLMs still struggle with retrieval, this is the deep dive you need!

00:00 Introduction
02:15 YOLO E – The Next Step in Object Detection
09:00 Naïve RAG – Why Early Retrieval Failed
11:52 Contriever: The Shift to Dense Retrieval
14:40 RePlug: Enhancing Queries for Smarter Retrieval
17:50 RAG Fusion: The Next Evolution of RAG
20:55 Vector Databases & The Limits of RAG
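RAG Fusion is commonly described as issuing several rewritten variants of the user's query, retrieving a ranked list for each, and merging the lists with reciprocal rank fusion (RRF). A minimal sketch of the merging step, with toy document ids (the query-rewriting stage is omitted):

```python
from collections import defaultdict

def reciprocal_rank_fusion(rankings, k=60):
    # rankings: ranked lists of document ids, best first, one list per query variant.
    # Each appearance contributes 1/(k + rank); higher fused score ranks higher.
    scores = defaultdict(float)
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] += 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

# Ranked results for three rewritten variants of the same query (toy ids).
runs = [["d1", "d2", "d3"],
        ["d2", "d1", "d4"],
        ["d2", "d3", "d1"]]
print(reciprocal_rank_fusion(runs))  # → ['d2', 'd1', 'd3', 'd4']
```

Documents that rank well across several query variants (like `d2` here) float to the top, which is the intuition behind RAG Fusion's robustness to any single bad query formulation.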

📩 Stay Connected & Never Miss an Update!

🚀 Join the Conversation

🎧 Listen on the Go!

📺 Watch the Full Podcast on YouTube!

💡 Enjoyed this content? Support us by:
👍 Liking this video to support AI research discussions
🔔 Subscribing to our channel for more cutting-edge AI insights

📣 We Want to Hear From You!
💬 Share your thoughts in the comments below—what excites or concerns you most about AI-powered automation?

#ArtificialIntelligence #MachineLearning #Python #MicrosoftAI #AIAutomation #AITutorial #TechNews #Podcast