Chunking in RAG (Retrieval Augmented Generation) with hands-on in LangChain and LlamaIndex

Retrieval Augmented Generation, or RAG, is becoming the go-to approach for addressing the shortcomings of LLMs, such as hallucinations and the training-data cut-off. In this video of the RAG series, we look at chunking: splitting the input text into pieces before it is ingested into the vector database used in the RAG pipeline.
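As a quick taste of what the video covers, the simplest strategy is fixed-size chunking with overlap. The helper below is a minimal dependency-free sketch (not the video's exact code, and not the LangChain/LlamaIndex splitters themselves, which the video demonstrates):

```python
def fixed_size_chunks(text: str, chunk_size: int = 50, overlap: int = 10) -> list[str]:
    """Split text into fixed-size character chunks, where each chunk
    shares its last `overlap` characters with the start of the next one."""
    step = chunk_size - overlap  # how far the window advances each time
    return [text[i:i + chunk_size] for i in range(0, len(text), step)]

# Example: a 120-character string yields overlapping 50-character windows.
chunks = fixed_size_chunks("".join(str(i % 10) for i in range(120)))
```

Real splitters (e.g. LangChain's `CharacterTextSplitter`) add refinements such as splitting on separators and measuring length in tokens rather than characters, but the sliding-window idea is the same.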

Hope it's useful.

⌚️ ⌚️ ⌚️ TIMESTAMPS ⌚️ ⌚️ ⌚️
0:00 - Intro
0:13 - RAG refresher
1:04 - Ingestion in RAG
1:27 - What is Chunking?
2:05 - Why Chunking?
4:06 - Fixed-Size Chunking
7:15 - Recursive Chunking
10:18 - Document / Code Chunking
12:17 - Semantic Chunking
16:40 - Conclusion
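Of the techniques in the timestamps above, recursive chunking is the idea behind LangChain's `RecursiveCharacterTextSplitter`: try coarse separators first (paragraphs), and only fall back to finer ones (lines, words, characters) when a piece is still too large. The function below is a simplified pure-Python illustration of that idea, not the library's actual implementation:

```python
def recursive_split(text, chunk_size, separators=("\n\n", "\n", " ", "")):
    """Recursively split text on progressively finer separators until
    every chunk fits within chunk_size characters."""
    if len(text) <= chunk_size:
        return [text]
    sep = separators[0]
    rest = separators[1:] if len(separators) > 1 else separators
    if sep == "":
        # Last resort: hard character-level split.
        return [text[i:i + chunk_size] for i in range(0, len(text), chunk_size)]
    chunks, current = [], ""
    for part in text.split(sep):
        candidate = current + sep + part if current else part
        if len(candidate) <= chunk_size:
            current = candidate  # keep packing parts into the current chunk
        else:
            if current:
                chunks.append(current)
            if len(part) > chunk_size:
                # This piece alone is too big: recurse with finer separators.
                chunks.extend(recursive_split(part, chunk_size, rest))
                current = ""
            else:
                current = part
    if current:
        chunks.append(current)
    return chunks
```

Note how paragraph boundaries are respected whenever possible, which is why recursive chunking usually produces more semantically coherent chunks than the fixed-size approach.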

WHO AM I?
I am a Machine Learning researcher/practitioner who has seen the grind of both academia and start-ups. I started my career as a software engineer 15 years ago. Because of my love for Mathematics (coupled with a glimmer of luck), I graduated with a Master's in Computer Vision and Robotics in 2016, just as the AI revolution was starting. Life has changed for the better ever since.

#machinelearning #deeplearning #aibites