RAG with LangChain, Ollama Llama 3, and HuggingFace Embeddings | Complete Guide

In this video, I'll show you how to create a powerful Retrieval-Augmented Generation (RAG) system using LangChain, Llama 3, and HuggingFace Embeddings. You'll learn how to make a Large Language Model (LLM) understand and answer questions about a complex PDF document.

➡️Chapters:
0:00 Introduction
2:25 Setting up the Environment
5:25 Setting Up LangChain
7:20 Chunking Large Documents
11:40 Vector Stores
12:30 HuggingFace Embedding
14:32 Retrieval QA
17:10 Run the chain
18:58 Conclusion
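The chapters above cover chunking a large document, storing chunks in a vector store, and retrieving relevant chunks for the LLM. As a rough, dependency-free sketch of those two core steps (the video uses LangChain's text splitter plus HuggingFace embeddings; plain word overlap stands in for real embedding similarity here, and the document text is made up for illustration):

```python
import re

def chunk_text(text, chunk_size=80, overlap=20):
    """Split text into overlapping character windows, roughly what a
    character-based text splitter does for a large PDF."""
    step = chunk_size - overlap
    return [text[i:i + chunk_size] for i in range(0, len(text), step)]

def retrieve(chunks, question, k=1):
    """Rank chunks by shared words with the question. A real vector store
    compares embedding vectors instead of raw words."""
    q = set(re.findall(r"\w+", question.lower()))
    return sorted(chunks,
                  key=lambda c: len(q & set(re.findall(r"\w+", c.lower()))),
                  reverse=True)[:k]

# Toy "document" standing in for the PDF used in the video.
doc = ("LangChain loads the PDF and splits it into chunks. "
       "HuggingFace embeddings turn each chunk into a vector. "
       "The vector store finds chunks similar to the question, "
       "and Llama 3 via Ollama writes the final answer.")

chunks = chunk_text(doc)
best = retrieve(chunks, "Which model writes the final answer?")[0]
print(best)  # the chunk mentioning the final answer
```

In the actual pipeline, the retrieved chunks are passed to Llama 3 (served by Ollama) as context, which is what a LangChain RetrievalQA chain wires together for you.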

🔔 Subscribe for more tutorials and hit the notification bell to stay updated with the latest content!

🔗 Links

#langchain #ollama #llama3 #RetrievalAugmentedGeneration #huggingface
Comments

Please subscribe to Bitfumes channel to level up your coding skills.
Do follow us on other social platforms:

Bitfumes

Basic example, but good explanation 🙏 Thanks

TechnicalSeta

Great teaching method.
How would it be possible to use Docker and FastAPI in this project? I'm trying, but I don't know where to start.
Thanks for the video!!

matrodrig

Why not use Ollama and an open-source embeddings model and be fully local?

DanielBowne

Hey, I'm actually very new to all this software and AI stuff, so I wanted to know: what is that environment you are using at 2:25? Is it a software I need to download, or some kind of Python terminal? BTW, I appreciate the simplicity of your explanation... really helpful for complete beginners.

freefirepokeants

So do you need to replace the \n\n pattern if it is there in the document?

tribukh

Hi, can you please tell me what your Python version is?

hefshineshivamnitinkhatavk