RAG from Scratch with Llama 3.1 | Build Chatbot with Custom Data, Groq API, Sqlite-vec and FastEmbed


Learn how to build a simple RAG system without external libraries like LangChain and LlamaIndex

👍 Don't Forget to Like, Comment, and Subscribe for More Tutorials!

00:00 - Why build RAG from scratch?
01:21 - Google Colab Setup
02:10 - sqlite-vec
04:52 - Add custom data to the database
06:54 - Create document embeddings
09:36 - How vectors are stored in the database
11:20 - Similar document search
13:36 - Build components for our RAG
17:38 - Asking the chatbot about our custom data
21:57 - Conclusion
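
The chapters above amount to a short pipeline: embed the custom data with FastEmbed, store and search the vectors with sqlite-vec, and have Llama 3.1 on Groq answer from the retrieved context. Below is a minimal sketch of that flow, assuming the bge-small embedding model, an in-memory database, sample documents, the prompt wording, and the Groq model id; none of these details are taken from the video itself.

# RAG sketch: FastEmbed for embeddings, sqlite-vec for vector storage/search, Groq for generation.
# Model names, table layout, sample data, and prompt wording are assumptions for illustration.
import sqlite3
import sqlite_vec
from sqlite_vec import serialize_float32
from fastembed import TextEmbedding
from groq import Groq

documents = [
    "Our store is open Monday to Friday, 9am to 6pm.",
    "Refunds are accepted within 30 days with a receipt.",
]

# 1. Create document embeddings (bge-small-en-v1.5 produces 384-dimensional vectors).
embedder = TextEmbedding("BAAI/bge-small-en-v1.5")
doc_embeddings = list(embedder.embed(documents))

# 2. Store the vectors in a sqlite-vec virtual table.
db = sqlite3.connect(":memory:")
db.enable_load_extension(True)
sqlite_vec.load(db)
db.enable_load_extension(False)
db.execute("CREATE VIRTUAL TABLE vec_docs USING vec0(embedding float[384])")
for i, emb in enumerate(doc_embeddings, start=1):
    db.execute(
        "INSERT INTO vec_docs(rowid, embedding) VALUES (?, ?)",
        (i, serialize_float32(emb.tolist())),
    )

# 3. Retrieve the documents most similar to the question.
question = "When are you open?"
q_emb = list(embedder.embed([question]))[0]
rows = db.execute(
    "SELECT rowid, distance FROM vec_docs WHERE embedding MATCH ? ORDER BY distance LIMIT 2",
    (serialize_float32(q_emb.tolist()),),
).fetchall()
context = "\n".join(documents[rowid - 1] for rowid, _ in rows)

# 4. Ask the model, grounded in the retrieved context.
client = Groq()  # expects GROQ_API_KEY in the environment
completion = client.chat.completions.create(
    model="llama-3.1-8b-instant",  # assumed Groq model id
    messages=[
        {"role": "system", "content": "Answer only from the provided context."},
        {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {question}"},
    ],
)
print(completion.choices[0].message.content)

(Requires the sqlite-vec, fastembed, and groq packages plus a GROQ_API_KEY.)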

Join this channel to get access to the perks and support my work:

#rag #chatbot #llama #chatgpt #llm #artificialintelligence #python #langchain #llamaindex

Comments

Great tutorial as usual 😊. I have two queries:
1. Are you using sqlite-vec just for fast retrieval? Otherwise you could simply store query/response pairs in any database and, using cosine similarity on the customer query, fetch the most relevant response.
2. Why are you using an LLM here? Is it just to get a concise, short answer? LLMs might hallucinate in some cases, which I think can be more harmful.

VLM
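
On the first query, a brute-force version of what the comment describes looks roughly like the sketch below: keep question/response pairs anywhere, embed the stored questions, and rank them by cosine similarity against the customer query (the sample pairs and the embedding model are assumed for illustration). sqlite-vec essentially moves this same nearest-neighbour search into SQLite, so the vectors live next to the rest of the data and the scoring loop stays out of Python.

import numpy as np
from fastembed import TextEmbedding

# Stored question/response pairs (stand-in for rows in any database).
qa_pairs = [
    ("What are your opening hours?", "We are open Monday to Friday, 9am to 6pm."),
    ("How long does shipping take?", "Shipping takes 3 to 5 business days."),
]

embedder = TextEmbedding("BAAI/bge-small-en-v1.5")
question_vecs = np.array(list(embedder.embed([q for q, _ in qa_pairs])))

def best_response(customer_query: str) -> str:
    # Cosine similarity between the customer query and every stored question.
    q = np.array(list(embedder.embed([customer_query]))[0])
    sims = question_vecs @ q / (np.linalg.norm(question_vecs, axis=1) * np.linalg.norm(q))
    return qa_pairs[int(np.argmax(sims))][1]

print(best_response("When are you open?"))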

Hello, thanks for the great tutorial. I'm stuck at this section:

db = ...
sqlite_vec.load(db)

I'm doing everything as shown, but it raises "OperationalError: The specified module could not be found." on the sqlite_vec.load() call. Can you help me with that?

Onur-je
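
For reference, the usual loading sequence for the sqlite-vec Python package is sketched below; extension loading must be enabled on the connection before sqlite_vec.load() is called. The "specified module could not be found" error typically means the operating system could not load the compiled vec0 extension that ships with the package (an assumption about the commenter's environment), so confirming that sqlite-vec is installed for the same Python interpreter and platform is a reasonable first check.

import sqlite3
import sqlite_vec

db = sqlite3.connect(":memory:")
db.enable_load_extension(True)   # must be enabled before loading the extension
sqlite_vec.load(db)              # loads the compiled vec0 extension bundled with sqlite-vec
db.enable_load_extension(False)

# Quick sanity check that the extension is actually loaded.
(version,) = db.execute("SELECT vec_version()").fetchone()
print(f"sqlite-vec version: {version}")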

Can I export the chatbot so I can integrate it into an app?

surygarcia

Hi, how's it going? I want to integrate a chatbot into my website, not WhatsApp.
Is it possible to create a CMS to manage this chatbot?

pepeka