Spring AI - Chat with your documents using RAG with locally running LLM #springai #rag #vectordb
In this video, I explain how to use Spring AI to chat with your documents using RAG (Retrieval Augmented Generation).
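Below is a minimal sketch of the query side of such a RAG setup, assuming Spring AI's ChatClient and QuestionAnswerAdvisor APIs (exact class and method names vary between Spring AI milestone versions). The DocumentChatService class name is made up for illustration and is not necessarily what the video's code uses.

```java
import org.springframework.ai.chat.client.ChatClient;
import org.springframework.ai.chat.client.advisor.QuestionAnswerAdvisor;
import org.springframework.ai.vectorstore.VectorStore;

public class DocumentChatService {

    private final ChatClient chatClient;

    public DocumentChatService(ChatClient.Builder builder, VectorStore vectorStore) {
        // The advisor runs a similarity search against the vector store and
        // appends the retrieved document chunks to the prompt before it is
        // sent to the locally running Llama3 model served by Ollama.
        this.chatClient = builder
                .defaultAdvisors(new QuestionAnswerAdvisor(vectorStore))
                .build();
    }

    public String ask(String question) {
        // The model answers using the question plus the retrieved context.
        return chatClient.prompt()
                .user(question)
                .call()
                .content();
    }
}
```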
I am using Ollama to run Llama3 locally. If you want to know how to set up Ollama, check out this video:
GitHub link for the examples explained in the video:
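For the ingestion side, here is a hedged sketch of loading a document into the vector store, assuming Spring AI's TextReader, TokenTextSplitter, and VectorStore APIs; the IngestionSketch name and the use of a plain-text Resource are illustrative assumptions, not taken from the video.

```java
import org.springframework.ai.document.Document;
import org.springframework.ai.reader.TextReader;
import org.springframework.ai.transformer.splitter.TokenTextSplitter;
import org.springframework.ai.vectorstore.VectorStore;
import org.springframework.core.io.Resource;

import java.util.List;

public class IngestionSketch {

    public void load(Resource file, VectorStore vectorStore) {
        // Read the file into Spring AI Document objects.
        List<Document> documents = new TextReader(file).get();
        // Split the documents into token-sized chunks so each chunk
        // gets its own embedding.
        List<Document> chunks = new TokenTextSplitter().apply(documents);
        // Embed the chunks and write them to the vector database.
        vectorStore.add(chunks);
    }
}
```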
#springai #ollama #rag #llama3 #llm
00:00 Introduction
00:40 Vector DB
01:28 How do Vector DBs work?
02:50 RAG (Retrieval Augmented Generation)
03:22 How does RAG work?
04:40 Code example
15:30 Testing our code!
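As an illustration of the "How do Vector DBs work?" and "How does RAG work?" chapters, here is a hedged sketch of the retrieval step on its own, assuming Spring AI's VectorStore.similaritySearch(String) convenience method; RetrievalDemo is a made-up name for this example.

```java
import org.springframework.ai.document.Document;
import org.springframework.ai.vectorstore.VectorStore;

import java.util.List;

public class RetrievalDemo {

    private final VectorStore vectorStore;

    public RetrievalDemo(VectorStore vectorStore) {
        this.vectorStore = vectorStore;
    }

    public List<Document> retrieve(String question) {
        // The store embeds the question with the same embedding model used at
        // ingestion time and returns the stored chunks whose vectors are
        // closest to it (typically by cosine similarity). In RAG, these
        // chunks are added to the prompt that goes to the LLM.
        return vectorStore.similaritySearch(question);
    }
}
```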