Hands-on RAG Tutorial using LlamaIndex, Gemini, and Pinecone Vector DB

Let's talk about building a simple RAG app using LlamaIndex (v0.10+), Pinecone, and Google's Gemini Pro model. A step-by-step tutorial if you're just getting started! Rough code sketches of the main steps are included below the timeline.

--

Useful links:

--

Timeline:

00:00 Introduction
00:43 Basic definitions
02:18 How Retrieval Augmented Generation (RAG) works
03:55 Creating a Pinecone Index and getting an API Key
05:25 Getting a Google Gemini API Key
06:25 Creating a virtual environment
06:48 Installing LlamaIndex (and core packages)
07:41 Installing other dependencies
08:03 General application setup
10:42 Setting up environment variables
12:45 Validating configuration
14:11 Retrieving content from the Web
15:38 Explaining IngestionPipeline
16:49 Creating a LlamaIndex IngestionPipeline
17:16 Defining a Pinecone vector store
18:29 Running the IngestionPipeline (with Transformations)
19:37 Performing a similarity search
20:13 Creating a VectorStoreIndex
20:32 Creating a VectorIndexRetriever
21:04 Creating a RetrieverQueryEngine
22:05 Querying Google Gemini (Running the Pipeline)
22:47 Where to find the complete source code
23:15 Conclusion
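
--

For anyone following along, here is a minimal sketch of what the ingestion steps from the timeline (roughly 14:11-19:37) might look like with LlamaIndex v0.10+. It assumes a Pinecone index has already been created in the console, GOOGLE_API_KEY and PINECONE_API_KEY are set as environment variables, and the index name "rag-demo" and the URL are placeholders; the code shown in the video may differ:

# pip install llama-index llama-index-embeddings-gemini llama-index-vector-stores-pinecone llama-index-readers-web
import os

from pinecone import Pinecone
from llama_index.core.ingestion import IngestionPipeline
from llama_index.core.node_parser import SentenceSplitter
from llama_index.embeddings.gemini import GeminiEmbedding
from llama_index.readers.web import SimpleWebPageReader
from llama_index.vector_stores.pinecone import PineconeVectorStore

# Connect to an existing Pinecone index (its dimension must match the embedding
# model; models/embedding-001 produces 768-dimensional vectors).
pc = Pinecone(api_key=os.environ["PINECONE_API_KEY"])
vector_store = PineconeVectorStore(pinecone_index=pc.Index("rag-demo"))  # hypothetical index name

# Pull a web page to use as source content.
documents = SimpleWebPageReader(html_to_text=True).load_data(
    urls=["https://example.com/some-article"]  # placeholder URL
)

# Chunk, embed with Gemini, and write the vectors (plus metadata) straight into Pinecone.
pipeline = IngestionPipeline(
    transformations=[
        SentenceSplitter(chunk_size=512, chunk_overlap=20),
        GeminiEmbedding(),  # reads GOOGLE_API_KEY from the environment
    ],
    vector_store=vector_store,
)
pipeline.run(documents=documents)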
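
And a similar sketch of the query side (roughly 20:13-22:05): a VectorStoreIndex is rebuilt on top of the existing Pinecone vector store, wrapped in a VectorIndexRetriever, and the retrieved chunks are passed to Gemini Pro through a RetrieverQueryEngine. Again, "rag-demo" and the question string are placeholders:

import os

from pinecone import Pinecone
from llama_index.core import Settings, VectorStoreIndex
from llama_index.core.query_engine import RetrieverQueryEngine
from llama_index.core.retrievers import VectorIndexRetriever
from llama_index.embeddings.gemini import GeminiEmbedding
from llama_index.llms.gemini import Gemini
from llama_index.vector_stores.pinecone import PineconeVectorStore

# Use Gemini for both query embedding and answer generation (both read GOOGLE_API_KEY).
Settings.llm = Gemini()
Settings.embed_model = GeminiEmbedding()

# Point an index at the vectors already stored in Pinecone.
pc = Pinecone(api_key=os.environ["PINECONE_API_KEY"])
vector_store = PineconeVectorStore(pinecone_index=pc.Index("rag-demo"))  # hypothetical index name
index = VectorStoreIndex.from_vector_store(vector_store=vector_store)

# Retrieve the top-k most similar chunks, then let Gemini synthesize the final answer.
retriever = VectorIndexRetriever(index=index, similarity_top_k=5)
query_engine = RetrieverQueryEngine.from_args(retriever=retriever)

response = query_engine.query("What is the article about?")  # placeholder question
print(response)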

@LlamaIndex @pinecone-io @GoogleDevelopers @Google
Comments

Hi there, thanks for the video. Would it be possible to achieve the same thing using Gemini Nano?

mkduffi

Thanks for the video, bro.

Have a doubt, brother: you are using a pipeline that embeds each chunk and stores the embedding, along with its metadata, in Pinecone under a random ID.

But in my case the chunks are dynamic data that I need to update on a daily basis.

How do I store a chunk with a custom ID in Pinecone and modify the data over time? Also, this approach should not affect the similarity search; that's my main problem.

Can you help me with this?

cryptokingdom

Could you please tell me the name of the font and theme you are using in Visual Studio Code?

imharshvardhan