Configure Amazon Bedrock Knowledge Bases with Pinecone Vector Database

Amazon Bedrock offers a feature called Knowledge Bases, which lets you connect Large Language Models (LLMs) to additional documents, such as PDF, Markdown, HTML, and Microsoft Word and Excel files. This technique is known across the industry as Retrieval Augmented Generation (RAG). Once you connect a model to your document store in Amazon S3, you can query or prompt the model to answer questions about, or otherwise consume, the data stored inside those documents. For this feature to work, you must connect Amazon Bedrock to a supported vector database. At the moment, Bedrock supports five vector storage engines: Amazon OpenSearch Serverless, Amazon Aurora PostgreSQL, Pinecone, Redis Enterprise Cloud, and MongoDB Atlas. In this video, Trevor Sullivan (Solutions Architect, StratusGrid) walks through setting up Pinecone as the vector storage engine and connecting Amazon Bedrock Knowledge Bases to it.
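Once a Knowledge Base is connected to a vector store such as Pinecone, it can be queried from code. The sketch below, using boto3's `bedrock-agent-runtime` client and its `retrieve_and_generate` call, shows the shape of such a RAG query; the knowledge base ID and model ARN are placeholders you would replace with your own values.

```python
# Sketch: querying an Amazon Bedrock Knowledge Base with boto3.
# KNOWLEDGE_BASE_ID and MODEL_ARN are placeholders, not real resources.

KNOWLEDGE_BASE_ID = "KB12345678"  # placeholder: your knowledge base ID
MODEL_ARN = "arn:aws:bedrock:us-east-1::foundation-model/anthropic.claude-v2"


def build_rag_request(question: str) -> dict:
    """Build the retrieveAndGenerate request payload for a RAG query."""
    return {
        "input": {"text": question},
        "retrieveAndGenerateConfiguration": {
            "type": "KNOWLEDGE_BASE",
            "knowledgeBaseConfiguration": {
                "knowledgeBaseId": KNOWLEDGE_BASE_ID,
                "modelArn": MODEL_ARN,
            },
        },
    }


def ask(question: str) -> str:
    """Send the query. Requires AWS credentials and a provisioned knowledge base."""
    import boto3  # deferred so the module imports without the AWS SDK installed

    client = boto3.client("bedrock-agent-runtime")
    response = client.retrieve_and_generate(**build_rag_request(question))
    return response["output"]["text"]


if __name__ == "__main__":
    print(ask("What does the quarterly report say about revenue?"))
```

Bedrock retrieves the most relevant document chunks from the vector store and passes them to the model along with the question, so the answer is grounded in your own documents rather than the model's training data.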

Comments

Fantastic video. Great architecture, simple build and voiceover. Great job!

BrianGarback

So, in real time, we can't sync the data every time a new file gets uploaded. Wouldn't it be better to create a Lambda function that takes files from the S3 bucket and uploads them to Pinecone, using an S3 trigger to invoke that Lambda whenever a PDF file is uploaded?

dikshyakasaju
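A sketch of the commenter's idea, with one adjustment: when the data source is managed by Bedrock, the Lambda does not need to write to Pinecone directly; it can instead call `start_ingestion_job` on the `bedrock-agent` client (the API behind the console's "Sync" button), and Bedrock handles chunking, embedding, and upserting into Pinecone. `KNOWLEDGE_BASE_ID` and `DATA_SOURCE_ID` are placeholders.

```python
# Sketch: S3-triggered Lambda that re-syncs a Bedrock Knowledge Base data source.
# KNOWLEDGE_BASE_ID and DATA_SOURCE_ID are placeholders, not real resources.

KNOWLEDGE_BASE_ID = "KB12345678"  # placeholder
DATA_SOURCE_ID = "DS12345678"     # placeholder


def uploaded_keys(event: dict) -> list:
    """Extract the S3 object keys from an S3 put-event payload."""
    return [rec["s3"]["object"]["key"] for rec in event.get("Records", [])]


def lambda_handler(event, context):
    """Invoked by an S3 trigger whenever a new document lands in the bucket."""
    import boto3  # deferred so the module imports without the AWS SDK installed

    keys = uploaded_keys(event)
    if not keys:
        return {"started": False}
    client = boto3.client("bedrock-agent")
    job = client.start_ingestion_job(
        knowledgeBaseId=KNOWLEDGE_BASE_ID,
        dataSourceId=DATA_SOURCE_ID,
        description=f"Sync triggered by upload of {keys[0]}",
    )
    return {"started": True, "jobId": job["ingestionJob"]["ingestionJobId"]}
```

One caveat with this design: each ingestion job re-scans the data source, so for buckets with very frequent uploads it may be worth batching triggers (for example, via an SQS queue with a delay) rather than starting a job per file.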