Enhancing search on AWS with AI, RAG, and vector databases (L300) | AWS Events

As AI continues to transform industries, the applications of generative AI and large language models (LLMs) are becoming increasingly significant. This session explores how these models are used across sectors. Learn how to combine LLMs, embeddings, vector datastores, and their indexing techniques to build search solutions on AWS with Amazon Bedrock, Amazon Aurora, and LangChain, delivering better user experiences and outcomes. By the end of the session, participants will know how to harness LLMs and vector databases to develop innovative search solutions on AWS.
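
To make the described workflow concrete, here is a minimal, hypothetical sketch (not code from the session) of embedding documents with Amazon Bedrock, storing the vectors in Aurora PostgreSQL with pgvector, and running a similarity search through LangChain. The package names, model ID, connection string, collection name, and sample documents are assumptions for illustration only.

```python
# A minimal sketch of the Bedrock + Aurora (pgvector) + LangChain search flow
# described above. Assumes the langchain-aws, langchain-community, sqlalchemy,
# and psycopg2 packages are installed, AWS credentials are configured for
# Bedrock, and the Aurora PostgreSQL cluster already has the pgvector extension
# enabled. The connection string, collection name, and documents are placeholders.
from langchain_aws import BedrockEmbeddings
from langchain_community.vectorstores.pgvector import PGVector
from langchain_core.documents import Document

# Embed text with a Titan embeddings model exposed through Amazon Bedrock.
embeddings = BedrockEmbeddings(model_id="amazon.titan-embed-text-v1")

# Placeholder Aurora PostgreSQL connection string (hypothetical host/credentials).
CONNECTION_STRING = "postgresql+psycopg2://user:password@my-aurora-cluster:5432/postgres"

docs = [
    Document(page_content="Amazon Bedrock gives API access to foundation models."),
    Document(page_content="Aurora PostgreSQL supports pgvector for similarity search."),
]

# Embed each document and store the vectors in a pgvector-backed table in Aurora.
store = PGVector.from_documents(
    documents=docs,
    embedding=embeddings,
    collection_name="search_demo",
    connection_string=CONNECTION_STRING,
)

# Semantic search: the query is embedded with the same model and the closest
# stored vectors are returned by distance.
for doc in store.similarity_search("How do I run vector search on AWS?", k=2):
    print(doc.page_content)
```

In a full retrieval-augmented generation setup, the retrieved documents would then be passed as context to an LLM hosted on Bedrock (for example, via a Bedrock chat model in LangChain) to generate the final answer.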


ABOUT AWS
Amazon Web Services (AWS) hosts events, both online and in-person, bringing the cloud computing community together to connect, collaborate, and learn from AWS experts. AWS is the world’s most comprehensive and broadly adopted cloud platform, offering over 200 fully featured services from data centers globally. Millions of customers—including the fastest-growing startups, largest enterprises, and leading government agencies—are using AWS to lower costs, become more agile, and innovate faster.

#AWSEvents #GenerativeAI #AI #Cloud #AWSAIandDataConference
Comments

Great presentation, it covered so many concepts seamlessly.

Playbox

The thick accent makes it painful to listen with concentration.

kathiemuhler