Back to Basics: Understanding Retrieval Augmented Generation (RAG)

As interest in Large Language Models (LLMs) grows, many developers and organizations are hard at work building applications that make use of their potential. However, when a pre-trained LLM does not perform as expected out of the box, the question becomes how to improve the application's results. At that point, model fine-tuning or retrieval-augmented generation (RAG) is needed to improve the outcomes. In this episode, join Nitin as he walks through what RAG is and best practices for implementing it with Amazon Bedrock foundation models and other AWS services.
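
As a rough illustration of the flow covered in the episode, the sketch below retrieves passages related to a question from internal documents, augments the original query with that context, and sends the augmented prompt to a foundation model on Amazon Bedrock. It is a minimal example under stated assumptions, not the implementation shown in the video: the toy DOCUMENTS list and the keyword-overlap retriever are hypothetical stand-ins for a real retrieval backend (for example, a Bedrock Knowledge Base or a vector store), and the model ID and request body assume an Anthropic Claude 3 model enabled in your account.

```python
import json
import boto3

# Toy in-memory "internal documents"; in practice this would be a Bedrock
# Knowledge Base, an OpenSearch index, or another vector store.
DOCUMENTS = [
    "Our refund policy allows returns within 30 days of purchase.",
    "Support hours are 9am to 5pm Eastern, Monday through Friday.",
    "Enterprise customers get a dedicated technical account manager.",
]


def retrieve_relevant_chunks(query: str, top_k: int = 2) -> list[str]:
    """Naive keyword-overlap retrieval, standing in for a real retriever."""
    query_terms = set(query.lower().split())
    scored = sorted(
        DOCUMENTS,
        key=lambda doc: len(query_terms & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:top_k]


def answer_with_rag(question: str) -> str:
    # 1. Retrieve: fetch passages related to the question from internal sources.
    context = "\n---\n".join(retrieve_relevant_chunks(question))

    # 2. Augment: combine the retrieved context with the original question.
    prompt = (
        "Answer the question using only the context below.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )

    # 3. Generate: send the augmented prompt to a foundation model on Bedrock.
    #    The model ID and request body assume an Anthropic Claude 3 model.
    bedrock = boto3.client("bedrock-runtime")
    response = bedrock.invoke_model(
        modelId="anthropic.claude-3-sonnet-20240229-v1:0",
        body=json.dumps({
            "anthropic_version": "bedrock-2023-05-31",
            "max_tokens": 512,
            "messages": [{"role": "user", "content": prompt}],
        }),
    )
    return json.loads(response["body"].read())["content"][0]["text"]


print(answer_with_rag("What is the refund policy?"))
```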

Additional Resources:

Check out more resources for architecting in the #AWS cloud:

#AWS #AmazonWebServices #CloudComputing #BackToBasics #GenerativeAI #AmazonBedrock
Comments

Great overview of RAG and details of implementation. Thank you.

sanjiv

How do you protect a company's information with this technology?

JavierTorres-stgt

A fundamental question arises when creating the augmented query. It is mentioned that relevant information is fetched from the internal source documents based on the original query.
If we have already retrieved the relevant information from those sources, why do we pass the augmented query to the model again instead of returning that information directly?

arpitbhasin

Why is transactional data not a good fit for RAG?

suchitgupta

The animations are far too busy, and they lack sufficient highlighting.

MrMilesfinn