RAG Evaluation (Answer Hallucinations) | LangSmith Evaluations - Part 13

With the rapid pace of AI, developers often face a paradox of choice: how to pick the right prompt, and how to trade off LLM quality against cost. Evaluations can accelerate development by providing a structured process for making these decisions. But we've heard that it is challenging to get started, so we are launching a series of short videos explaining how to perform evaluations using LangSmith.

This video focuses on RAG (Retrieval-Augmented Generation). We show you how to check that your outputs are grounded in the documents retrieved by your RAG pipeline. You can use LangSmith to create a set of test cases, run an evaluation against the retrieved documents, and dive into output traces, helping you ensure your responses are free of hallucinations.
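The grounding check described above can be illustrated with a toy sketch. This is not LangSmith's actual evaluator (which would typically use an LLM-as-judge over the trace); it is a hypothetical stand-in that scores how much of an answer's vocabulary appears in the retrieved documents, just to make the "grounded in retrieved documents" idea concrete.

```python
# Toy groundedness score: fraction of answer tokens that appear anywhere
# in the retrieved documents. A real hallucination evaluator (e.g. in
# LangSmith) would use an LLM-as-judge rather than token overlap.

def groundedness_score(answer: str, retrieved_docs: list[str]) -> float:
    """Return the fraction of answer tokens found in the retrieved docs."""
    context_vocab = set(" ".join(retrieved_docs).lower().split())
    answer_tokens = answer.lower().split()
    if not answer_tokens:
        return 0.0
    grounded = sum(1 for tok in answer_tokens if tok in context_vocab)
    return grounded / len(answer_tokens)

# Hypothetical example documents and answers for illustration only.
docs = ["langsmith lets you create datasets and run evaluators over traces"]
print(groundedness_score("LangSmith lets you run evaluators", docs))  # 1.0
print(groundedness_score("The moon is made of cheese", docs))         # 0.0
```

A score near 0 flags an answer whose content cannot be traced back to the retrieved context, which is the signal a hallucination evaluator surfaces in the traces.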

Documentation:
Comments

Hi, dear friend.
Thank you for your efforts.
How can I apply this tutorial to PDFs in another language (for example, Persian)?
I have tried many approaches and tested different models, but the results when asking questions about PDFs are not good or accurate!

mohsenghafari