Authorizing LLM Responses by Filtering Vector Embeddings – Greg Sarjeant (Sponsored by Oso)

We just sponsored an O’Reilly SuperStream webinar: Retrieval-Augmented Generation (RAG) in Production.
Large Language Models (LLMs) open up a new way to interact with your data. Your users can now use a natural language interface, rather than traditional search, to retrieve relevant information. But the flexibility and scale of LLMs make it harder to ensure that you don't leak sensitive data. We explored these challenges and demonstrated how to use Retrieval-Augmented Generation to build an authorized LLM chatbot that protects your data.
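To make the approach concrete, here is a minimal sketch of authorization-aware retrieval: filter the chunks returned from the vector store by the user's permissions before they ever reach the LLM, rather than trying to scrub the generated response afterward. All names here (Chunk, is_authorized, retrieve, the per-chunk allowed_roles metadata) are hypothetical illustrations, not code from the webinar; in practice the permission check could be delegated to an authorization service such as Oso.

```python
# Sketch: authorization-aware RAG retrieval (hypothetical names throughout).
# Core idea: only chunks the user is permitted to see are eligible to become
# context for the LLM prompt.

from dataclasses import dataclass

@dataclass
class Chunk:
    text: str
    embedding: list[float]
    allowed_roles: set[str]  # hypothetical per-chunk ACL metadata

def is_authorized(user_roles: set[str], chunk: Chunk) -> bool:
    # Stand-in for a real authorization check (e.g. one delegated to Oso).
    return bool(user_roles & chunk.allowed_roles)

def retrieve(query_embedding: list[float], store: list[Chunk],
             user_roles: set[str], k: int = 3) -> list[Chunk]:
    # Score by similarity (a plain dot product here for brevity),
    # but rank only the chunks the user is allowed to see.
    def score(c: Chunk) -> float:
        return sum(q * e for q, e in zip(query_embedding, c.embedding))
    permitted = [c for c in store if is_authorized(user_roles, c)]
    return sorted(permitted, key=score, reverse=True)[:k]

if __name__ == "__main__":
    store = [
        Chunk("Q3 revenue figures", [0.9, 0.1], {"finance"}),
        Chunk("Public product FAQ", [0.8, 0.2], {"finance", "support", "public"}),
    ]
    # A support agent only receives the chunks their roles permit.
    context = retrieve([1.0, 0.0], store, user_roles={"support"})
    print([c.text for c in context])  # -> ['Public product FAQ']
```

Filtering at retrieval time (rather than filtering the model's output) is the design choice the talk walks through: once sensitive text is in the prompt, you can no longer guarantee it won't surface in the response.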
00:00 Introduction
01:08 Stay out of the news
02:10 What is authorization
03:08 Authorization in LLMs
06:16 What changes with RAG
15:03 How to keep from oversharing
18:09 Can we instead filter the responses
24:12 Externalize authorization
28:41 Authorization Academy