Access control for RAG and LLMs

Today Cerbos introduces an access control use case for Retrieval Augmented Generation (RAG) and Large Language Models (LLMs), providing a timely solution for software builders seeking secure, practical ways to put guardrails around their AI applications. The functionality is available natively as part of Cerbos PDP and Cerbos Hub.

Loading corporate data into a vector store and using it alongside an LLM effectively gives anyone interacting with the AI agents root access to the entire dataset. That creates a risk of privacy violations, compliance issues, and unauthorized access to sensitive data.

Here is how the issue can be solved with permission-aware data filtering:
1️⃣ When a user asks an AI chatbot a question, our tool, Cerbos, enforces existing permission policies to ensure the user is allowed to invoke the agent.
2️⃣ Before any data is retrieved, Cerbos creates a query plan defining the conditions that must be applied when fetching data, so that only records the user can access — based on their role, department, region, or other attributes — are returned.
3️⃣ Then Cerbos provides an authorization filter to limit the information fetched from your vector database or other data stores.
4️⃣ The permitted information is then used by the LLM to generate a response that is relevant and fully compliant with the user's permissions.
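The four steps above can be sketched in a few lines of Python. This is a minimal, self-contained illustration, not the Cerbos SDK: the `Principal` class, the document store, and all helper functions (`can_invoke_agent`, `plan_query`, `fetch_filtered`, `answer`) are hypothetical stand-ins. In a real deployment, the permission check and query plan would come from the Cerbos PDP, and the filter would be applied to your vector database query.

```python
from dataclasses import dataclass

@dataclass
class Principal:
    """Hypothetical stand-in for the authenticated user and their attributes."""
    id: str
    role: str
    department: str

# Mock record store: each record carries the attributes the
# authorization filter is matched against.
DOCUMENTS = [
    {"id": "doc-1", "department": "finance", "text": "Q3 revenue summary"},
    {"id": "doc-2", "department": "hr",      "text": "Hiring plan"},
    {"id": "doc-3", "department": "finance", "text": "Budget forecast"},
]

def can_invoke_agent(principal: Principal) -> bool:
    """Step 1: enforce a policy on who may invoke the agent at all."""
    return principal.role in {"employee", "manager"}

def plan_query(principal: Principal) -> dict:
    """Step 2: derive the conditions a fetch must satisfy.
    A real PDP returns a structured query plan; here it is modeled
    as a simple attribute-equality condition."""
    return {"department": principal.department}

def fetch_filtered(conditions: dict) -> list[dict]:
    """Step 3: apply the authorization filter while retrieving records."""
    return [d for d in DOCUMENTS
            if all(d.get(k) == v for k, v in conditions.items())]

def answer(principal: Principal, question: str) -> list[str]:
    """Step 4: only permitted records ever reach the LLM prompt."""
    if not can_invoke_agent(principal):
        raise PermissionError("user may not invoke the agent")
    allowed = fetch_filtered(plan_query(principal))
    # In a real system these snippets would be injected into the prompt
    # as retrieved context; here we just return them.
    return [d["text"] for d in allowed]

alice = Principal(id="alice", role="employee", department="finance")
print(answer(alice, "What is our Q3 outlook?"))
# → ['Q3 revenue summary', 'Budget forecast']
```

The key design point is that filtering happens at retrieval time (step 3), not after generation: documents the user cannot access never enter the prompt, so the LLM cannot leak them.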

#Cerbos #Authorization #AccesscontrolRAG #RAG #LLM