Safe RAG for LLMs
Large Language Models (LLMs) are pretty smart, but they don’t know everything. For example, an LLM might know why the sky is blue, but it probably doesn’t know more specific things, like which flight the user has booked. Many AI applications use Retrieval-Augmented Generation (RAG) to feed that sort of user-specific data to LLMs, so they can provide better answers.
However, malicious users can craft specially engineered prompts to trick an LLM into revealing more data than intended. This becomes especially dangerous when the LLM has access to databases through RAG. In this video, Wenxin Du shows Martin Omander how to make RAG safer and reduce the risk of an LLM leaking sensitive data that it gathered via RAG.
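One common way to reduce this risk is to enforce access control at retrieval time, so documents a user is not authorized to see never reach the LLM's prompt in the first place. The sketch below illustrates that idea; the in-memory document store, the `owner` field, and the `build_prompt` helper are illustrative assumptions, not the exact approach shown in the video.

```python
# Minimal sketch of access-controlled RAG: filter documents by the
# requesting user's identity *before* retrieval, so no prompt-injection
# trick can make the LLM reveal data the user was never allowed to see.

DOCUMENTS = [
    {"owner": "alice", "text": "Alice's flight AA123 departs 9:00 on May 2."},
    {"owner": "bob",   "text": "Bob's flight BA456 departs 14:30 on May 3."},
]

def retrieve(user_id: str, query: str) -> list[str]:
    """Return only documents this user is authorized to see that match the query."""
    allowed = [d for d in DOCUMENTS if d["owner"] == user_id]  # access control first
    terms = query.lower().split()
    return [d["text"] for d in allowed
            if any(t in d["text"].lower() for t in terms)]

def build_prompt(user_id: str, question: str) -> str:
    """Assemble the LLM prompt from the question plus the authorized context."""
    context = "\n".join(retrieve(user_id, question))
    return f"Context:\n{context}\n\nQuestion: {question}"

print(build_prompt("alice", "Which flight did I book?"))
```

The key design choice is that the authorization check happens in the retrieval layer, outside the LLM, so it cannot be bypassed by clever prompting.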
Chapters:
0:00 - Intro
1:15 - RAG
1:57 - Making RAG safer
3:11 - Architecture review
4:47 - Questions & Answers
5:47 - How to get started
6:09 - Wrap up
#ServerlessExpeditions #CloudRun
Speaker: Wenxin Du, Martin Omander
Products Mentioned: Cloud - Containers - Cloud Run, Generative AI - General