Prompt Injection in LLM Agents (ReAct, Langchain)
In this video I’ll cover an article on prompt injection attacks against LLM-powered agents. The article, titled “Synthetic Recollections”, was published on the WithSecure Labs research blog; you can check it out at the link below.
🖹 Download the mindmap for this episode here:
🕤 Timestamps:
00:00 - Introduction
00:16 - Prompt Injection Demo
01:32 - Table of Contents
02:09 - Language Models
03:04 - Injection Attacks (SQL, Prompt)
05:45 - Emergent Abilities (Chain of Thought Reasoning, Reason+Act)
07:12 - The ReAct Loop (Agent, Executor, Tools)
09:10 - ReAct Agent in Action
13:29 - Thought/Action/Observation Injection in ReAct Agents
16:08 - Building Secure LLM Agents (OWASP Top Ten for LLMs)
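The ReAct loop covered at 07:12, and the observation injection point discussed at 13:29, can be sketched in a few lines of Python. This is a minimal illustration, not the video's or LangChain's actual implementation: `call_llm` is a hypothetical stand-in for a real language model, and the single `calculator` tool is a toy.

```python
import re

def call_llm(prompt: str) -> str:
    """Hypothetical LLM stand-in: emits a Thought/Action step, then a Final Answer."""
    if "Observation: 4" in prompt:
        return "Thought: I now know the result.\nFinal Answer: 4"
    return "Thought: I should use the calculator.\nAction: calculator[2 + 2]"

# Toy tool registry; eval() is used only to keep the demo short (it is unsafe).
TOOLS = {"calculator": lambda expr: str(eval(expr))}

def react_loop(question: str, max_steps: int = 5) -> str:
    prompt = f"Question: {question}"
    for _ in range(max_steps):
        step = call_llm(prompt)          # Thought + Action (or Final Answer)
        prompt += "\n" + step
        if "Final Answer:" in step:
            return step.split("Final Answer:", 1)[1].strip()
        match = re.search(r"Action: (\w+)\[(.+)\]", step)
        if match:
            tool, arg = match.groups()
            observation = TOOLS[tool](arg)
            # The observation is untrusted text fed straight back into the
            # prompt: if a tool returns attacker-controlled content, it can
            # inject fake Thought/Action/Observation lines -- the attack
            # class the video demonstrates.
            prompt += f"\nObservation: {observation}"
    return "No answer"

print(react_loop("What is 2 + 2?"))  # -> 4
```

The key point for security is the last step: tool output is concatenated into the agent's scratchpad with the same authority as the model's own reasoning, which is why thought/action/observation injection works.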
📚 References & Acknowledgements:
Prompt Injection in LLM Browser Agents
Prompt Injections in the Wild - Exploiting Vulnerabilities in LLM Agents | HITCON CMT 2023
Prompt Injection / JailBreaking a Banking LLM Agent (GPT-4, Langchain)
5 LLM Security Threats- The Future of Hacking?
Prompt Injections - An Introduction
LLM Hacking: How Indirect Prompt Injection Works
What is Prompt Tuning?
Getting Started With Threat Modeling & Pentesting on GenAI Apps
Prompt Engineering And LLM's With LangChain In One Shot-Generative AI
Prompt Engineering is Dead; Build LLM Applications with DSPy Framework
Huggingface Agents: Multimodal Transformers Agents Are Here & Its Open Source
Prompt Injection: When Hackers Befriend Your AI - Vetle Hjelle - NDC Security 2024
Indirect Prompt Injection | How Hackers Hijack AI
[1hr Talk] Intro to Large Language Models
LLM01: Prompt Injection | LLM06: Sensitive Information Disclosure | AI Security
LLM Chronicles #6.4: LLM Agents with ReAct (Reason + Act)
LangChain Prompt Injection Webinar
Real-world exploits and mitigations in LLM applications (37c3)
👩🚒 LangChain Prompt INJECTION/HACKING?! LangChain Constitutional AI - Code Easy in 7 Minutes!...
Indirect Prompt Injections and Threat Modeling of LLM Applications | The MLSecOps Podcast
Rebuff | Open Source Framework To Detect and Protect Prompt Injection | LLM
Current state-of-the-art on LLM Prompt Injections and Jailbreaks
Securing Your AI Chatbots: Prompt Injection & Prevention Techniques