Indirect Prompt Injection

preview_player
Показать описание
👩‍🎓👨‍🎓 Learn about Large Language Model (LLM) attacks! This lab is vulnerable to indirect prompt injection. The user carlos frequently uses the live chat to ask about the Lightweight "l33t" Leather Jacket product. To solve the lab, we must delete the user carlos.

Overview:
0:00 Intro
0:20 Insecure output handling
0:52 Indirect prompt injection
2:20 Lab: Indirect prompt injection
3:05 Explore site functionality
3:42 Probe LLM chatbot
4:29 Launch attacks via review feature
11:00 Conclusion
Рекомендации по теме
Комментарии
Автор

The payload you put in actually worked because the actual sequence required to escape is `}]}`. You just accidentally changed the sequence from `}]}` to `]}}` at 7:37. That's the reason why `]}}` didn't work but your final payload `}]}}` used to escape worked in this case. Because the first three chars match up which are enough to escape in this case

HemanthJavvaji-gg
Автор

man this lab is nonsense, thanks for the tutorial bro.

JaquavyWavyDikchole
Автор

I am bowing my head in front of your cyber security knowledge.

Lots of love from India 🇮🇳

sharmaskeleton