filmov
tv
Hacking a GPT is SHOCKINGLY easy – Learn how to reverse engineer GPTs through prompt injection

Показать описание
Let's put our hacker hats on and manipulate the inner workings of these OpenAI's custom GPTs to leak their instructions and knowledge-base files! Even well-protected GPTs are no match for a good prompt injection attack!
Whether you're a tech enthusiast or an AI aficionado, this video offers a rare glimpse into the capabilities and potential vulnerabilities of GPTs. Don't miss out on these insights and tricks – and learn how to protect yourself from these kinds of attacks!
Links:
00:37 - Quick overview of Custom GPTs
01:00 - Revealing GPT Instructions Verbatim
01:27 - Leaking small text files
03:12 - Leaking PDFs and other files
04:12 - Cracking protected GPTs
06:40 - Defintion: Direct prompt injection
07:09 - Definition: Indirect prompt injection
07:50 - Jailbreaking
08:26 - Virtualization
09:12 - Multi-Prompt
09:30 - Context Length
10:05 - Multi-Language
10:31 - Role-playing
10:38 - Token-Smuggling
11:15 - Code injection
11:46 - Protecting yourself
Whether you're a tech enthusiast or an AI aficionado, this video offers a rare glimpse into the capabilities and potential vulnerabilities of GPTs. Don't miss out on these insights and tricks – and learn how to protect yourself from these kinds of attacks!
Links:
00:37 - Quick overview of Custom GPTs
01:00 - Revealing GPT Instructions Verbatim
01:27 - Leaking small text files
03:12 - Leaking PDFs and other files
04:12 - Cracking protected GPTs
06:40 - Defintion: Direct prompt injection
07:09 - Definition: Indirect prompt injection
07:50 - Jailbreaking
08:26 - Virtualization
09:12 - Multi-Prompt
09:30 - Context Length
10:05 - Multi-Language
10:31 - Role-playing
10:38 - Token-Smuggling
11:15 - Code injection
11:46 - Protecting yourself
Комментарии