New Important Instructions: Real-world exploits and mitigations in LLM Apps

preview_player
Показать описание
2024, January 11th, 11.00 ET / 17.00 CET

With the wide-spread rollout of Chatbots and LLM applications users are facing increased risks of scams, data exfiltration, loss of PII, and even remote code execution when processing untrusted data with LLM apps. This presentation will cover many demonstrations of real-world exploits in prominent LLM apps, such as automatic plugin/tool invocation and data exfiltration in ChatGPT, data exfiltration in Bing Chat, Anthropic Claude and Google Bard. The talk also highlights approaches vendors have taken to fix vulnerabilities. Finally, we also take a look how it is possible to use ChatGPT Builder to create a malicious GPT, while seemingly benign, is tricking users and stealing data.

Рекомендации по теме