Hypnotized AI and Large Language Model Security

preview_player
Показать описание

Large language models (LLMs) are awesome, but pose a potential cyber threat due to their capacity to generate false responses and follow hidden commands. In a two-part discussion with Chenta Lee from the IBM Security team, it first delves into prompt injection, where a malicious actor can manipulate LLMs into creating false realities and potentially accessing unauthorized data. In the second part, Chenta provides more details and explains how to address these potential threats.

#ai #llm #cybersecurity
Рекомендации по теме
Комментарии
Автор

Really amazing topic, thank you very much.

ozio.
Автор

It is surprising to me, how the one on the right pronounces "game".

itdataandprocessanalysis