[Webinar] Building LLM applications in a secure way (WithSecure™)

This is the recording of a webinar we did at WithSecure about the risks of creating LLM applications that act as autonomous agents, and what can be done to mitigate these risks.

Timestamps:
00:10 - Where did LLMs come from?
05:36 - Building LLM applications
06:52 - LLM agents
08:10 - Misconceptions about AI safety
09:58 - Risks of LLM use-cases
11:30 - Prompt injection (demo)
18:46 - LLM agents
24:00 - Prompt Injection Demo in Browser Agent (Taxi AI)
30:06 - Root cause of LLM alignment issues
34:45 - Comparison with traditional injection attacks
37:14 - Controls and defences against prompt injection
46:45 - Take-away points
49:32 - Questions
Comments

Great talk, thank you. I use LLMs every day, but I have never given them autonomy to take control of my systems or data (I promote semi-automation with a human in the loop). Human+AI will be harder to hack than AI alone or a human alone.

micbab-vgmu
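
The human-in-the-loop, semi-automation idea from the comment above can be sketched as a simple approval gate that sits between the LLM agent's proposed action and its execution. This is a minimal illustration, not anything shown in the webinar; the function names and the yes/no prompt format are hypothetical.

```python
# Hypothetical sketch: a human operator must approve each action an
# LLM agent proposes before it runs. Injecting `ask` makes it testable.

def require_approval(action: str, args: dict, ask=input) -> bool:
    """Ask a human operator whether the agent may perform an action."""
    answer = ask(f"Agent wants to run {action}({args}). Allow? [y/N] ")
    return answer.strip().lower() == "y"

def run_agent_step(action: str, args: dict, execute, ask=input):
    """Execute the action only if the operator approves; otherwise refuse."""
    if require_approval(action, args, ask):
        return execute(**args)
    return None  # action denied by operator
```

Defaulting to "deny" unless the operator explicitly types "y" keeps prompt-injected actions from slipping through on an accidental keypress.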

For the time being, that's absolutely the way to go. I worry about some of the autonomous agents that are becoming all the rage in the media. I love them from a technical point of view, and the engineers working at those startups are creative heroes, but there are still fundamental issues to solve in the underlying tech.

donatocapitella