Security of LLM APIs

A talk given by Ankita Gupta from Akto at the 2024 Austin API Summit in Austin, Texas.

In this session, we discussed the security of LLM APIs, addressing key vulnerabilities and attack vectors. The goal is to educate developers, API designers, architects, and organizations about the potential security risks of deploying and managing LLM APIs.

1. Overview of Large Language Models (LLMs) APIs
2. Understanding LLM Vulnerabilities:
– Prompt Injections
– Sensitive Data Leakage
– Inadequate Sandboxing
– Insecure Plugin Design
– Model Denial of Service
– Unauthorized Code Execution
– Input Attacks
– Poisoning Attacks
3. Best practices to secure LLM APIs from data breaches

I explain all of the above using real-life examples.
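As one illustration of the best practices in point 3, a minimal input filter can screen requests before they reach the model. This is a hypothetical sketch, not the speaker's implementation: the pattern list, function names, and size cap are illustrative assumptions, and real deployments would layer this with output validation, sandboxing, and least-privilege plugin permissions.

```python
import re

# Illustrative (NOT exhaustive) prompt-injection patterns -- a hypothetical
# example list, assumed for this sketch rather than taken from the talk.
INJECTION_PATTERNS = [
    r"ignore\s+(all\s+)?(previous|prior|any)\s+(instructions|prompts)",
    r"you are now",
    r"system prompt",
]

def is_suspicious(user_input: str) -> bool:
    """Return True if the input matches a known prompt-injection pattern."""
    lowered = user_input.lower()
    return any(re.search(p, lowered) for p in INJECTION_PATTERNS)

def handle_request(user_input: str, max_len: int = 2000) -> str:
    """Validate input before forwarding it to the LLM API (call not shown)."""
    if len(user_input) > max_len:   # size cap mitigates model denial of service
        return "rejected: input too long"
    if is_suspicious(user_input):   # naive prompt-injection screen
        return "rejected: possible prompt injection"
    return "accepted"               # safe to forward to the model
```

Pattern matching alone is easy to evade (e.g. via paraphrasing or encoding tricks), which is why the talk treats it as one layer among several rather than a complete defense.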