Red Teaming of LLM Applications: Going from Prototype to Production

preview_player
Показать описание
LLM applications are notoriously difficult to put into production, in large part due to the fact that there are so many potential vulnerabilities in their performance. From hallucinations to discriminatory behavior and prompt injection attacks, there are many ways LLM-based systems can fail to go beyond small-scale prototypes. In this breakout session, we'll focus on exploring techniques to detect and identify vulnerabilities in LLM applications; the aim is to transform the journey of LLM deployment into a secure and confident stride towards innovation. The session will introduce the concepts of LLM app vulnerabilities and the red-teaming process. We’ll then do a deep-dive on automated detection techniques and benchmarking methods. Attendees will leave the session with a better understanding of automated safety and security assessments tailored to GenAI systems.

Talk By: Corey Abshire, Senior AI Specialist Solutions Architect, Databricks

Here's more to explore:

Рекомендации по теме