AI with Assurance: Combining Guardrails and LLM Evaluations

preview_player
Показать описание
Protecting your LLM application from dangerous or malicious inputs and outputs is critical in both development and production. Join us as we walk through how you can use Guardrails and Arize to secure your app, while measuring the frequency and severity of these attacks and errors.

We’ll cover the different types of guards that can be used and evaluated, as well as how these guards can be augmented through few-shot prompting leveraging the actual attacks you’re seeing in production.
Рекомендации по теме