Building Better AI: Improving Safety and Reliability of LLM Applications

Stop us if this sounds familiar: you got your LLM application working quickly, but you have no visibility into what’s happening behind opaque API calls. Or you’re evaluating and iterating, but can’t tell what effect any given change actually has. Or your app is in production with no proactive protection against jailbreaks, PII leaks, or degraded performance.

These are important safety-related concerns to address, and when it comes to customer-facing LLM applications, safety is essential.

LLM applications regularly land companies in the news for recommending unsafe or even illegal practices. How do you address this so your company doesn’t become the next embarrassing headline, and users actually enjoy using your app? You need observability, and Eric Xiao breaks it down into easy steps.