Coordinated Disclosure for ML: What's Different and What's the Same

Presenter: Sven Cattell, AI Village

Red teaming is a validation step, not an accountability mechanism. A coordinated disclosure process for machine learning systems is needed to ensure safe, secure, and effective ML models. This session covers the history of disclosures for machine learning systems, how they work, and what distinguishes them from the coordinated vulnerability disclosure process used for traditional software.
