Coordinated Disclosure for ML: What's Different and What's the Same
Presenter: Sven Cattell, AI Village
Red teaming is a validation step, not an accountability mechanism. A coordinated disclosure process for machine learning systems is needed to ensure safe, secure, and effective ML models. This session covers the history of disclosures for machine learning systems, how they work, and what distinguishes them from the coordinated vulnerability disclosure process used for traditional software.