How to Test Models for Fairness with Fairlearn Deep-Dive

Join us to learn about our open source machine learning fairness toolkit, Fairlearn, which empowers developers of artificial intelligence systems to assess their systems' fairness and mitigate any observed fairness issues. Fairlearn focuses on negative impacts for groups of people, such as those defined in terms of race, gender, age, or disability status.

There are two components to Fairlearn: the first is an assessment dashboard, with both high-level and detailed views, for assessing which groups are negatively impacted. The second is a set of strategies for mitigating fairness issues. These strategies are easy to incorporate into existing machine learning pipelines. Together, these components empower data scientists and business leaders to navigate any trade-offs between fairness and performance, and to select the mitigation strategy that best fits their needs.

Learn More:

The AI Show's Favorite links:
Comments

Great tutorial, but please update the dashboard; it was a great tool.

inshallai

Very helpful to understand how it works.

wolfgangeggert