Simpler Machine Learning Models for a Complex World

While the trend in machine learning has tended towards building more complicated (black box) models, such models have not shown any performance advantages for many real-world datasets, and they are more difficult to troubleshoot and use. For these datasets, simpler models (sometimes small enough to fit on an index card) can be just as accurate. However, the design of interpretable models for practical applications is quite challenging for at least two reasons: 1) Many people do not believe that simple models could possibly be as accurate as complex black box models. Thus, even persuading someone to try interpretable machine learning can be a challenge. 2) Transparent models have transparent flaws. In other words, when a simple and accurate model is found, it may not align with domain expertise and may need to be altered, leading to an “interaction bottleneck” where domain experts must interact with machine learning algorithms.
In this talk, Prof. Rudin will present a new paradigm for machine learning that gives us insight into the existence of simpler models for a large class of real-world problems and solves the interaction bottleneck. In this paradigm, machine learning algorithms are not focused on finding a single optimal model, but instead capture the full collection of good (i.e., low-loss) models, which we call “the Rashomon set.” Finding Rashomon sets is extremely computationally difficult, but the benefits are massive. Prof. Rudin will present TreeFARMS, the first algorithm for finding Rashomon sets for a nontrivial function class (sparse decision trees). TreeFARMS, along with its user interface TimberTrek, mitigates the interaction bottleneck for users. TreeFARMS also allows users to incorporate constraints (such as fairness constraints) easily.
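The Rashomon set described above has a simple formal definition: the set of all models whose loss is within some tolerance ε of the best achievable loss. A minimal sketch of that idea in Python, using a hypothetical toy family of one-threshold "stump" classifiers (an illustration of the definition, not the TreeFARMS algorithm itself):

```python
# Toy illustration of a Rashomon set: all models within epsilon of the best loss.
# Model family (hypothetical, for illustration): stumps "predict 1 if x > threshold".
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=200)
# Noisy binary labels: the outcome is only loosely determined by X.
y = (X + rng.normal(scale=1.5, size=200) > 0).astype(int)

def loss(threshold):
    """0-1 loss of the stump 'predict 1 if x > threshold'."""
    return np.mean((X > threshold).astype(int) != y)

# Enumerate the (finite, discretized) model family and its losses.
thresholds = np.linspace(-2, 2, 401)
losses = np.array([loss(t) for t in thresholds])

# The Rashomon set: every model whose loss is within epsilon of the minimum.
epsilon = 0.02
best = losses.min()
rashomon_set = thresholds[losses <= best + epsilon]

print(f"best loss: {best:.3f}")
print(f"{len(rashomon_set)} of {len(thresholds)} stumps are in the Rashomon set")
```

With noisier labels, many thresholds tie near the best achievable loss, so the set grows; this is the pattern the talk connects to outcome uncertainty, and among the many near-optimal models one can then search for a simple or constraint-satisfying one.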
Prof. Rudin will also present a “path,” that is, a mathematical explanation, for the existence of simpler yet accurate models and the circumstances under which they arise. In particular, problems where the outcome is uncertain tend to admit large Rashomon sets and simpler models. Hence, the Rashomon set can shed light on the existence of simpler models for many real-world high-stakes decisions. This conclusion has significant policy implications, as it undermines the main reason for using black box models for decisions that deeply affect people’s lives.
Prof. Rudin will conclude the talk by providing an overview of applications of interpretable machine learning within her lab, including applications to neurology, materials science, mammography, visualization of genetic data, the study of how cannabis affects the immune system of HIV patients, heart monitoring with wearable devices, and music generation.
This is joint work with Prof. Rudin’s colleagues Margo Seltzer and Ron Parr, as well as their exceptional students Chudi Zhong, Lesia Semenova, Jiachang Liu, Rui Xin, Zhi Chen, and Harry Chen. It builds upon the work of many past students and collaborators over the last decade.
Speakers:
Cynthia Rudin
Professor and Director of the Interpretable Machine Learning Lab
Duke University
Moderators:
Matthias Groeschel
Resident physician
Charité - Berlin University of Medicine
The AI for Good Global Summit is the leading action-oriented United Nations platform promoting AI to advance health, climate, gender, inclusive prosperity, sustainable infrastructure, and other global development priorities. AI for Good is organized by the International Telecommunication Union (ITU) – the UN specialized agency for information and communication technology – in partnership with 40 UN sister agencies and co-convened with the government of Switzerland.
Join the Neural Network!
The AI for Good networking community platform powered by AI.
Designed to help users build connections with innovators and experts, link innovative ideas with social impact opportunities, and bring the community together to advance the SDGs using AI.
Watch the latest #AIforGood videos!
Stay updated and join our weekly AI for Good newsletter:
Check out the latest AI for Good news:
Explore the AI for Good blog:
Connect on our social media:
Disclaimer:
The views and opinions expressed are those of the panelists and do not reflect the official policy of the ITU.