Stanford Seminar - ML Explainability Part 2 I Inherently Interpretable Models

Professor Hima Lakkaraju presents some of the latest advancements in machine learning models that are inherently interpretable, such as rule-based models, risk scores, generalized additive models, and prototype-based models.
#machinelearning
0:00 Introduction
0:06 Inherently Interpretable Models
4:41 Bayesian Rule Lists: Generative Model
8:39 Pre-mined Antecedents
11:09 Interpretable Decision Sets: Desiderata
12:16 IDS: Objective Function
15:47 IDS: Optimization Procedure
16:46 Risk Scores: Examples
18:59 Objective function to learn risk scores
21:54 Generalized Additive Models (GAMs)
23:00 Formulation and Characteristics of GAMs
30:04 Prototype Selection for Interpretable Classification
34:00 Prototype Layers in Deep Learning Models
40:01 Attention Layers in Deep Learning Models
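One of the model classes in the chapter list above is generalized additive models (GAMs), which are interpretable because the prediction decomposes into one shape function per feature. As a minimal, hypothetical sketch (not code from the seminar), the following fits a GAM of the form y = b + f1(x1) + f2(x2), with each f_j a cubic polynomial estimated by least squares:

```python
import numpy as np

# Toy data: true model is y = sin(x1) + 0.5 * x2^2 + noise.
rng = np.random.default_rng(0)
n = 500
x1 = rng.uniform(-2, 2, n)
x2 = rng.uniform(-2, 2, n)
y = np.sin(x1) + 0.5 * x2**2 + rng.normal(0, 0.1, n)

def basis(x):
    # Cubic polynomial basis for one feature (shared intercept added once below).
    return np.column_stack([x, x**2, x**3])

# Stack the per-feature bases and solve a single least-squares problem.
X = np.column_stack([np.ones(n), basis(x1), basis(x2)])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)

# Each feature's learned shape function can be evaluated (and plotted)
# on its own -- this per-feature decomposition is what makes GAMs
# inherently interpretable.
f1 = basis(x1) @ coef[1:4]  # estimated contribution of x1
f2 = basis(x2) @ coef[4:7]  # estimated contribution of x2
pred = coef[0] + f1 + f2

r2 = 1 - np.var(y - pred) / np.var(y)
print("R^2:", r2)
```

Plotting `f1` against `x1` (and `f2` against `x2`) shows exactly how each feature drives the prediction, which is the interpretability property the lecture segment discusses.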