If We Want AI to be Interpretable, We Need to Measure Interpretability with Jordan Boyd-Graber, PhD

Discover how AI can be transformed from a mysterious black box into a transparent tool by measuring its interpretability. In this talk, Jordan Boyd-Graber, PhD, argues that if we want AI to be interpretable, we need metrics for interpretability, and introduces two: one for unsupervised methods and one for supervised methods. Learn about the "intruder" interpretability metric for topic models and a multi-armed bandit approach to optimizing the explanations a question-answering system shows its users. Dr. Boyd-Graber also discusses broader applications of these methods in domains such as fact-checking, translation, and web search. Don't miss this deep dive into the future of AI interpretability.
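
To give a flavor of the "intruder" metric for topic models, here is a minimal sketch of the word-intrusion evaluation: show people a topic's top words plus one out-of-place "intruder" word, and score the topic by how often the intruder is spotted. The topic words, the simulated judge, and the helper names below are illustrative assumptions, not code from the talk.

```python
# Sketch of the word-intrusion ("intruder") evaluation for topic models.
# If a topic is coherent, the planted intruder word is easy to spot,
# so a high detection rate signals an interpretable topic.
import random

def make_intrusion_task(topic_top_words, other_topic_words, rng):
    """Build one question: 5 top words from a topic plus 1 intruder
    drawn from a different topic, shuffled together."""
    intruder = rng.choice([w for w in other_topic_words
                           if w not in topic_top_words])
    words = topic_top_words[:5] + [intruder]
    rng.shuffle(words)
    return words, intruder

def model_precision(tasks, judge):
    """Fraction of tasks where the judge (a human annotator in the real
    evaluation) picks the true intruder; higher = more interpretable."""
    correct = sum(judge(words) == intruder for words, intruder in tasks)
    return correct / len(tasks)

rng = random.Random(0)
topic = ["fish", "sea", "boat", "net", "water", "catch"]   # hypothetical topic
other = ["election", "vote", "senate", "campaign"]          # intruder pool
tasks = [make_intrusion_task(topic, other, rng) for _ in range(100)]
# Toy stand-in for a human judge: always spots the political word.
judge = lambda words: next(w for w in words if w in other)
print(model_precision(tasks, judge))  # 1.0 for this clearly coherent topic
```

The design point is that interpretability is measured behaviorally, by whether people can actually use the topic, rather than by an internal model statistic.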
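
For the supervised side, here is a minimal sketch of a multi-armed bandit choosing which explanation style to show users of a question-answering system, assuming binary feedback (did the explanation help the user answer correctly?). Thompson sampling stands in for whichever bandit algorithm the talk uses, and the arm names and help rates are made-up illustrations.

```python
# Sketch: Thompson sampling over candidate explanation styles for a QA UI.
import random

arms = ["highlight_evidence", "show_confidence", "show_similar_questions"]
true_help_rate = {"highlight_evidence": 0.65,      # hypothetical values a
                  "show_confidence": 0.40,          # real deployment would
                  "show_similar_questions": 0.55}   # only learn from users
# Beta(1, 1) prior on each arm's probability of helping the user.
successes = {a: 1 for a in arms}
failures = {a: 1 for a in arms}

rng = random.Random(0)
for _ in range(5000):
    # Sample a plausible help rate per arm; show the best-sampled explanation.
    arm = max(arms, key=lambda a: rng.betavariate(successes[a], failures[a]))
    helped = rng.random() < true_help_rate[arm]  # simulated user feedback
    successes[arm] += helped
    failures[arm] += not helped

for a in arms:
    print(a, successes[a] + failures[a] - 2, "plays")
# Over time, nearly all plays concentrate on the most helpful explanation.
```

The bandit framing matters because it optimizes explanations against measured human performance, echoing the talk's thesis that interpretability must be measured, not assumed.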

#MachineLearning #DeepLearning #NLP #NaturalLanguageProcessing #AI #ArtificialIntelligence #DataScience #DataVisualization #ReinforcementLearning #MLTraining #DataEngineering #DataViz #ODSC

Timecodes:
0:00 - Intro
1:23 - AI Should be Interpretable
12:12 - We Should Measure Interpretability
26:02 - Proposal for Unsupervised Methods (Topic Models)
31:29 - Proposal for Supervised Methods (Question Answering/Translation)
