Reliable Interpretability - Explaining AI Model Predictions | Sara Hooker @PyBay2018

Abstract

How can we explain how deep neural networks arrive at decisions? Their feature representations are complex and opaque to the human eye; instead, a set of interpretability tools infers what the model has learned by looking at which inputs it pays attention to. This talk introduces some of the challenges involved in identifying these salient inputs and discusses the desirable properties such methods should fulfill in order to build trust between humans and algorithms.
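
For illustration only, here is a minimal sketch of one such interpretability tool: a vanilla-gradient saliency map, which attributes a prediction to the input pixels the model is most sensitive to. The tiny CNN and random input below are placeholders and are not from the talk; any differentiable classifier could be substituted.

# Minimal vanilla-gradient saliency sketch (illustrative, not the talk's code).
import torch
import torch.nn as nn

# Placeholder classifier; in practice, swap in a real trained model.
model = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(8, 10),
)
model.eval()

# Placeholder input: one random 32x32 RGB "image" that we track gradients for.
image = torch.rand(1, 3, 32, 32, requires_grad=True)

# Forward pass, then backpropagate the predicted class score to the pixels.
logits = model(image)
score = logits[0, logits.argmax(dim=1).item()]
score.backward()

# Saliency = max absolute gradient over colour channels at each pixel;
# larger values mark inputs the prediction is most sensitive to.
saliency = image.grad.abs().max(dim=1).values.squeeze(0)  # shape (32, 32)
print(saliency.shape)

A key point of the talk is that such saliency estimates can be fragile, so a map like this should be treated as a hypothesis about what the model attends to, not a guarantee.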

Speaker Bio

Sara Hooker is a researcher at Google Brain working on reliable explanations of model predictions for black-box deep learning models. Her main research interests are interpretability, model compression, and security. In 2014, she founded Delta Analytics, a non-profit dedicated to building technical capacity so that non-profits around the world can use machine learning for good. She grew up in Africa, in Mozambique, Lesotho, Swaziland, South Africa, and Kenya; her family now lives in Monrovia, Liberia.

This and other PyBay2018 videos are brought to you by our Gold Sponsor Cisco!

Comments

Very thought-provoking! Thanks for sharing.

-beee-

Great talk. Just a wild thought: what if the complexity of interpretability convinces us to build models in a cycle, like (train a layer, understand it, train the next layer, understand the last two layers, train the next layer, ...)?

janzaibmbaloch