CVPR18: Tutorial: Part 1: Interpretable Machine Learning for Computer Vision

Organizers: Bolei Zhou
Laurens van der Maaten
Been Kim
Andrea Vedaldi

Description: Complex machine learning models such as deep convolutional neural networks and recursive neural networks have made great progress in a wide range of computer vision applications, such as object/scene recognition, image captioning, and visual question answering. But they are often perceived as black boxes. As models grow deeper in search of better recognition accuracy, it becomes even harder to understand the predictions they make and why. This tutorial aims to broadly engage the computer vision community with the topic of interpretability and explainability in models used in computer vision. We will introduce the definition of interpretability and why it is important, and review visualization and interpretation methodologies for analyzing both the data and the models in computer vision.
Schedule:
1400 Welcome & Overview
1410 Introduction to Interpretable Machine Learning, Been Kim
1450 Dos and Don'ts of Using t-SNE to Understand Vision Models, Laurens van der Maaten
1530 Afternoon Break
1615 Revisiting the Importance of Single Units in Deep Networks, Bolei Zhou
1655 Understanding Deep Networks Using Natural Pre-Images, Meaningful Perturbations, and Vector Embeddings, Andrea Vedaldi
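As a concrete taste of the visualization topics above, here is a minimal sketch of the kind of analysis covered in the t-SNE talk: projecting high-dimensional deep features into 2-D for inspection. The feature matrix here is random data standing in for real CNN activations, and the parameter choices are illustrative assumptions, not recommendations from the talk.

```python
# Minimal sketch: embedding deep-feature vectors with t-SNE (scikit-learn).
# Random vectors stand in for real pooled CNN activations.
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)
features = rng.normal(size=(200, 512))  # 200 images x 512-d features (placeholder data)

# Perplexity must be smaller than the number of samples;
# smaller values emphasize local neighborhood structure.
tsne = TSNE(n_components=2, perplexity=30, init="pca", random_state=0)
embedding = tsne.fit_transform(features)  # shape (200, 2), ready for scatter-plotting
print(embedding.shape)
```

The resulting 2-D points can be scatter-plotted and colored by class label to see whether the network's features separate the classes; the talk's "dos and don'ts" concern how sensitive such plots are to choices like perplexity.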