Interpretable Deep Learning - Deep Learning in Life Sciences - Lecture 05 (Spring 2021)

6.874/6.802/20.390/20.490/HST.506 Spring 2021 Prof. Manolis Kellis
Deep Learning in the Life Sciences / Computational Systems Biology

0:00 Lecture outline
3:08 Interpretability: definition, importance
10:30 Interpretability: ante-hoc vs. post-hoc
18:26 Interpreting models: Weight visualization
22:20 Interpreting models: Surrogate model
24:14 Interpreting models: Activation Maximization / Data generation
34:26 Interpreting models: Example-based
39:36 Interpreting decisions
42:24 Interpreting decisions: Example based
45:39 Interpreting decisions: Attribution methods
1:01:17 Interpreting decisions: Gradient based
1:08:55 Interpreting decisions: Backprop-based
1:13:23 Evaluating attributions
1:14:15 Evaluating attributions: Coherence
1:15:30 Evaluating attributions: Class sensitivity
1:16:20 Evaluating attributions: Selectivity
1:19:45 Evaluating attributions: Remove and retrain/keep and retrain
1:21:15 Lecture summary
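For the gradient-based attribution segment (1:01:17), a minimal NumPy sketch of vanilla gradient saliency: attribute each input feature by the derivative of the class score with respect to that feature. The toy one-hidden-layer ReLU network and its random weights here are stand-ins, not the lecture's model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-hidden-layer ReLU network with random stand-in weights.
W1 = rng.standard_normal((8, 4))
W2 = rng.standard_normal(8)

def score(x):
    # Scalar class score for input x.
    return W2 @ np.maximum(W1 @ x, 0.0)

def gradient_attribution(x):
    # Vanilla gradient saliency: d(score)/d(input) via the chain rule.
    h = W1 @ x
    mask = (h > 0).astype(float)   # ReLU gates act as on/off switches
    return (W2 * mask) @ W1        # equals W1^T (W2 * mask)

x = rng.standard_normal(4)
attr = gradient_attribution(x)     # one attribution value per input feature
```

A finite-difference check on `score` recovers the same values, which is a quick sanity test for any hand-derived attribution.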
Comments

Accidentally watching one of Manolis lectures twice: priceless

theworldsonfire.

I’ve been watching a few lectures from this series. Thank you for the incredible value you provide through these lectures!

DannyJay__

Such a nice overview of interpretable ML. I love you!!! Thanks for sharing!

何雪凝

1:13:12 what’s it called when you add backpropagation to guided backprop?

theworldsonfire.

I can’t stop seeing smoothgrad beetle as a chihuahua (or muffin)

theworldsonfire.
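SmoothGrad, which the comment above jokes about, averages vanilla-gradient saliency maps over many noise-perturbed copies of the input to reduce visual noise. A minimal sketch with a stand-in score function (score(x) = x·x, so the true gradient is 2x); the sample count and noise scale are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

def score_grad(x):
    # Gradient of the stand-in score(x) = x . x
    return 2.0 * x

def smoothgrad(x, n_samples=50, sigma=0.1):
    # Average the gradient over noisy copies of the input.
    grads = [score_grad(x + rng.normal(0.0, sigma, x.shape))
             for _ in range(n_samples)]
    return np.mean(grads, axis=0)

x = np.array([0.5, -1.0, 2.0])
sg = smoothgrad(x)   # close to the clean gradient 2*x, but denoised
```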

Just stumbled upon it by accident. Really great stuff! I see that these are focused on classification or computer vision. Any suggestions on techniques for reinforcement learning (say, deep Q-learning), RNNs, or supervised regression problems?

hariomt

24:44 were there multiple gooses (I know, but I love it) in that image? Or did it spit that image out after only getting fed images of one goose? Why does that image remind me of psychedelics?

theworldsonfire.

May I have access to the slides? Just to save time. Otherwise I’ll just collage the screenshots I’ve been taking, but it would be nice to have full-resolution slides to review.

theworldsonfire.

I don’t know what it is that we must interpret (is it a graph the machine spits out?) but I’m excited to look at it!

theworldsonfire.

One-sided Parametric ReLU is somewhat known in conventional neural network research. If you had a two-sided Parametric ReLU you could have a zero-curvature initialization. Results from unconventional neural networks indicate faster training and less residual noise in the net (smooth decision boundaries).
You set the slopes of both sides of the two-sided Parametric ReLU to 1.
You could then set one weight per neuron to 1, so data moves unchanged layer to layer. I don't think the training algorithms of conventional nets would like that.
Instead you could copy the terms of the fast Hadamard transform into the weights of the neurons in a layer. Since the transform is self-inverse, 2 layers together will leave the data unchanged.

hoaxuan
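A small NumPy sketch of the construction the comment above describes: a two-sided parametric ReLU initialized to the identity (both slopes set to 1), plus a normalized Hadamard matrix, which is self-inverse, so two such layers pass data through unchanged. Function names, shapes, and the Sylvester construction used here are illustrative assumptions, not the commenter's exact setup:

```python
import numpy as np

def two_sided_prelu(x, pos_slope=1.0, neg_slope=1.0):
    # Two learnable slopes; setting both to 1 makes the activation the
    # identity -- the "zero curvature" initialization the comment mentions.
    return np.where(x > 0, pos_slope * x, neg_slope * x)

def hadamard(n):
    # Sylvester construction of the n x n Hadamard matrix (n a power of 2),
    # normalized by 1/sqrt(n) so that H @ H = I (self-inverse).
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H / np.sqrt(n)

x = np.array([1.0, -2.0, 3.0, -4.0])
H = hadamard(4)
# Two Hadamard "layers" with identity-initialized activations
# leave the data unchanged: H @ (H @ x) = x.
y = two_sided_prelu(H @ two_sided_prelu(H @ x))
```

(A production fast Walsh-Hadamard transform would use the O(n log n) butterfly rather than an explicit matrix; the dense matrix keeps the self-inverse property easy to see.)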

“Who’s excited to look inside brains of neural networks”



I can patch that hole in the ceiling.

theworldsonfire.

ReLU is a switch. You can make a decision tree on the switch states. Or you can have a net from input to human concepts and a net from human concepts to the wanted output. I think CLIP accidentally does something like that.
Anyway: AI462 Neural Networks on Google, Ankit Patel Breaking Bad on YT.

hoaxuan

Can we feed the NN pictures of math in hopes of figuring out why math feels so abrasive to some? I see the magic that math makes happen but what if there’s a smoother version out there somewhere.

theworldsonfire.

I miss you, red cursor. Don’t stay gone too long.

theworldsonfire.

Mistaking a deer for a tank: no big deal (hope it didn’t have kids at home though 😬)
Mistaking a tank for a deer because the tank’s outer LCD layer knew what pixels to change: big deal 😬😬

theworldsonfire.

“Who’s following what I’m saying”

(-♾, ♾)

theworldsonfire.