Lesson 6: Deep Learning 2019 - Regularization; Convolutions; Data ethics

Today we discuss some powerful techniques for improving training and avoiding over-fitting:
- *Dropout*: remove activations at random during training in order to regularize the model
- *Data augmentation*: modify model inputs during training in order to effectively increase data size
- *Batch normalization*: adjust the parameterization of a model in order to make the loss surface smoother (a minimal sketch of all three follows)
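
The sketch below shows where each technique plugs in, using plain PyTorch rather than the course's fastai notebooks; the layer sizes, dropout probability, and transform parameters are illustrative assumptions, not values from the lesson.

```python
import torch
import torch.nn as nn
from torchvision import transforms

# Dropout and batch norm are layers inside the model (sizes are illustrative).
head = nn.Sequential(
    nn.Linear(512, 256),
    nn.BatchNorm1d(256),  # batch norm: re-parameterize activations to smooth the loss surface
    nn.ReLU(),
    nn.Dropout(p=0.5),    # dropout: zero activations at random during training
    nn.Linear(256, 10),
)

# Data augmentation modifies the inputs, not the model: random transforms
# applied only at training time effectively enlarge the dataset.
train_tfms = transforms.Compose([
    transforms.RandomHorizontalFlip(),
    transforms.RandomRotation(10),
    transforms.ToTensor(),
])

head.train()  # dropout and batch norm behave differently in train vs. eval mode
x = torch.randn(4, 512)
print(head(x).shape)  # torch.Size([4, 10])
```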

Next up, we'll learn all about *convolutions*, which can be thought of as a variant of matrix multiplication with tied weights, and are the operation at the heart of modern computer vision models (and, increasingly, other types of models too).
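
To make the "matrix multiplication with tied weights" view concrete, here is a plain-PyTorch sketch (not code from the lesson; the tensor sizes are arbitrary assumptions): every 3x3 patch of the image is multiplied by the same nine weights, and the result matches `F.conv2d`.

```python
import torch
import torch.nn.functional as F

# A 3x3 convolution expressed as a matrix multiply: the same 9 weights
# are reused ("tied") at every spatial position.
x = torch.randn(1, 1, 8, 8)      # one 8x8 single-channel image
w = torch.randn(1, 1, 3, 3)      # one 3x3 kernel

# im2col: unfold extracts every 3x3 patch as a column -> shape (1, 9, 36)
patches = F.unfold(x, kernel_size=3)

# one weight vector multiplied against all 36 patches: the tied weights
out = w.view(1, 1, 9) @ patches  # shape (1, 1, 36)
out = out.view(1, 1, 6, 6)       # fold back into a 6x6 feature map

# matches PyTorch's own convolution
print(torch.allclose(out, F.conv2d(x, w), atol=1e-5))  # True
```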

We'll use this knowledge to create a *class activation map*, which is a heat-map that shows which parts of an image were most important in making a prediction.
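
A rough sketch of the idea in plain PyTorch (the lesson itself works in fastai; the hook target and tensor shapes here are assumptions based on a standard ResNet): grab the final convolutional feature map with a forward hook, then average it over channels to get a coarse heat-map.

```python
import torch
from torchvision import models

model = models.resnet18(weights=None).eval()  # untrained stand-in; use trained weights in practice

acts = {}
def hook(module, inp, out):
    acts["feat"] = out.detach()

# layer4 is ResNet-18's last conv block; for a 224x224 input it outputs (N, 512, 7, 7)
model.layer4.register_forward_hook(hook)

x = torch.randn(1, 3, 224, 224)  # stand-in for a preprocessed image
with torch.no_grad():
    model(x)

heatmap = acts["feat"][0].mean(0)  # average the 512 channel maps -> a (7, 7) heat-map
print(heatmap.shape)               # upsample over the original image to visualize
```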

Finally, we'll cover a topic that many students have told us is the most interesting and surprising part of the course: data ethics. We'll learn about some of the ways in which models can go wrong, with a particular focus on *feedback loops*, why they cause problems, and how to avoid them. We'll also look at ways in which bias in data can lead to biased algorithms, and discuss questions that data scientists can and should be asking to help ensure that their work doesn't lead to unexpected negative outcomes.

Comments

Forest:
I deeply enjoyed the ethics part at the end of the video.

HaithamSweity:
Thank you for including the discussion on ethics and AI! This is an eye-opener.

lkd:
44:14 "because the stuff that everybody talks about generally turns out to be not very interesting..." I love that kind of talk; it gives me faith in humanity [the view of progress as driven by individual idiosyncrasy, my favorite] :) Thanks for this course, it is the best.

kevalan:
Why Batch Normalization actually works: 45:01

touchyto:
If I do model[0], I get a traceback: 'Resnet18_Model' object is not subscriptable. Why?

kevalan:
I agree that AI researchers should think about what happens downstream of their work. However, much of it is outside their control. If fastai is used for evil, should Jeremy be held responsible? Obviously not! You don't control how people use it; the best you can do is document the caveats of your work.

vladimirgetselevich:
It seems to me that in the textual domain there is no lack of data for training, so there is probably no need for data augmentation.

jonatani:
1:15:06 Why does J follow H? Wasn't that supposed to be I?

Daniel-zxub:
Does anyone know the paper for the heatmap part?

lkd:
Powerful stuff, Platform.ai! 2:04: "they all have to be in a single folder" - methinketh, thou protesteth too much!