Lesson 2: Practical Deep Learning for Coders

CONVOLUTIONAL NEURAL NETWORKS

For last week’s assignment your goal was to get into the top 50% of the Kaggle Dogs v Cats competition. This week, Jeremy shows his answer to this assignment. It’s a good idea to spend a few hours giving the assignment your best shot, prior to watching this lesson, since it’s the process of trying, failing, and trying again that is the basis of learning the practical skills needed to be a deep learning practitioner.

After showing how to submit a successful entry to this competition, we learn some critical details about the loss function most commonly used for classification projects, and see how to use visualization to understand where your model is succeeding and failing.
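For reference, the loss function in question is log loss (binary cross-entropy), the same metric Kaggle uses to score this competition. A minimal NumPy sketch, not the notebook's exact code; the clipping constant is an illustrative choice to keep the loss finite:

```python
import numpy as np

def binary_log_loss(y_true, y_pred, eps=1e-15):
    """Mean binary cross-entropy, the Dogs v Cats Redux metric."""
    y_true = np.asarray(y_true, dtype=float)
    # Clip predictions away from 0 and 1 so a single confident
    # wrong answer cannot produce an infinite loss.
    y_pred = np.clip(np.asarray(y_pred, dtype=float), eps, 1 - eps)
    return -np.mean(y_true * np.log(y_pred)
                    + (1 - y_true) * np.log(1 - y_pred))

# A confidently wrong prediction is punished far harder than a hedged one:
print(binary_log_loss([1], [0.02]))  # ~3.91
print(binary_log_loss([1], [0.40]))  # ~0.92
```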

In the second half of the lesson, we dig into the details of CNNs and fine-tuning. We discuss why we normally want to start with a pre-trained network rather than with random weights, and see how fine-tuning keeps the layers that contain features useful for our model while updating the weights of the layers that are less suitable. We develop an understanding of how and why fine-tuning works, including learning about three of the key foundations of neural networks (a short code sketch follows the list):

* Dense (or “fully connected”) layers
* Stochastic gradient descent (SGD)
* Activation functions (or “non-linearities”)
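To see how those three pieces fit together, here is a rough fine-tuning sketch in Keras. It uses tf.keras's bundled VGG16 rather than the course's own vgg16.py wrapper, so the exact calls and hyperparameters are illustrative, not the lesson's code:

```python
from tensorflow.keras import Sequential
from tensorflow.keras.applications import VGG16
from tensorflow.keras.layers import Dense
from tensorflow.keras.optimizers import SGD

# Start from ImageNet weights rather than random initialization
# (the weights are downloaded on first use).
base = VGG16(weights="imagenet", include_top=True)

# Keep every layer except the final 1000-way ImageNet classifier,
# freezing them so their learned features are preserved.
model = Sequential()
for layer in base.layers[:-1]:
    layer.trainable = False
    model.add(layer)

# A fresh dense (fully connected) layer with a softmax non-linearity
# for our two classes, trained with stochastic gradient descent.
model.add(Dense(2, activation="softmax"))
model.compile(optimizer=SGD(learning_rate=0.01),
              loss="categorical_crossentropy",
              metrics=["accuracy"])
```

Only the new dense layer's weights are updated during training; the frozen layers keep the ImageNet features that transfer to our task.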
Comments

0:09 - Teaching Approach
5:22 - How to Ask For Help (Tips)
7:10 - How to Ask For Help (Example)
8:30 - Class Resources: Wiki
9:55 - Class Resources: Forum
10:25 - Class Resources: Slack
11:20 - Class Survey
17:14 - Solution to Dogs vs Cats Redux Competition
17:30 - Downloading the Data
20:00 - Planning (Overview of Tasks)
20:25 - Preparing the Data (Validation and Training Set)
22:15 - Using Vgg16 (Finetune and Train)
22:48 - Submitting to Kaggle
30:30 - Competition Evaluation Metric: Log Loss
37:18 - Experiment: Running More Epochs
40:37 - Visualizing Results
47:37 - Introducing the Kaggle State Farm Competition
50:29 - Question: Will ImageNet Finetuning Approach work for CT Scans?
53:10 - Lesson 0 Video, Convolutions
54:09 - Why do we do finetuning?
54:43 - What do CNNs learn?
1:03:30 - Deep Neural Network in Excel
1:07:54 - Initialization
1:14:08 - Linear Model from Scratch
1:15:10 - Loss function
1:15:49 - Update function
1:24:40 - Question: What if you don't know the derivatives of functions?
1:25:37 - Linear Model in Keras
1:29:58 - Linear Model with CNN Features for Dogs Vs Cats Redux
1:44:12 - Introducing Activation Functions
1:46:51 - Universal Approximation Theorem
1:48:20 - Review: Vgg16 Finetuning

MatthewKleinsmith

This lesson simply opened my eyes to the ML world. Jeremy explains it in such a simple way that I envy him so much.

xixiaofin

I got into the top 50% for lesson 1, as expected from the video description. I had to combine the lesson 1 notebook with the redux notebook to run predictions on the test batch and export a CSV file for Kaggle. The test1 directory needed a dummy subdirectory (i.e. 'unknown') added, because the dogscats.zip file that was provided didn't have one (Jeremy does it correctly in this video using his redux notebook). The AWS server needs to be restarted if the GPU isn't being used, as it's a bit flaky (noticeable in the notebook after running a cell). There are no utils or vgg zip files; they are individual Python files. Thanks for these lessons. Looking forward to improving the model!
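For anyone hitting the same issues, here is a rough sketch of the two fixes described above (the dummy class directory and the CSV export). Paths, function names, and the 0.05/0.95 clipping are illustrative assumptions, not the notebook's exact code:

```python
import os
import shutil
import numpy as np
import pandas as pd

def add_dummy_class(test_dir, class_name='unknown'):
    """Move the unlabelled test images into a single dummy class
    subdirectory, since Keras' flow_from_directory expects one
    subdirectory per class."""
    dummy = os.path.join(test_dir, class_name)
    os.makedirs(dummy, exist_ok=True)
    for fname in os.listdir(test_dir):
        src = os.path.join(test_dir, fname)
        if os.path.isfile(src):
            shutil.move(src, os.path.join(dummy, fname))

def write_submission(filenames, dog_probs, out_path='submission.csv'):
    """Build a Kaggle submission: 'id' parsed from each image filename
    (e.g. 'unknown/1234.jpg' -> 1234) and 'label' holding P(dog)."""
    ids = [int(os.path.splitext(os.path.basename(f))[0]) for f in filenames]
    # Clip overconfident predictions to limit log-loss blowups on the
    # few images the model gets badly wrong.
    labels = np.clip(dog_probs, 0.05, 0.95)
    pd.DataFrame({'id': ids, 'label': labels}).to_csv(out_path, index=False)
```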

WillKriski

Small observation about log loss: in the dogs_cats_redux.ipynb notebook, log loss is computed with sklearn.metrics.log_loss, which according to the docs uses base e, while the Excel example at 32:40 uses base 10. I assume the competition uses base e.
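This is easy to verify: sklearn.metrics.log_loss does use the natural logarithm, which matches Kaggle's definition, so a base-10 spreadsheet will read low by a factor of ln(10) ≈ 2.303. A quick sketch of the check:

```python
import numpy as np
from sklearn.metrics import log_loss

y_true = [1, 0, 1, 1]
y_pred = [0.9, 0.2, 0.7, 0.6]

ll_e = log_loss(y_true, y_pred)   # sklearn: natural log (base e)
ll_10 = ll_e / np.log(10)         # the same loss expressed in base 10

print(ll_e, ll_10)  # the base-10 figure is smaller by a factor of ~2.303
```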

altvali

Note: today I got a Cats v Dogs Redux score of 0.11174, ranking 580th on Kaggle. The world has changed since Jeremy's video :)

xixiaofin

How do you enable code folding in Jupyter notebooks? I really love that feature.

darshanfofadiya

Does anyone know where I can find the "Optim Tutorial" notebook used in the video at 1:18:11? I couldn't find it in the GitHub repo.

rrejeet

Amazing lecture. But just listening and typing in the code won't help you understand it. I had to read the source code of those functions and dig through the Keras documentation and other resources to understand the logic behind it. The fine-tuning part is still a bit shaky for me, so I'll probably have to go through it a couple more times to really understand the logic.

kinlam

Anyone know what CSS Jeremy is using for his notebooks?

Something

At around 1:18, I'm confused by "x * dy/db". Is it just a typo for "= x * dy"?

Assuming the loss is (y - (a*x + b))**2:

db = d/db [(y - (a*x + b))**2] = 2*(a*x + b - y)

da = d/da [(y - (a*x + b))**2]
   = 2*x*(a*x + b - y)
   = x * 2*(a*x + b - y)
   = x * db
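For what it's worth, the algebra checks out symbolically; a small sketch with sympy (variable names mirror the comment above):

```python
import sympy as sp

x, y, a, b = sp.symbols('x y a b')
loss = (y - (a * x + b)) ** 2

db = sp.diff(loss, b)   # 2*(a*x + b - y)
da = sp.diff(loss, a)   # 2*x*(a*x + b - y)

# Prints 0, confirming da == x * db as derived above.
print(sp.simplify(da - x * db))
```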

jacola

Hi Jeremy, how can I get an invitation to the Slack channel?

camilohjimenez

Did not follow ;( ...so many things were just skimmed over and run through...

debuin