Lesson 4: Practical Deep Learning for Coders

COLLABORATIVE FILTERING, EMBEDDINGS, AND MORE

When we ran this class at the Data Institute, we asked what students were having the most trouble understanding, and one of the answers that kept coming up was “convolutions”. So we start with a detailed look at the convolution operation, implementing a couple of convolutional layers and filters in a spreadsheet. Next up, we give SGD (and modern accelerated variants) the same treatment. Once you’ve seen how simple accelerated SGD methods are, try reading the original papers: notice how even the most complex deep learning papers tend to look simple once you’ve digested and implemented them.
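
To make the spreadsheet walkthroughs concrete outside Excel, here is a minimal sketch in plain NumPy (not the course's own code; the filter values, toy loss, and hyperparameters are all invented for illustration) of a single convolution filter and an SGD-with-momentum update:

```python
import numpy as np

def conv2d_valid(image, kernel):
    """One convolution filter, computed exactly as in the spreadsheet:
    each output cell is the sum of an elementwise product between the
    kernel and the image patch under it (no padding, stride 1)."""
    h, w = image.shape
    k = kernel.shape[0]
    out = np.zeros((h - k + 1, w - k + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + k, j:j + k] * kernel)
    return out

# A 3x3 "top edge" detector, similar in spirit to the spreadsheet filters.
top_edge = np.array([[ 1.,  1.,  1.],
                     [ 0.,  0.,  0.],
                     [-1., -1., -1.]])

image = np.random.rand(28, 28)               # stand-in for a 28x28 digit
print(conv2d_valid(image, top_edge).shape)   # (26, 26)

# SGD with momentum on a toy one-parameter loss L(w) = (w - 3)^2.
w, v = 0.0, 0.0
lr, beta = 0.1, 0.9                  # made-up hyperparameters
for _ in range(200):
    grad = 2 * (w - 3)               # dL/dw
    v = beta * v - lr * grad         # velocity: running blend of gradients
    w += v
print(round(w, 4))                   # ~3.0, the minimum of the loss
```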

Then we look further into avoiding overfitting, and learn two clever tricks for datasets where you have a lot more unlabeled data than labeled data (that is, semi-supervised learning scenarios): “pseudo-labeling” and “knowledge distillation”.
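
As a rough sketch of the pseudo-labeling idea (not the code from the lesson; the model choice, data, and mixing ratio here are all invented for illustration): train on the labeled set, use the model's own predictions as labels for the unlabeled pool, then retrain on the mixture.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical data: a small labeled set and a much larger unlabeled pool.
X_labeled = rng.normal(size=(100, 20))
y_labeled = (X_labeled[:, 0] > 0).astype(int)
X_unlabeled = rng.normal(size=(5000, 20))

# 1. Fit an initial model on the labeled data only.
model = LogisticRegression().fit(X_labeled, y_labeled)

# 2. Predict "pseudo-labels" for the unlabeled pool.
pseudo_labels = model.predict(X_unlabeled)

# 3. Retrain on a mixture of true and pseudo-labeled examples. Keeping the
#    pseudo-labeled fraction modest (a judgment call, not a rule from the
#    lesson) stops the model's own mistakes from dominating training.
idx = rng.choice(len(X_unlabeled), size=300, replace=False)
X_mix = np.vstack([X_labeled, X_unlabeled[idx]])
y_mix = np.concatenate([y_labeled, pseudo_labels[idx]])
model = LogisticRegression().fit(X_mix, y_mix)
```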

Finally, we move away from computer vision for the first time to discuss recommendation systems, and in particular collaborative filtering techniques. This is both a useful technique in itself, and a great introduction to embeddings, which will be critical when we learn about natural language processing in the next lesson.
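
The lesson builds this in a deep learning framework, but the core idea fits in a few lines of plain NumPy (a sketch with invented sizes and data, and no bias terms): give every user and every movie a learned embedding vector, predict a rating as their dot product, and fit the embeddings with the same SGD covered earlier.

```python
import numpy as np

rng = np.random.default_rng(1)
n_users, n_movies, n_factors = 100, 200, 5

# Hypothetical (user, movie, rating) triples standing in for a ratings set.
ratings = [(rng.integers(n_users), rng.integers(n_movies), rng.uniform(1, 5))
           for _ in range(2000)]

# One embedding vector per user and per movie, learned from scratch.
user_emb = rng.normal(scale=0.1, size=(n_users, n_factors))
movie_emb = rng.normal(scale=0.1, size=(n_movies, n_factors))

lr = 0.05
for epoch in range(20):
    sq_err = 0.0
    for u, m, r in ratings:
        pred = user_emb[u] @ movie_emb[m]   # dot product of two embeddings
        err = pred - r
        sq_err += err ** 2
        grad_u = err * movie_emb[m]   # gradients of the squared error
        grad_m = err * user_emb[u]    # (constant factor folded into lr)
        user_emb[u] -= lr * grad_u
        movie_emb[m] -= lr * grad_m
    print(epoch, (sq_err / len(ratings)) ** 0.5)   # training RMSE per epoch
```
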
Comments

0:00 - CNN review (excel)
11:28 - SGD (excel)
11:43 - CNN/SGD Q&A
26:31 - Visualizing SGD in 2D and 3D
28:53 - Visualizing and explaining Momentum in 3D
32:20 - Momentum
34:35 - Dynamic Learning Rates and Adagrad
41:15 - RMSprop
46:14 - Adam
49:00 - Eve
53:52 - Jeremy's approach to automatic learning rate annealing
56:57 - Jeremy's solution to Kaggle's "State Farm Distracted Driver Detection"
1:22:50 - Introduction to Semi-Supervised Learning
1:23:45 - Pseudo-Labeling
1:25:35 - Jeremy's Kaggle solution Q&A
1:36:01 - Collaborative Filtering
1:51:45 - Collaborative Filtering Q&A
1:58:26 - Collaborative Filtering (continued)

MatthewKleinsmith

The explanation of gradient descent starting from 16:21 is absolutely genius in its simplicity. Thanks so much for this course, Jeremy.

lextmb

The single greatest explanation of SGD on YouTube.

dimitriyzyunkin

It just feels awesome to see how Jeremy pours out his ideas and easily reaches state of the art in Kaggle competitions and other machine learning fields. A great inspiration! Thank you for sharing these!

puvrhef

The value "0.7959" in your result is MSE, not RMSE; its RMSE is 0.89, which is approximately the state of the art, as you yourself report.
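
(For reference, the conversion the commenter is doing is just a square root, since RMSE = √MSE:)

```python
import math
print(math.sqrt(0.7959))   # 0.8921..., i.e. the 0.89 RMSE quoted above
```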

kamalb

Thank you very much for making this so simple to understand.

psoma

Nice tutorial! Is it possible to get the spreadsheet somehow?

hendrik

3:36 I think it is 2 rows less than the starting size, e.g. 28×28 will become 26×26.
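
(That arithmetic is right for a 3×3 filter with no padding: a valid convolution of an n×n input with a k×k kernel produces an output of size (n − k + 1)×(n − k + 1), so 28 − 3 + 1 = 26.)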

probhakarsarkar

Where can I get the Excel file? Thanks a lot.

sorin

Where can I get the Excel sheet used in the video?

abrarshaikh

Do you have the Excel spreadsheet so I can look at the equations?
