Jeremy Howard on Platform.ai and Fast.ai (Full Stack Deep Learning - March 2019)

New course announcement ✨

We're teaching an in-person LLM bootcamp in the SF Bay Area on November 14, 2023. Come join us if you want to see the most up-to-date materials on building LLM-powered products and learn in a hands-on environment.

Hope to see some of you there!

Comments

0:49 intro to fast.ai
1:53 intro to augmented ML vs AutoML
6:19 intro to Platform.ai
14:11 intro to fellowship.ai
15:17 Learning rate finder
17:52 fastai library's default hyperparameters work great in most cases
19:43 fast.ai students' work after the 1st week (using default hyperparameters, transfer learning, and the learning rate finder)
22:28 Data Augmentation in fastai library (Test Time Augmentation)
25:09 Progressive Resizing
27:40 Heatmaps to see what's going on
28:34 1Cycle is a big time-saver
30:16 on the importance of looking at experimental results to find the theory behind them
32:19 Faster learning with Gradual unfreezing and Discriminative learning rates
34:13 RNN super-convergence AdamW (Decoupled Weight Decay Regularization)
35:50 Clipping grads or Annealing Adam's epsilon
38:13 Q&A
38:23 Size-independent networks in fastai using an adaptive average pooling layer
40:40 TensorFlow vs PyTorch
41:55 fastai PyTorch code in production - fastEC2
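A rough sketch of the learning-rate-finder idea from 15:17: try exponentially spaced learning rates, watch the loss, and pick a rate from the region where the loss drops fastest. The toy quadratic loss, per-rate trial loop, and candidate grid below are illustrative assumptions, not fastai's actual implementation (fastai's `lr_find` makes a single pass over the data, raising the learning rate each mini-batch):

```python
def loss(w):
    # toy quadratic loss with its minimum at w = 3 (stand-in for a real model's loss)
    return (w - 3.0) ** 2

def grad(w):
    # analytic gradient of the toy loss
    return 2.0 * (w - 3.0)

def lr_finder(lrs, steps=5, w0=0.0):
    """For each candidate learning rate, run a few SGD steps from the
    same initialization and record the final loss."""
    results = []
    for lr in lrs:
        w = w0
        for _ in range(steps):
            w -= lr * grad(w)
        results.append((lr, loss(w)))
    return results

# exponentially spaced candidates, as in the real finder
candidates = [10 ** e for e in range(-4, 1)]  # 1e-4 .. 1
history = lr_finder(candidates)
best_lr, best_loss = min(history, key=lambda t: t[1])
```

Too-small rates barely move the loss, while a rate of 1 makes the toy loss oscillate, so the sweep lands on an intermediate value; in fastai itself the equivalent workflow is roughly `learn.lr_find()` followed by `learn.fit_one_cycle(...)`, which also covers the 1cycle schedule mentioned at 28:34.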

AliAPanahi

Ouch, he just shredded Google's AI effort. Having unlimited resources is always a curse.

minhongz

I wonder how fast.ai earns money. If they keep the tutorials free, how can they make it a lifelong work?

longliangqu