FixMatch

PLEASE NOTE CORRECTIONS FOR THIS VIDEO:
The algorithm walkthrough at 5:02 describes a batch size for a given training step, not the overall size of available data!
The hyper-parameters tested in the ablations (optimizer choice and its parameters, learning-rate decay, weight decay, etc.) generalize very well across CIFAR-10, CIFAR-100, SVHN, STL-10, and ImageNet. If you are trying this out for yourself, the hyper-parameter recommendations from the paper should work well without much AutoML / tuning!

This video explains a new algorithm from Google AI for Semi-Supervised Learning! FixMatch combines Consistency Regularization and Pseudo-Labeling to reach about 95% and 89% accuracy on CIFAR-10 with 250 and 40 labeled examples, respectively. Thanks for watching! Please subscribe!
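The core unlabeled-loss step (pseudo-label from a weak augmentation, confidence threshold, cross-entropy against the prediction on a strong augmentation) can be sketched in a few lines of NumPy. This is an illustrative sketch only, not the paper's code: `logits_weak` / `logits_strong` stand in for model outputs on the two augmented views, and `tau` is the confidence threshold.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def fixmatch_unlabeled_loss(logits_weak, logits_strong, tau=0.95):
    """Cross-entropy on strongly-augmented views against hard pseudo-labels
    taken from weakly-augmented views, masked by prediction confidence."""
    q = softmax(logits_weak)                   # predictions on weak augmentation
    pseudo = q.argmax(axis=-1)                 # hard (one-hot) pseudo-labels
    mask = q.max(axis=-1) >= tau               # keep only confident samples
    p = softmax(logits_strong)                 # predictions on strong augmentation
    ce = -np.log(p[np.arange(len(pseudo)), pseudo] + 1e-12)
    return (ce * mask).mean()                  # averaged over the full batch
```

Note that the mean is taken over the whole batch, not just the confident samples, so the threshold acts as a per-example mask rather than a filter that changes the normalizer.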

Comments

1:00 Intro
1:30 Main Idea Bullet Points
2:30 FixMatch vs. Supervised Learning
3:00 Overview of FixMatch
5:00 Algorithm Walkthrough
6:50 Consistency Regularization
8:00 Pseudo-Labeling
9:00 Curriculum “for free”
10:00 Results
11:23 Consistency Across Runs
11:58 1 Sample Per Class
12:50 Ablations
16:38 Hyperparameters of FixMatch

connor-shorten

Awesome! Thanks a lot.
I've wanted to try pseudo-labeling for a long time; I didn't know that's what it was called. I'm very happy this kind of research is being done, because it seems super useful in practice.

CristianGarcia

5:04, line 7: if I understand correctly, the arguments of H should be e_{argmax(q_b)} and p(y|\alpha(\tilde{u}); \theta), where e_i is the one-hot vector for class i.

Batu
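For reference, the unlabeled loss that line 7 of Algorithm 1 computes is, sketched in the paper's notation (reproduced from memory, so double-check against the paper):

```latex
\ell_u = \frac{1}{\mu B} \sum_{b=1}^{\mu B}
  \mathbb{1}\!\left(\max(q_b) \ge \tau\right)
  \, \mathrm{H}\!\left(\hat{q}_b,\; p_m\!\left(y \mid \mathcal{A}(u_b)\right)\right),
\quad \text{where } q_b = p_m\!\left(y \mid \alpha(u_b)\right),\ \hat{q}_b = \arg\max(q_b),
```

with \(\alpha\) the weak augmentation, \(\mathcal{A}\) the strong augmentation, and \(\hat{q}_b\) used as a hard (one-hot) target in the cross-entropy \(\mathrm{H}\).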

Thanks, great overview!
Regarding Cutout: they seem to use constant filling with 0.5 values (I checked their CTAugment implementation).

janrocketman
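For anyone curious about the constant-fill Cutout mentioned above, a minimal sketch looks like the following. This is an illustration under the assumptions in the comment (a 0.5 constant fill on images scaled to [0, 1]), not the repository's actual CTAugment code; `size` and `fill` are hypothetical parameter names.

```python
import numpy as np

def cutout_constant(img, size, fill=0.5, rng=None):
    """Fill a random (size x size) square of an HxWxC float image
    with a constant value, returning a new array."""
    rng = np.random.default_rng() if rng is None else rng
    h, w = img.shape[:2]
    cy, cx = rng.integers(0, h), rng.integers(0, w)  # random square center
    y0, y1 = max(0, cy - size // 2), min(h, cy + size // 2)
    x0, x1 = max(0, cx - size // 2), min(w, cx + size // 2)
    out = img.copy()
    out[y0:y1, x0:x1, :] = fill                      # constant fill, not zeroing
    return out
```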