Rethinking Pre-training and Self-Training

**ERRATA**: at 9:31 I called the large-scale jittering "color jittering"; it isn't an operation specifically on colors.

This video explores an interesting paper from researchers at Google AI. They show that self-training outperforms supervised and self-supervised (SimCLR) pre-training. The video explains what self-training is and how all of these methods use extra data (labeled or not) to improve performance on downstream tasks.
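
For anyone new to the idea, below is a minimal sketch of the pseudo-labeling loop behind self-training. It is only an illustration: it uses scikit-learn on toy classification data rather than the paper's detection/segmentation setup, and the 0.9 confidence threshold is an arbitrary value chosen for the example.

```python
# Minimal self-training sketch (illustrative, not the paper's exact recipe):
# a teacher trained on labeled data pseudo-labels extra unlabeled data,
# and a student is trained on the union of real and confident pseudo-labels.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Toy data standing in for a small labeled set plus extra unlabeled data.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_lab, y_lab = X[:200], y[:200]   # small labeled set
X_unlab = X[200:]                 # "extra data" without labels

# 1. Train the teacher on labeled data only.
teacher = LogisticRegression(max_iter=1000).fit(X_lab, y_lab)

# 2. Pseudo-label the unlabeled data; keep only confident predictions.
probs = teacher.predict_proba(X_unlab)
confident = probs.max(axis=1) > 0.9                   # hypothetical threshold
pseudo_y = teacher.classes_[probs.argmax(axis=1)][confident]

# 3. Train the student on labeled + pseudo-labeled data.
X_comb = np.vstack([X_lab, X_unlab[confident]])
y_comb = np.concatenate([y_lab, pseudo_y])
student = LogisticRegression(max_iter=1000).fit(X_comb, y_comb)
```

In the paper the teacher and student are detection/segmentation models and the pseudo-labeled data is combined with strong data augmentation (e.g., the large-scale jittering mentioned in the errata above), but the loop above is the basic recipe.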

Thanks for watching! Please Subscribe!

Paper Links:

Comments

Nice! Thank you for covering this, Connor!

DistortedV

1:19 How to use Extra Data?
3:08 Self-Training Algorithm
5:19 Examples of Pseudo-Labels (Semantic Segmentation)
5:48 Comparison with Supervised and Self-Supervised Pre-training
7:30 Feature Backbones for Object Detection
8:40 Experiments
(To be continued)

connor-shorten

Note:
1. Self-training is more robust and performs better on several downstream tasks.
2. Pre-training still yields acceptable performance and is 1.3x-8x faster.
3. Random initialization has the best performance.

Jacky

Great video! Congrats! Can you make more videos on semantic segmentation? Thanks for all your videos, they are awesome!

philborba

Could you elaborate on the part where you discussed self-training and meta pseudo-labels?

sayakpaul

Dude, talk a little slower. Had to slow down the vid. Great work but go slow. Long videos are ok.

faizanahemad