Self-Damaging Contrastive Learning Explained!

What do compressed neural networks forget? This paper shows how to use those lessons to improve contrastive self-supervised learning and the learned representations of minority examples in unlabeled, long-tailed datasets!
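
A rough sketch of the core idea in PyTorch, assuming a SimCLR-style setup (the Encoder, magnitude_masks, and nt_xent below are illustrative stand-ins, not the paper's code; SDCLR itself prunes a ResNet backbone and refreshes the pruning masks every epoch): one augmented view goes through the dense model, the other through a "self-damaged" branch that shares the same weights with the smallest-magnitude ones masked out, and the contrastive loss between the two branches implicitly up-weights the rare examples that pruning tends to forget.

import torch
import torch.nn as nn
import torch.nn.functional as F

class Encoder(nn.Module):
    # Tiny MLP stand-in for the ResNet backbone used in the paper.
    def __init__(self, in_dim=784, hid=512, out_dim=128):
        super().__init__()
        self.fc1 = nn.Linear(in_dim, hid)
        self.fc2 = nn.Linear(hid, out_dim)

    def forward(self, x, masks=None):
        # If masks are given, zero out pruned weights while still sharing
        # the same underlying parameters with the dense branch.
        w1 = self.fc1.weight * masks[0] if masks else self.fc1.weight
        w2 = self.fc2.weight * masks[1] if masks else self.fc2.weight
        h = F.relu(F.linear(x, w1, self.fc1.bias))
        return F.linear(h, w2, self.fc2.bias)

def magnitude_masks(encoder, sparsity=0.3):
    # Binary masks dropping the smallest-magnitude weights per layer,
    # recomputed from the current dense weights (no second model stored).
    masks = []
    for w in (encoder.fc1.weight, encoder.fc2.weight):
        k = max(1, int(w.numel() * sparsity))
        thresh = w.abs().flatten().kthvalue(k).values
        masks.append((w.abs() > thresh).float())
    return masks

def nt_xent(z1, z2, temperature=0.2):
    # Standard SimCLR NT-Xent loss between the two branches' embeddings.
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    n = z1.size(0)
    z = torch.cat([z1, z2])
    sim = z @ z.t() / temperature
    eye = torch.eye(2 * n, dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(eye, -9e15)  # exclude self-similarity pairs
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(n)]).to(z.device)
    return F.cross_entropy(sim, targets)

# One training step: view 1 goes through the dense branch, view 2 through
# the self-damaged (pruned) branch, and both branches update the same weights.
encoder = Encoder()
opt = torch.optim.SGD(encoder.parameters(), lr=0.1)
view1 = torch.randn(32, 784)  # two augmented views of the same batch
view2 = torch.randn(32, 784)
with torch.no_grad():
    masks = magnitude_masks(encoder, sparsity=0.3)
opt.zero_grad()
loss = nt_xent(encoder(view1), encoder(view2, masks))
loss.backward()
opt.step()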

Paper Links:

Chapters
0:00 Paper Title
0:03 What Do Compressed Networks Forget?
2:04 Long-Tail of Unlabeled Data
2:43 SDCLR Algorithm Overview
4:40 Experiments
9:00 Interesting Improvement
9:25 Forgetting through Contrastive Learning
11:07 Improved Saliency Maps
11:34 The Simplicity Bias

Thanks for watching! Please Subscribe!
Comments

Love your videos. So minimal, yet they give you a full walkthrough of all the important pieces. It's much easier to go through a paper after watching your videos, and your presentation is always superb.

Wish you all the best. Keep up the good work.

SaquibAhmad

I enjoyed your previous videos on contrastive learning, but didn't like this one. The explanation is too fast and hard to follow. It seems like you are reading a script without expression instead of explaining the ideas and results.

ezhickezhovsky