Negative Data Augmentation

This video explains Negative Data Augmentation, a strategy for using label-corrupting, rather than label-preserving, transformations in Deep Learning. The authors test this framework for training GANs and for Contrastive Learning methods such as CPC and MoCo. I think this is a really exciting direction for Data Augmentation and for overcoming the challenge of learning from limited labeled data. I hope you find this video useful!
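To make the idea concrete, here is a minimal sketch of one commonly discussed negative augmentation, the jigsaw transform: an image is cut into a grid of patches that are reassembled in random order, destroying the global structure so the result can serve as an out-of-distribution negative. This is my own illustrative NumPy sketch, not the paper authors' reference implementation; the function name and `grid` parameter are assumptions.

```python
import numpy as np

def jigsaw_negative(image, grid=2, rng=None):
    """Sketch of a jigsaw negative augmentation: shuffle grid x grid patches.

    The shuffled image keeps local texture but loses global structure,
    so the original label no longer applies (a "negative" sample).
    Hypothetical helper for illustration, not the authors' code.
    """
    rng = np.random.default_rng() if rng is None else rng
    h, w = image.shape[:2]
    ph, pw = h // grid, w // grid
    # Cut the image into grid*grid equal patches (remainder pixels are dropped).
    patches = [image[r * ph:(r + 1) * ph, c * pw:(c + 1) * pw]
               for r in range(grid) for c in range(grid)]
    # Reassemble the patches in a random order.
    order = rng.permutation(len(patches))
    rows = [np.concatenate([patches[order[r * grid + c]] for c in range(grid)], axis=1)
            for r in range(grid)]
    return np.concatenate(rows, axis=0)
```

In a GAN, such samples would be fed to the discriminator as additional fakes; in contrastive learning, they would be treated as extra negatives for the anchor image.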

Content Links:

Chapters
0:00 Beginning
0:55 Semantically-Preserving Transformations
1:44 OOD Augmentations
3:25 NDA Strategy
6:00 Over-Generalization
7:18 Integration in GANs
8:00 Integration in Contrastive Learning
8:40 GAN Results
11:00 Contrastive Learning Results
12:18 Dark Matter - Energy-Based Learning
13:30 The Diff that makes a Diff
Comments

I can see this having a benefit in music generation - shuffling parts of the song around or splicing to other songs...

terryr

Good to hear you're back on regular uploads :)

dawwdd

Great idea... it resembles the gradient reversal idea of DANN.

nikre

Interesting, but I'm not sure it's what we need. I mean, is it wrong to classify an image as "dog" when only half of the dog's face is in the image? What I see in the dog example is 4 sub-images, and each *should* be labelled "dog" if tested independently.
My point is actually related to the concept of equivariance (as opposed to invariance).

Maybe the jigsaw transformation should have more than 4 parts to really destroy the high-level structure?

Ceelvain

I'm interested if someone knows where I could find some code for those augmentations :)

yoannfleytoux

I am curious if anyone has any ideas how to add these negative examples to a VAE (something like Jukebox)

terryr