Magical Way of Self-Training and Task Augmentation for NLP Models

A super cool method that drastically improves model accuracy without requiring additional task-specific annotated data

0:00 - Intro
3:07 - Task augmentation + self-training
5:13 - Intermediate fine-tuning
6:09 - Task augmentation setup
10:49 - Overgeneration & filtering
12:17 - Self-training algorithm
16:15 - Results
20:23 - My thoughts

STraTA: Self-Training with Task Augmentation for Better Few-shot Learning

Abstract
Despite their recent successes in tackling many NLP tasks, large-scale pre-trained language models do not perform as well in few-shot settings where only a handful of training examples are available. To address this shortcoming, we propose STraTA, which stands for Self-Training with Task Augmentation, an approach that builds on two key ideas for effective leverage of unlabeled data. First, STraTA uses task augmentation, a novel technique that synthesizes a large amount of data for auxiliary-task fine-tuning from target-task unlabeled texts. Second, STraTA performs self-training by further fine-tuning the strong base model created by task augmentation on a broad distribution of pseudo-labeled data. Our experiments demonstrate that STraTA can substantially improve sample efficiency across 12 few-shot benchmarks. Remarkably, on the SST-2 sentiment dataset, STraTA, with only 8 training examples per class, achieves comparable results to standard fine-tuning with 67K training examples. Our analyses reveal that task augmentation and self-training are both complementary and independently effective.
Comments

Great video! Looking forward to the next one

valthorhalldorsson