GPT-2: Language Models are Unsupervised Multitask Learners

A look at OpenAI's new GPT-2 model and the surrounding controversy.

Abstract:
Natural language processing tasks, such as question answering, machine translation, reading comprehension, and summarization, are typically approached with supervised learning on task-specific datasets. We demonstrate that language models begin to learn these tasks without any explicit supervision when trained on a new dataset of millions of webpages called WebText. When conditioned on a document plus questions, the answers generated by the language model reach 55 F1 on the CoQA dataset - matching or exceeding the performance of 3 out of 4 baseline systems without using the 127,000+ training examples. The capacity of the language model is essential to the success of zero-shot task transfer and increasing it improves performance in a log-linear fashion across tasks. Our largest model, GPT-2, is a 1.5B parameter Transformer that achieves state of the art results on 7 out of 8 tested language modeling datasets in a zero-shot setting but still underfits WebText. Samples from the model reflect these improvements and contain coherent paragraphs of text. These findings suggest a promising path towards building language processing systems which learn to perform tasks from their naturally occurring demonstrations.
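
As a concrete illustration of the zero-shot setup described in the abstract, the sketch below conditions a language model on a document plus a question and reads the greedy continuation as the answer. It uses the Hugging Face `transformers` library and the small public `gpt2` checkpoint, neither of which is part of the paper itself, and the prompt format is a simplification of the paper's CoQA conditioning.

```python
# Illustrative zero-shot reading comprehension: condition the LM on
# "document + question" and take the generated continuation as the answer.
# Assumes the Hugging Face `transformers` package and the public "gpt2"
# checkpoint (the small model, not the 1.5B-parameter one from the paper).
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

document = "The Amazon rainforest covers much of the Amazon basin in South America."
question = "What does the Amazon rainforest cover?"
prompt = f"{document}\nQ: {question}\nA:"   # simplified prompt format

inputs = tokenizer(prompt, return_tensors="pt")
output_ids = model.generate(
    **inputs,
    max_new_tokens=20,
    do_sample=False,                      # greedy decoding
    pad_token_id=tokenizer.eos_token_id,  # GPT-2 has no pad token by default
)
answer = tokenizer.decode(output_ids[0][inputs["input_ids"].shape[1]:])
print(answer.strip())
```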

Authors:
Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever
Comments

We need more paper talkers such as Yannic. Yes, Two Minute Papers is great, but there are many papers worthy of discussion, many opinions needed, and many worthwhile methods of analysis.

bobsalita

Did you ever end up making a video that discusses byte pair encoding?

ben

It's such a shame that the field stagnated after this. Nothing bigger or better than GPT2. Maybe someday.

jcorey

Would be awesome if Yannic made the video on Byte Pair Encoding mentioned at 18:30.

eab
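
For the two comments above asking about byte pair encoding: the core idea is to repeatedly merge the most frequent adjacent pair of symbols into a new symbol. The toy sketch below is a generic character-level version for illustration only; GPT-2 actually applies BPE at the byte level with extra handling, so this is not its implementation.

```python
# Toy byte pair encoding: repeatedly merge the most frequent adjacent
# symbol pair into a new symbol. Character-level, for illustration only.
from collections import Counter

def learn_bpe(words, num_merges):
    # Represent each word as a tuple of symbols (characters to start with).
    vocab = Counter(tuple(w) for w in words)
    merges = []
    for _ in range(num_merges):
        # Count how often each adjacent pair of symbols occurs.
        pairs = Counter()
        for word, freq in vocab.items():
            for a, b in zip(word, word[1:]):
                pairs[(a, b)] += freq
        if not pairs:
            break
        best = max(pairs, key=pairs.get)
        merges.append(best)
        # Replace every occurrence of the best pair with a merged symbol.
        new_vocab = Counter()
        for word, freq in vocab.items():
            merged, i = [], 0
            while i < len(word):
                if i + 1 < len(word) and (word[i], word[i + 1]) == best:
                    merged.append(word[i] + word[i + 1])
                    i += 2
                else:
                    merged.append(word[i])
                    i += 1
            new_vocab[tuple(merged)] += freq
        vocab = new_vocab
    return merges

print(learn_bpe(["low", "lower", "lowest", "newest", "widest"], 5))
```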

Great video!
Btw, is the model released now, and are the weights available?

kumarsubham

I think a neural network is essentially a function that we can't express explicitly. The function is fitted to the training data, and then we pass it an input whose output we want to know; since the function was tuned to the dataset we gave it, we can expect a prediction similar to that dataset.
Essentially, a NN can be used to roughly map huge pieces of data to each other, and then that mapping can be used to obtain outputs for inputs whose outputs are otherwise unknown to us.

Also, to check whether a given input is similar to the other inputs in our dataset, we can feed it to the trained neural network and look at the network's accuracy on it to gauge how similar this input is to the training inputs.
This could be used for a recommendation system like YouTube's.

harmitchhabra
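
A minimal sketch of the "neural network as a fitted function" view in the comment above: fit a small network to input-output pairs, then query it on unseen inputs. The toy dataset and the use of scikit-learn here are purely illustrative assumptions.

```python
# Fit a small network to (input, output) pairs, then query it on new inputs.
import numpy as np
from sklearn.neural_network import MLPRegressor

# Toy dataset: learn y = sin(x) from samples in [-3, 3].
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X).ravel()

net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
net.fit(X, y)  # "tune the function to the training data"

x_new = np.array([[1.0]])
print(net.predict(x_new), np.sin(1.0))  # prediction vs. true value

# Rough similarity check in the spirit of the comment: an input far from the
# training distribution tends to produce a larger prediction error.
x_far = np.array([[10.0]])
print(abs(net.predict(x_far)[0] - np.sin(10.0)))
```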

Thanks for sharing this video. I just found that GPT2 models will be available soon at Ainize Teachable NLP for free fine-tuning.

dongilseo

How can we say that GPT is not simply overfitting, given that it has literally seen so much data that any downstream task may already be covered by the training set?

ambujmittal

Is there a good video that explains how transformers work?

orjihvy
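
On the question above about how Transformers work: the central operation inside each block of a Transformer such as GPT-2 is scaled dot-product self-attention. The numpy sketch below shows just that one operation, leaving out multi-head splitting, the causal mask GPT-2 uses, layer norm, and the feed-forward layers.

```python
# Scaled dot-product self-attention, the core operation of a Transformer block.
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """X: (seq_len, d_model); Wq/Wk/Wv: (d_model, d_k) projection matrices."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])           # pairwise similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)    # softmax over positions
    return weights @ V                                 # weighted sum of values

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 8))                            # 5 tokens, d_model = 8
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
print(self_attention(X, Wq, Wk, Wv).shape)             # (5, 8)
```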

7:10 Got me rolling on the floor laughing

mannacharya

First ten minutes no substance - don’t have more time to waste here

Xnaarkhoo