GPT Explained!

This video explains the original GPT model, "Improving Language Understanding by Generative Pre-Training." I think the key takeaways are: they use a new unlabeled text dataset (BooksCorpus) that requires the pre-training language model to incorporate longer-range context, they format input representations in a specific way for supervised fine-tuning, and they evaluate on a broad set of NLP tasks!
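The fine-tuning objective from the paper combines the supervised task loss with the auxiliary language-modeling loss, L3(C) = L2(C) + λ·L1(C), with λ = 0.5 in their experiments. A minimal PyTorch sketch of that combination (the function and tensor names here are illustrative, not from the paper):

```python
import torch.nn.functional as F

def finetune_loss(task_logits, labels, lm_logits, tokens, lam=0.5):
    """L3 = L2 + lam * L1, as described in the GPT paper (lam = 0.5 there)."""
    # L2: supervised objective on the downstream task labels
    l2 = F.cross_entropy(task_logits, labels)
    # L1: auxiliary language-modeling objective on the same input tokens,
    # predicting each next token from the tokens before it
    l1 = F.cross_entropy(
        lm_logits[:, :-1].reshape(-1, lm_logits.size(-1)),
        tokens[:, 1:].reshape(-1),
    )
    return l2 + lam * l1
```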

Paper Links:

Thanks for watching! Please Subscribe!
Comments

0:55 Semi-Supervised Learning in NLP
1:25 BooksCorpus
2:30 Fine-Tuning Loss Function
3:30 Task-Specific Input Transformations
4:25 Transformer Decoder
4:45 Natural Language Inference
5:30 Question Answering
6:10 Semantic Similarity
6:40 CoLA Text Classification
6:48 All Tasks Tested
7:38 Ablations

connor-shorten

Congratulations on 10k subscribers.
*I'd be happy if you'd do more videos on NLP.* A small wish from an early subscriber.

vinayreddy

Hey, thanks for your work. Can I ask what your tools and workflow look like for making such videos?

hocusbogus

2:54 It's fine to know there's a distribution, but how do I actually get it? The paper doesn't explain it, and neither do you.
Is there a neural network over the last hidden state, or over the whole sequence (concatenated, summed up, or something else)? What happens in the last layer? I need answers!
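For what it's worth, the paper does answer this: the final transformer block's activation at the last token (the extract position) is fed into a single added linear output layer, and a softmax over its logits gives P(y | x^1 ... x^m). A minimal PyTorch sketch, with all shapes as illustrative assumptions:

```python
import torch
import torch.nn.functional as F

batch, seq_len, d_model, n_classes = 8, 64, 768, 2
hidden = torch.randn(batch, seq_len, d_model)  # final transformer block activations
W_y = torch.nn.Linear(d_model, n_classes)      # the added linear output layer

h_last = hidden[:, -1, :]                      # activation at the last (extract) token
probs = F.softmax(W_y(h_last), dim=-1)         # P(y | x^1 ... x^m)
```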

kyrilcouda

Nice and quick overview of the whole paper. Thanks! :)

RedwanKarimSony_napstar_

Didn't like the video at all. Super-fast talking without explaining the important parts. Why go over datasets in a 10-minute video?!

farnooshjavadi

Your videos are great, but you really need to slow down. You talk very quickly; it's difficult to understand what you're saying in one go.

urjadamodhar

You speak quickly; too much information in one minute.

Data_scientist_trmi

Well done on not explaining this to anyone in the human race. Go tech! Fail.
