Stanford CS224N - NLP w/ DL | Winter 2021 | Lecture 5 - Recurrent Neural Networks (RNNs)

This lecture will cover:
1. Neural dependency parsing (20 mins)
2. A bit more about neural networks (15 mins)
3. Language modeling + RNNs (45 mins)
A new NLP task: Language Modeling
A new family of neural networks: Recurrent Neural Networks (RNNs)

Professor Christopher Manning
Thomas M. Siebel Professor in Machine Learning, Professor of Linguistics and of Computer Science
Director, Stanford Artificial Intelligence Laboratory (SAIL)
Comments

I'm so surprised there aren't more likes or even subscribers (given how famous Stanford is). Maybe it's because the general public is turned off by the necessary math, so they tune into more "generic" and shorter one-off videos. This is a true hidden gem.

edyu

1:01:10 Storage Problem
1:02:34 n-gram model in practice
1:06:03 neural language model
1:12:00 recurrent neural networks

nanunsaram

Is there any link to the PyTorch video? :)

olicairns

12:41 Here he mentions that traditional ML classifiers have the disadvantage of only being able to draw linear decision boundaries. But an SVM, for example, can draw non-linear decision boundaries when combined with the kernel trick, right?

mohakkhetan
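
The commenter's point is correct: the kernel trick lets an SVM learn non-linear decision boundaries. A minimal sketch illustrating this, assuming scikit-learn is available (the dataset and parameter choices here are illustrative, not from the lecture): a linear-kernel SVM fails on concentric circles, while an RBF-kernel SVM separates them.

# Minimal sketch (assumes scikit-learn): compare a linear and an RBF kernel
# on data that is not linearly separable.
from sklearn.datasets import make_circles
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Two concentric circles: no straight line separates the two classes.
X, y = make_circles(n_samples=500, factor=0.3, noise=0.05, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for kernel in ("linear", "rbf"):
    clf = SVC(kernel=kernel).fit(X_train, y_train)
    print(f"{kernel} kernel accuracy: {clf.score(X_test, y_test):.2f}")

# Expected outcome: the linear kernel scores near chance (~0.5), while the
# RBF kernel scores near 1.0, since the kernel implicitly maps the data to a
# space where the classes become linearly separable.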

1:05:55 How to build a neural Language Model
1:11:58 Recurrent Neural Network (RNN)

jens