Stanford CS224N NLP with Deep Learning | Winter 2021 | Lecture 6 - Simple and LSTM RNNs

This lecture covers:
1. RNN Language Models (25 min)
2. Other uses of RNNs (8 min)
3. Exploding and vanishing gradients (15 min)
4. LSTMs (20 min)
5. Bidirectional and multi-layer RNNs (12 min)
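
As a rough companion to these topics, here is a minimal sketch of an LSTM language model in PyTorch; the class name, layer sizes, and vocabulary size are illustrative assumptions, not the lecture's exact model:

import torch
import torch.nn as nn

class LSTMLanguageModel(nn.Module):
    # Illustrative hyperparameters; the lecture does not prescribe these values.
    def __init__(self, vocab_size=10000, embed_dim=128, hidden_dim=256, num_layers=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        # num_layers=2 gives a multi-layer (stacked) LSTM; no bidirectionality because
        # a language model may only condition on the left context.
        self.lstm = nn.LSTM(embed_dim, hidden_dim, num_layers=num_layers, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, tokens):
        # tokens: (batch, seq_len) integer word ids
        h, _ = self.lstm(self.embed(tokens))   # (batch, seq_len, hidden_dim)
        return self.out(h)                     # logits over the next word at each position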

Professor Christopher Manning
Thomas M. Siebel Professor in Machine Learning, Professor of Linguistics and of Computer Science
Director, Stanford Artificial Intelligence Laboratory (SAIL)

#naturallanguageprocessing #deeplearning

Comments

I can't thank Stanford enough for making the videos public. The quality of these lectures is just out of this world!

sheikhshafayat

This is the greatest illustration of LSTMs I have ever watched.

ziangxu

I have returned to this lecture after studying a little bit more. Now I get how well the professor covers all the material.

nanunsaram

I don’t have enough words to thank you for this course.

avivjan

This is really amazing! Thank you very much for so generously sharing your skills.

mikegher

Regarding vanishing gradients: the h's are not parameters; the W's are. Right? And for W, those gradients sum up, i.e., no vanishing. It seems unclear why dJ/dh should be important. Aren't we only interested in updating W?

Epistemophilos
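
One way to see why dJ/dh matters even though only W is updated, sketched with the chain rule (assuming the simple-RNN form h^{(t)} = σ(W_h h^{(t-1)} + W_e e^{(t)} + b), which matches the lecture's setup up to notation): W_h is shared across timesteps, so its gradient is a sum of per-timestep contributions, each of which passes through the hidden-state gradients:

\frac{\partial J^{(t)}}{\partial W_h}
  = \sum_{k=1}^{t}
    \frac{\partial J^{(t)}}{\partial h^{(t)}}
    \left( \prod_{j=k+1}^{t} \frac{\partial h^{(j)}}{\partial h^{(j-1)}} \right)
    \frac{\partial h^{(k)}}{\partial W_h}

If the Jacobians \partial h^{(j)} / \partial h^{(j-1)} have norm below 1, the product shrinks geometrically, so the terms from distant timesteps k contribute almost nothing. The summed gradient for W_h does not literally vanish, but its long-range part does, so W_h is updated as if faraway context carried no signal.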

46:03 Exploding Gradient
1:16:34 Bidirectional and Multi-layer RNNs

jens

1:16:40 Bidirectional and multi-layer RNNs

nanunsaram

This is great; the course is good for understanding the basics of NLP and the best methods. But why have I never heard of comparing the actual output with the predicted output when talking about reducing the error? Did I miss something?

meherprudhvi
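
For what it's worth, the comparison asked about here is what the cross-entropy loss in the RNN language model does at every position: the predicted distribution over the next word is scored against the word that actually occurs. A minimal PyTorch sketch, with the batch size, sequence length, and vocabulary size of 100 chosen purely for illustration:

import torch
import torch.nn.functional as F

logits = torch.randn(2, 5, 100)           # predicted scores: (batch, seq_len, vocab_size)
targets = torch.randint(0, 100, (2, 5))   # the words that actually came next: (batch, seq_len)

# Cross-entropy compares each predicted distribution with the observed next word;
# minimizing this value over the training corpus is how "the error" is reduced.
loss = F.cross_entropy(logits.reshape(-1, 100), targets.reshape(-1))
print(loss.item())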

At 41:00, why is dJ/dh even needed?

xyzabs
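
A tiny NumPy sketch of what lies behind both dJ/dh questions above; all of the numbers are illustrative, not taken from the lecture. Backpropagating through time multiplies the upstream gradient by the recurrent Jacobian once per step, so when its largest singular value is below 1 the signal from distant timesteps decays geometrically:

import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((10, 10))
W *= 0.8 / np.linalg.norm(W, 2)    # rescale so the largest singular value is 0.8 (< 1, illustrative)
grad = rng.standard_normal(10)     # upstream gradient dJ/dh at the final timestep

# Ignoring the nonlinearity's derivative for simplicity, each step back in time
# multiplies the gradient by W^T, so it shrinks by roughly a factor of 0.8 per step.
for step in range(1, 31):
    grad = W.T @ grad
    if step % 10 == 0:
        print(f"{step} steps back: ||dJ/dh|| = {np.linalg.norm(grad):.2e}")

Because the gradient for W at timestep k is this shrunken hidden-state gradient times a local term, the summed update for W ends up dominated by recent timesteps, which is why the hidden-state gradients matter even though only W is updated.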