Stanford CS224N: NLP with Deep Learning | Winter 2019 | Lecture 6 – Language Models and RNNs


Professor Christopher Manning & PhD Candidate Abigail See, Stanford University

Professor Christopher Manning
Thomas M. Siebel Professor in Machine Learning, Professor of Linguistics and of Computer Science
Director, Stanford Artificial Intelligence Laboratory (SAIL)


0:00 Introduction
0:33 Overview
2:50 You use Language Models every day!
5:36 n-gram Language Models: Example
10:12 Sparsity Problems with n-gram Language Models
10:58 Storage Problems with n-gram Language Models
11:34 n-gram Language Models in practice
12:53 Generating text with an n-gram Language Model
15:08 How to build a neural Language Model?
16:03 A fixed-window neural Language Model
20:57 Recurrent Neural Networks (RNN)
22:39 A RNN Language Model
32:51 Training a RNN Language Model
36:35 Multivariable Chain Rule
37:10 Backpropagation for RNNs: Proof sketch
41:23 Generating text with a RNN Language Model
51:39 Evaluating Language Models
53:30 RNNs have greatly improved perplexity
54:09 Why should we care about Language Modeling?
58:30 Recap
59:21 RNNs can be used for tagging
Comments

Handled the questions so well. Crisp and clear answers

kiran

Her lectures are on point and very clear.

kartiksirwani

Very clear and to-the-point lecture. Better than Chris!

saeedvahidian

Why do RNNs share the same weights at each time step? I didn't understand the reasoning behind it.

mohammedbouri

What is the physical interpretation of the hidden state and the corresponding weight matrix?

unknownhero