Lesson 6: Deep Learning 2018


Today is a very busy lesson! We first learn how to interpret the collaborative filtering embeddings we created last week, and use that knowledge to answer the question: "what is the worst movie of all time?"

Then we cover what is perhaps the most practically important topic in the whole course: how to use deep learning to model "structured data" such as database tables and spreadsheets, as well as time series. It turns out that not only is deep learning often the most accurate modeling approach for these tasks, it can be the easiest approach to develop too.

We close out the lesson with an introduction to recurrent neural networks (RNNs), and use an RNN to write a new philosophical treatise...
Comments

Every video lecture of this course fascinates me. It is amazing how things can be explained so simply.

rainfeedermusic

I love how at 30:52 Jeremy, an Australian, points at the months Mar - May and says "Here is winter" and then points at Oct - Dec and says "and here is summer". Got me confused for a minute.

Cruzzzzz

Foremost, I send a great big thanks to Jeremy for putting such effort into his software and for sharing his great understanding of ML and DL in the videos he's produced. Please keep it up, dude! Secondly, though, Jeremy's not clear enough about the purpose of the network he diagrammed at 1h20m. He's mystifying the audience with the two inputs coming in at different levels in the network. Is an LSTM module -- not a full network -- being developed here, with the "history input" being "char 2 input" and the "main input" being "char 1 input"? He hasn't discussed the "history" of LSTMs or GRUs yet, but thanks to other courses I've already been trained on RNNs, and at any rate it seems like this might be where the lecture is heading. It's very rare for complete networks -- as opposed to partial modules like LSTM or resnet modules -- to receive inputs at two different layers, because normally there is just one input vector per training example. Can anyone elaborate on the "what" and "why" for the diagrammed network?

geoffreyanderson

What are the other ways to interpret these entity embeddings? Another way I have used is clustering them, so we get clusters of related features.
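The clustering idea mentioned above can be sketched as follows: treat the learned embedding matrix (one row per entity, one column per embedding dimension) as plain vectors and group similar entities with k-means. The random array below is stand-in data; in practice you would pull the weights out of a trained model (e.g. something like `model.embedding.weight.data.numpy()` in PyTorch — the attribute path depends on your model).

```python
# Sketch: interpret entity embeddings by clustering them with k-means.
# The embedding matrix here is random stand-in data; in a real model it
# would come from the trained embedding layer's weights.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
embeddings = rng.normal(size=(100, 50))  # 100 entities, 50-dim embeddings

kmeans = KMeans(n_clusters=5, n_init=10, random_state=0).fit(embeddings)
labels = kmeans.labels_  # one cluster assignment per entity

# Entities sharing a label sit close together in embedding space, so each
# cluster can be read as a group of related entities/features.
print(labels[:10])
```

Inspecting which entities land in the same cluster (e.g. which movies or store categories share a label) is what makes the embeddings interpretable.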

atouchoash

@Jeremy Howard -- What is the "Machine Learning course" that you keep referring to?

kevalan

I noticed that in every graph in the RNN section, the arrow color in the legend is much lighter and clearer than the corresponding arrow in the graph itself. It would be better if the arrow color were consistent between the legend and the graph.

Anyway, excellent job presenting the main idea of RNNs xD.

dogoaurora