Lesson 4: Deep Learning 2019 - NLP; Tabular data; Collaborative filtering; Embeddings
The basic steps are:
1. Create (or, preferably, download a pre-trained) *language model* trained on a large corpus such as Wikipedia (a "language model" is any model that learns to predict the next word of a sentence)
2. Fine-tune this language model using your *target corpus* (in this case, IMDb movie reviews)
3. Keep the *encoder* from this fine-tuned language model, replace its final layer (the part that predicts the next word) with a *classifier* head, and fine-tune this model for the final classification task (in this case, sentiment analysis); see the code sketch after this list.
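These three steps map directly onto the fastai v1 API used in this course. Here is a minimal sketch, assuming fastai v1 and the small IMDb sample that ships with it; the file names and hyperparameters follow the course notebooks but are illustrative, not prescriptive:

```python
from fastai.text import *

path = untar_data(URLs.IMDB_SAMPLE)

# Steps 1-2: start from the pre-trained AWD-LSTM language model
# (trained on Wikipedia) and fine-tune it on the IMDb reviews
data_lm = TextLMDataBunch.from_csv(path, 'texts.csv')
learn = language_model_learner(data_lm, AWD_LSTM, drop_mult=0.3)
learn.fit_one_cycle(1, 1e-2)
learn.save_encoder('ft_enc')  # keep the fine-tuned encoder

# Step 3: reuse that encoder under a classifier head and
# fine-tune for sentiment classification
data_clas = TextClasDataBunch.from_csv(path, 'texts.csv',
                                       vocab=data_lm.train_ds.vocab)
learn = text_classifier_learner(data_clas, AWD_LSTM, drop_mult=0.5)
learn.load_encoder('ft_enc')
learn.fit_one_cycle(1, 1e-2)
```

Note that the classifier must reuse the language model's vocabulary (`vocab=data_lm.train_ds.vocab`), so the loaded encoder's learned representations line up with the tokens the classifier sees.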
After our journey into NLP, we'll complete our practical applications for Practical Deep Learning for Coders by covering tabular data (such as spreadsheets and database tables), and collaborative filtering (recommendation systems).
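As a taste of what that looks like for tabular data, here is a minimal sketch assuming fastai v1 and its Adult Census sample (the dataset used in the course's tabular notebook); the column names and validation split are illustrative choices taken from that example:

```python
from fastai.tabular import *

path = untar_data(URLs.ADULT_SAMPLE)
df = pd.read_csv(path/'adult.csv')

dep_var = 'salary'
cat_names = ['workclass', 'education', 'marital-status', 'occupation']
cont_names = ['age', 'fnlwgt', 'education-num']
procs = [FillMissing, Categorify, Normalize]  # per-column preprocessing

data = (TabularList.from_df(df, path=path, cat_names=cat_names,
                            cont_names=cont_names, procs=procs)
        .split_by_idx(list(range(800, 1000)))   # hold out rows for validation
        .label_from_df(cols=dep_var)
        .databunch())

learn = tabular_learner(data, layers=[200, 100], metrics=accuracy)
learn.fit_one_cycle(1, 1e-2)
```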
Then we'll see how collaborative filtering models can be built using similar ideas to those for tabular data, but with some special tricks to get both higher accuracy and more informative model interpretation.
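A minimal collaborative-filtering sketch in the same spirit, assuming fastai v1 and its MovieLens sample; `y_range` is one such trick, squashing predictions into the valid rating range with a sigmoid:

```python
from fastai.collab import *

path = untar_data(URLs.ML_SAMPLE)
ratings = pd.read_csv(path/'ratings.csv')

data = CollabDataBunch.from_df(ratings, seed=42)
# n_factors sets the size of the user and movie embeddings;
# y_range constrains predicted ratings to a sensible interval
learn = collab_learner(data, n_factors=50, y_range=(0., 5.5))
learn.fit_one_cycle(3, 5e-3)
```

After training, the learned user and movie embeddings are what make the model interpretable: similar movies end up with similar embedding vectors.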
This brings us to the half-way point of the course, where we have looked at how to build and interpret models in each of these key application areas:
- Computer vision
- NLP
- Tabular
- Collaborative filtering
For the second half of the course, we'll learn about *how* these models really work, and how to create them ourselves from scratch. For this lesson, we'll put together some of the key pieces we've touched on so far:
- Activations
- Parameters
- Layers (affine and non-linear)
- Loss function (all four are combined in the short sketch after this list)
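To make the list concrete, here is a minimal sketch in plain PyTorch (illustrative shapes and numbers, not course code) wiring those pieces together:

```python
import torch

x = torch.randn(64, 10)                     # a batch of inputs
w = torch.randn(10, 1, requires_grad=True)  # parameters (weights)
b = torch.zeros(1, requires_grad=True)      # parameters (bias)

a = x @ w + b                               # affine layer -> activations
y_hat = torch.relu(a)                       # non-linearity -> activations

y = torch.rand(64, 1)                       # targets
loss = ((y_hat - y) ** 2).mean()            # loss function (mean squared error)
loss.backward()                             # gradients flow back to the parameters
```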
We'll be coming back to each of these in lots more detail during the remaining lessons. We'll also learn about a type of layer that is important for NLP, collaborative filtering, and tabular models: the *embedding layer*. As we'll discover, an "embedding" is simply a computational shortcut for a particular type of matrix multiplication (a multiplication by a *one-hot encoded* matrix).
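That claim is easy to verify. The sketch below (plain PyTorch, illustrative sizes) computes the same result both ways: once as an embedding lookup, and once as a multiplication by a one-hot encoded matrix:

```python
import torch
import torch.nn.functional as F

vocab_size, emb_dim = 10, 4
emb = torch.nn.Embedding(vocab_size, emb_dim)
idx = torch.tensor([3, 7])

lookup = emb(idx)  # the shortcut: index straight into the weight matrix

# the operation it stands in for: one-hot rows times the weight matrix
one_hot = F.one_hot(idx, num_classes=vocab_size).float()
matmul = one_hot @ emb.weight

assert torch.allclose(lookup, matmul)
```

The lookup simply avoids materializing the (mostly zero) one-hot matrix, which is all the "computational shortcut" amounts to.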