Machine Learning 1: Lesson 10

In today's lesson we'll further develop our NLP model by combining the strengths of Naive Bayes and logistic regression, creating the hybrid "NB-SVM" model, which is a very strong baseline for text classification.
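To sketch the trick in code, here is a minimal, hypothetical version, with `trn_term_doc` standing in for the sparse term-document matrix and `y` for the binary labels (the notebook's actual variable names and smoothing may differ):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Assumed inputs (not the notebook's exact names):
#   trn_term_doc -- sparse term-document matrix, shape (n_docs, n_terms)
#   y            -- binary labels, shape (n_docs,)

# Smoothed term counts for each class.
p = trn_term_doc[y == 1].sum(0) + 1
q = trn_term_doc[y == 0].sum(0) + 1

# Naive Bayes log-count ratio: how much more likely each term is
# in the positive class than in the negative class.
r = np.log((p / p.sum()) / (q / q.sum()))

# Scale the counts by r, then fit a plain logistic regression on top.
x_nb = trn_term_doc.multiply(r)
clf = LogisticRegression(max_iter=1000).fit(x_nb, y)
```

Scaling each term's count by its Naive Bayes log-count ratio lets the logistic regression start from NB-informed features rather than raw counts.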

To do this, we'll create a new `nn.Module` class in PyTorch, and look at what it's doing behind the scenes.
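For reference, here is a minimal, hypothetical `nn.Module` subclass (the class name and sizes are made up, not the lesson's exact code). The key convention: define the layers in `__init__` and the computation in `forward`; PyTorch calls `forward` for you when you call the model instance.

```python
import torch
import torch.nn as nn

class SimpleLogReg(nn.Module):
    """A one-layer, logistic-regression-style model."""
    def __init__(self, n_features, n_classes):
        super().__init__()
        self.lin = nn.Linear(n_features, n_classes)

    def forward(self, x):
        # Called automatically when you do model(x).
        return torch.log_softmax(self.lin(x), dim=1)

model = SimpleLogReg(n_features=1000, n_classes=2)
out = model(torch.randn(4, 1000))  # log-probabilities, shape (4, 2)
```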

In the second half of the lesson we'll begin our study of tabular and relational data using deep learning, looking at the "Rossmann" Kaggle competition dataset. Today we'll start down the feature engineering path on this interesting dataset.

We'll look at continuous vs. categorical variables and the kinds of feature engineering that can be done for each, with a particular focus on using embedding matrices for categorical variables.
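As a rough illustration of the embedding idea (the sizes and variable names here are hypothetical, not from the Rossmann notebook): each level of a categorical variable indexes a row of a learned lookup table, so the model trains a dense vector per category instead of a one-hot column.

```python
import torch
import torch.nn as nn

# Hypothetical categorical variable: day of week, 7 levels,
# each embedded as a learned 4-dimensional vector.
emb = nn.Embedding(num_embeddings=7, embedding_dim=4)

# A batch of category indices, one per row of tabular data.
day_of_week = torch.tensor([0, 3, 6])
vectors = emb(day_of_week)  # shape (3, 4); updated by backprop like any weight
```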
Comments

You know what? I think these people who share their knowledge are very lovely. Thank you!

huli

This explanation is something everyone should know! This is like a really awesome hello-world example of NLP + ML. Great video! Thanks

__snehal__

Thanks for the clear explanations! I love your videos and notebooks!
I believe that in cell 102, in the calculation of r, it should be:
np.log((p / y.sum()) / (q / (1 - y).sum()))

shlomed

You're a good instructor. Thanks.

lizravenwood

This is a document-term matrix, as terms are represented as columns.

mrinalsbc

Can someone explain the magic of Naive Bayes combined with logistic regression? Why are the results better when we multiply the term-document matrix by r before fitting the model?

egorepishin

Awesome video! I'm looking for the next one. Is it out yet?

tylerlanigan

Hello, thanks a lot for the video. How can I contact you?

mr.roboter