Regularization in Machine Learning | L1 & L2 Regularization

Regularization is a popular technique for avoiding overfitting. The idea is to add a penalty based on the magnitudes of the model's weights to the cost function, so that when we minimize the cost we also keep the weights small and no parameter takes on an extreme value. With L1 regularization in particular, the weights of less important variables can be driven exactly to zero, which is to say those variables play no role in the final model. In that sense regularization can also be considered an automated feature selection technique, since unimportant variables are effectively removed from the model.

There are two types of such techniques, L1 and L2, based on the way we add the weights to the cost function: L1 adds the sum of their absolute values, while L2 adds the sum of their squares.

We also call these Lasso (L1) and Ridge (L2) regression, depending on which regularization is used.
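As a toy illustration of the idea (not from the video): fit a one-feature linear model by gradient descent, once without a penalty and once with an L2 (ridge) penalty. The penalty term lambda * w**2 is added to the squared-error cost, so its gradient contributes 2 * lambda * w at every step and pulls the weight toward zero.

```python
def fit(xs, ys, lam, steps=2000, lr=0.01):
    """Fit y ~ w*x by gradient descent on MSE + lam * w**2."""
    w = 0.0
    n = len(xs)
    for _ in range(steps):
        # gradient of the mean squared error
        grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / n
        # gradient of the L2 penalty lam * w**2
        grad += 2 * lam * w
        w -= lr * grad
    return w

xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.0, 4.0, 6.0, 8.0]        # true relationship: y = 2x

w_plain = fit(xs, ys, lam=0.0)   # converges close to 2.0
w_ridge = fit(xs, ys, lam=5.0)   # shrunk toward 0 by the penalty
print(w_plain, w_ridge)
```

The larger lambda is, the more the fitted weight is shrunk away from the unregularized solution; with an L1 penalty the same pressure can push a weight all the way to exactly zero.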

Coursera :

Recommended Data Science Books on Amazon :

20% discount on the live courses below: use coupon YOUTUBE20
Data Science Live Training :
Comments

at the beginning I thought the speed was set at 1.5x

shubhamkapoor

You have presented the topic well. Just to summarize:
1. Regularization adds the term lambda * |B|^p to the cost function to PENALIZE the model for choosing large values for the weights.
2. L1 (also called Lasso regression, p=1) can set the coefficients of all the insignificant features to 0. This clearly helps avoid overfitting.
3. L2 (also called Ridge regression, p=2) adds the SQUARED magnitude, so larger weights are penalized more heavily and the model is discouraged from overfitting.

I still haven't found any conclusive answer on how to choose values for lambda. If anyone has anything on that, please let me know.

pgdify