Regularization in Deep Learning | How It Solves Overfitting

Regularization in Deep Learning is very important for overcoming overfitting. When your training accuracy is very high but your test accuracy is very low, the model has overfit the training dataset and struggles to make good predictions on the test dataset.

Overfitting in Deep Learning can result from having a very deep neural network or a large number of neurons. Regularization is the family of techniques that reduces the model's effective capacity by nullifying or shrinking the effect of certain neurons.

With Regularization in Deep Learning, we nullify the effect of certain neurons, creating a simpler network whose decision boundary fits both the training and the test dataset well.
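One common way to "nullify the effect of certain neurons," as described above, is dropout. The video does not show code, so here is a minimal sketch of inverted dropout in plain Python; the function name and interface are my own illustration, not from the video:

```python
import random

def dropout(activations, p=0.5, training=True, seed=None):
    # Inverted dropout: during training, each activation is zeroed
    # with probability p, and survivors are scaled by 1/(1-p) so the
    # expected activation is unchanged. At test time it is a no-op.
    if not training or p == 0.0:
        return list(activations)
    rng = random.Random(seed)
    keep = 1.0 - p
    return [a / keep if rng.random() < keep else 0.0
            for a in activations]

acts = [0.2, 1.5, -0.7, 3.0]
print(dropout(acts, p=0.5, seed=0))   # some neurons zeroed, rest scaled
print(dropout(acts, training=False))  # unchanged at test time
```

Because a different random subset of neurons is silenced on every training step, no single neuron can dominate, which pushes the network toward a simpler decision boundary.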

If our model is not overfitting, we do not need Regularization; we apply it only when the model is overfitting.
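The timestamps below mention overfitting in linear regression, where the classic fix is an L2 (weight-decay) penalty added to the loss. As a hedged illustration (this exact code is not from the video), here is 1-D linear regression trained by gradient descent on MSE plus an L2 term, showing that the penalty shrinks the learned weight:

```python
def fit_ridge(xs, ys, lam, lr=0.01, steps=2000):
    # Minimize mean squared error + lam * w^2 by gradient descent.
    # lam = 0 gives plain linear regression; lam > 0 adds L2 decay.
    w, b = 0.0, 0.0
    n = len(xs)
    for _ in range(steps):
        grad_w = sum(2 * (w * x + b - y) * x
                     for x, y in zip(xs, ys)) / n + 2 * lam * w
        grad_b = sum(2 * (w * x + b - y)
                     for x, y in zip(xs, ys)) / n
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

xs = [0.0, 1.0, 2.0, 3.0]
ys = [0.1, 1.1, 1.9, 3.2]
w_plain, _ = fit_ridge(xs, ys, lam=0.0)
w_reg, _ = fit_ridge(xs, ys, lam=1.0)
print(abs(w_reg) < abs(w_plain))  # the penalty pulls the weight toward zero
```

Smaller weights mean smoother, simpler fitted functions, which is the same "simpler model" intuition the description gives for deep networks.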

➖➖➖➖➖➖➖➖➖➖➖➖➖➖➖

Timestamps:
0:00 The Problem
0:56 Overfitting in Deep Learning
2:35 Overfitting in Linear Regression
3:39 Regularization Definition

➖➖➖➖➖➖➖➖➖➖➖➖➖➖➖

This is Your Lane to Machine Learning ⭐

➖➖➖➖➖➖➖➖➖➖➖➖➖➖➖

Comments:

If you found this video helpful, then hit the *_like_* button👍, and don't forget to *_subscribe_* ▶ to my channel as I upload a new Machine Learning Tutorial every week.

CodingLane

How did you learn these concepts so well, and how do you explain them so well? Which resources did you use? It would be great if you also made a video about how to learn deep learning and AI. Thank you for your great explanations and videos!

A.K_

Amazing stuff with an apt explanation👏👏👏

shtakshiupadhyay

Self notes: nullify the effect of certain neurons or parameters; this makes the model simpler and more linear.

maaniksharma