Regularization in a Neural Network | Dealing with overfitting

We're back with another video in our deep learning explained series. In this video, we will learn about regularization. Regularization is a common technique used to deal with overfitting, but how it works and why it helps with overfitting can be hard to understand.

Get your free speech-to-text API token 👇

We go over regularization techniques such as L1, L2, and dropout regularization, learn the underlying logic behind them, and understand how they connect to neural networks.
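
As a concrete illustration (a minimal sketch, not code from the video, assuming a Keras classifier with arbitrary layer sizes and penalty strengths), L1 and L2 weight penalties and a dropout layer can be added like this:

```python
# Minimal sketch (not from the video): L1/L2 weight penalties and dropout in Keras.
# Layer sizes, the input dimension, and the penalty strengths are arbitrary choices.
import tensorflow as tf
from tensorflow.keras import layers, regularizers

model = tf.keras.Sequential([
    layers.Dense(64, activation="relu", input_shape=(20,),
                 kernel_regularizer=regularizers.l2(1e-4)),   # L2 penalty on the layer weights
    layers.Dropout(0.5),                                      # randomly zero 50% of activations during training
    layers.Dense(32, activation="relu",
                 kernel_regularizer=regularizers.l1(1e-5)),   # L1 penalty on the layer weights
    layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```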

00:00 Introduction
00:35 The purpose of regularization
02:54 How regularization works
05:01 L1 and L2 regularization
07:29 Dropout regularization
09:13 Early-stopping
10:03 Data augmentation
11:18 Get your Free AssemblyAI API link now!
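
The last two chapters, early stopping and data augmentation, can also be sketched in a few lines. The following is an illustrative example, not code from the video; train_ds and val_ds are placeholder dataset names.

```python
# Illustrative sketch (not from the video): early stopping halts training once the
# validation loss stops improving; random flips and rotations augment the training images.
# train_ds and val_ds are placeholder tf.data datasets of (image, label) batches.
import tensorflow as tf

early_stop = tf.keras.callbacks.EarlyStopping(
    monitor="val_loss", patience=3, restore_best_weights=True)

augment = tf.keras.Sequential([
    tf.keras.layers.RandomFlip("horizontal"),
    tf.keras.layers.RandomRotation(0.1),
])

# model.fit(train_ds.map(lambda x, y: (augment(x, training=True), y)),
#           validation_data=val_ds, epochs=50, callbacks=[early_stop])
```
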
Comments

I swear this playlist is one of the best resources I have ever seen on these topics. Great explanation. Please continue to upload more of this great content. Many thanks for your time and outstanding effort.

mmostafa

Just wanted to say that these videos are really well done and the speaker really knows what she's talking about. I am doing my PhD right now in mechanical engineering, using deep learning for modeling a production process (steel), and your videos really helped me get a much better grip on what to tune and do with my model. Highly appreciated, thx a lot :)!

nrvzhzn

Another absolutely fantastic, accessible teaching resource on a complex machine learning concept. I don't think there are any resources out there that can match the quality, accessibility and clarity this resource provides.

mmacaulay

Damn, this whole series is like a gold mine ... I was skeptical about how such a well-known topic could be covered in so little time ... thought the videos might not be good, but I'm happy to be proven wrong. THESE ARE GOLD ... thank you @AssemblyAI & thank you very much Ma'am for helping.

harshitvijay

Wow, so useful, thank you for the amazing content. You can feel the confidence of the lecturer, and her explanations are very clear. Watching the whole playlist.

AlexKashie

Overfitting happens frequently in my programs. I tried reducing the number of input parameters, but I know it is not a good solution. I was familiar with L1 and L2 regularisation; this tutorial helped me gain a better understanding of them and other common methods. I tried to decrease both the train and test errors, but I was not successful using regularisation. I hope to do it soon 🙂 Thanks for your illustrative explanations

MrPioneer

I'm a research scholar from India; your videos are just awesome 👍

radethebookreader

Really to the point and excellently delivered.

AlexXPandian

Thank you for this explanation. Like many, I'd imagine, I've bumped into these concepts predominantly via my use of SD. It's nice having an overview of what's being conveyed so I can understand what's happening without getting too bogged down in the minutiae.

Eyetrauma

I had tears in my eyes. absolute gem of a video.

FirstLast-txcw

Brief yet very clear and informative. Thank you.

arminkashani

Great job. The explanation is very clear and easy to understand.

jacobyoung

Your videos are so good, keep up the good work. I have read and watched a lot of content explaining this, and yours is the best.

Ali-Aljufairi

This is an amazing series with concepts well explained. A lot of the other videos dwell on mathematical formulas without explaining the concepts.

rohitkulkarni

The best video with a clear explanation.

mrbroos

Thank you! This gave a good intro before I started reading Ian Goodfellow.

bellion

Great playlist; the content is on point for each topic in minimal time. Please keep up the outstanding work 🤘 and thanks for the content.

malithdesilva

I liked this video very much. You explained all these techniques very well, in my opinion. Thank you.

ferdaozdemir

Very clear and precise explanation. Thanks :)

aryanmalewar

This is very, very helpful. Great explanation. Thank you.

jabessomane