Backpropagation Explained

The most popular optimization strategy in machine learning is called gradient descent. When gradient descent is applied to neural networks, it's called back-propagation. In this video, I'll use analogies, animations, equations, and code to give you an in-depth understanding of this technique. Once you feel comfortable with back-propagation, everything else becomes easier. It uses calculus to help us update our machine learning models. Enjoy!
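As a minimal illustration of that idea, here is gradient descent on a single weight with a made-up quadratic loss (the loss function, starting point, and learning rate are all assumptions chosen just for this example):

```python
# Made-up loss L(w) = (w - 3)**2, whose minimum sits at w = 3.
def loss_gradient(w):
    return 2 * (w - 3)  # dL/dw, from basic calculus

w = 0.0              # arbitrary starting weight
learning_rate = 0.1
for step in range(50):
    w -= learning_rate * loss_gradient(w)  # step downhill along the gradient

print(w)  # converges toward the minimum at w = 3
```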

Code for this video:

Please Subscribe! And like. And comment. That's what keeps me going.

Want more education? Connect with me here:

This video is a part of my Machine Learning Journey course:

More learning resources:

Join us in the Wizards Slack channel:

Sign up for the next course at The School of AI:

And please support me on Patreon:
Sign up for my newsletter for exciting updates in the field of AI:
Comments

At 8:40 I think it is supposed to say something like (sketched in code after this list):
Step 2: weights x inputs + bias
Step 3: calculate loss (mean squared error)
Step 4: find partial derivatives for all weights
Step 5: calculate optimization direction on the computational graph
Step 6: take a step towards the minima / optimized weights
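A minimal NumPy sketch of those six steps, assuming one linear neuron and a mean squared error loss (the toy data and hyperparameters are illustrative, not from the video):

```python
import numpy as np

rng = np.random.default_rng(0)

# Step 1: random initialization
w = rng.normal(size=2)
b = 0.0

X = np.array([[0.0, 1.0], [1.0, 0.0], [1.0, 1.0]])  # toy inputs
y = np.array([1.0, 2.0, 3.0])                        # toy targets

learning_rate = 0.1
for epoch in range(200):
    # Step 2: weights x inputs + bias
    y_hat = X @ w + b
    # Step 3: calculate loss (mean squared error)
    loss = np.mean((y_hat - y) ** 2)
    # Step 4: partial derivative of the loss for every weight
    grad_w = 2 * X.T @ (y_hat - y) / len(y)
    grad_b = 2 * np.mean(y_hat - y)
    # Steps 5 and 6: move each weight against its gradient, toward the minima
    w -= learning_rate * grad_w
    b -= learning_rate * grad_b

print(loss)  # close to 0 after training
```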

RecursiveRuminations

Finally. It's 4 AM in India and I am out of bed waiting for this video.

subhajitdas

7:54 I don't think that backpropagation is a rename of gradient descent.
It's more like backprop followed by gradient descent.
Backprop finds the gradient, and then gradient descent applies "weight = weight - gradient*learning_rate".
So gradient descent only does the weight update; it doesn't need to find the gradient. The gradient is provided by backpropagation.
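That division of labor is easy to see in code. In this sketch (a hypothetical single sigmoid neuron; every name and value is illustrative), backprop walks the chain rule backward to produce the gradient, and gradient descent is nothing but the one-line update the comment quotes:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def backprop(w, x, y):
    """Backpropagation: apply the chain rule backward to get dloss/dw."""
    z = w * x
    y_hat = sigmoid(z)
    dloss_dyhat = 2 * (y_hat - y)      # derivative of (y_hat - y)**2
    dyhat_dz = y_hat * (1 - y_hat)     # derivative of the sigmoid
    dz_dw = x                          # derivative of w * x
    return dloss_dyhat * dyhat_dz * dz_dw

def gradient_descent(weight, gradient, learning_rate=0.5):
    """Gradient descent: only the weight update, as the comment says."""
    return weight - gradient * learning_rate

w = 0.0
for _ in range(500):
    w = gradient_descent(w, backprop(w, x=1.0, y=0.8))
print(sigmoid(w))  # approaches the target 0.8
```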

offchan

This was SO helpful: a concise video packed with so many concepts I needed before I could understand backpropagation.

muktasane

8:35 I feel like this is a mistake in the video. Every step is labeled "Step 1: Random Initialization".

brendanhansknecht

A rap album of ML in cheat-sheet form would be a worthy theme track to overplay. As learning styles vary, sometimes going meta (which at heart means linking use cases with concepts) can help the learner tell the trees from the forest. Much love for the content and the wizard community! <3

dancingwithdestiny

If you can't explain it simply, you don't understand it well enough. You, sir, understand it way better than others 🔥🔥🔥

MayankArora

Maybe a quick mention of the chain rule would have also been nice, because that plays a big part in backpropagation.

Cool video though, it is sort of a quick recap of the course I did last month.
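Since the chain rule comes up here, a tiny numeric sanity check of it (f and g are made-up functions, chosen only for illustration): for f(g(w)), the chain rule says df/dw = f'(g(w)) * g'(w), which backprop applies layer by layer.

```python
def g(w): return w * w        # inner function, g'(w) = 2 * w
def f(a): return 3 * a + 1    # outer function, f'(a) = 3

w = 1.5
analytic = 3 * (2 * w)  # chain rule: f'(g(w)) * g'(w) = 3 * 2w = 9.0
h = 1e-6
numeric = (f(g(w + h)) - f(g(w - h))) / (2 * h)  # central finite difference
print(analytic, numeric)  # both print ~9.0
```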

FuZZbaLLbee

It's incredible to see how much effort you put into these videos; you deserve more subs, not just because you're hella funny, but also because I can feel that your goal is not to make views and money like other youtubers. You're teaching me machine learning, but I'm learning something more from you. Respect from Italy 🇮🇹

dariocardajoli

That rap in the beginning though. Kind of a summary of the whole video xD

aniketbanginwar

Would love to see a video about using Docker. Keep it up Siraj!

akrylic_

Great video as always, man! I've been a fan since your early days; your videos got me into ML, and now I have 2 papers on Quantum Machine Learning on the arXiv. Just wanted to let you know of your positive impact. Keep up the great work educating the world!

GuillaumeVerdonA

It really shaped my thoughts!!
Thanks for this!

decode

Hi! Great video, but maybe you could have spent more time on the backpropagation schema and explained the steps one by one. It seemed to me like the most interesting part, but I still haven't understood it even after the video...

hugoropensourceai

Finally your videos make sense, great work. Your progress is my ease of learning. Looking forward to the next one. Don't quantum computers do optimization particularly well?

bestintentions

Such a great explanation! Could you give me a simple definition of backpropagation?

amirabouamrane

Waking up to this <3 it's going to be a good day!

pallavirana

Great video. I would have liked it more if there had actually been a backprop example with a very simple neural net, but I guess there isn't enough space for that. Try to record the UCLA lecture if possible.

empiricistsacademy

Thank you for your excellent video and perfect English.

denisbaranoff

This is an excellent video, well explained! Thanks!

luisxd