The spelled-out intro to neural networks and backpropagation: building micrograd

This is the most step-by-step spelled-out explanation of backpropagation and training of neural networks. It only assumes basic knowledge of Python and a vague recollection of calculus from high school.

Links:
- "discussion forum": nvm, use youtube comments below for now :)

Exercises:
You should now be able to complete the following Google Colab notebook, good luck!:

Chapters:
00:00:00 intro
00:00:25 micrograd overview
00:08:08 derivative of a simple function with one input
00:14:12 derivative of a function with multiple inputs
00:19:09 starting the core Value object of micrograd and its visualization
00:32:10 manual backpropagation example #1: simple expression
00:51:10 preview of a single optimization step
00:52:52 manual backpropagation example #2: a neuron
01:09:02 implementing the backward function for each operation
01:17:32 implementing the backward function for a whole expression graph
01:22:28 fixing a backprop bug when one node is used multiple times
01:27:05 breaking up a tanh, exercising with more operations
01:39:31 doing the same thing but in PyTorch: comparison
01:43:55 building out a neural net library (multi-layer perceptron) in micrograd
01:51:04 creating a tiny dataset, writing the loss function
01:57:56 collecting all of the parameters of the neural net
02:01:12 doing gradient descent optimization manually, training the network
02:14:03 summary of what we learned, how to go towards modern neural nets
02:16:46 walkthrough of the full code of micrograd on github
02:21:10 real stuff: diving into PyTorch, finding their backward pass for tanh
02:24:39 conclusion
02:25:20 outtakes :)
Comments

The fact that this video is free to watch feels illegal. It really speaks volumes about Andrej. What a stunning explanation. It takes incredible skill and expertise to be able to explain such a complex topic this intuitively and simply. All I can say is thank you from the bottom of my heart that you offer videos like this for free. What an amazing man!

georgioszampoukis

Andrej, the fact that you're making videos like this is AMAZING! Thank you so much for doing this. I will be spending some quality time with this one tonight (and probably tomorrow lol) and can't wait for the next one. Thank you, thank you, thank you!

DrKnowitallKnows

This reminds me of my college courses, except it's way better in three ways: 1) Here the speaker really does know what he's talking about. 2) I can stop and rewind, get definitions, and practice myself before moving on to the next step over and over, so I can get the most out of the next step because I actually had the time to understand the last step. 3) I can do this over several days so I can keep coming back when I'm fresh and present. You are a gem and I really, really appreciate you creating this.

nyariimani

Simply stunning. I'm a 72-year-old fiction writer with rudimentary computer programming skills whose son works professionally in this area. I wanted to gain a better understanding of the technology he's working with, and writes scientific papers about, and now I feel I've made a great start in that direction. Wonderful!

peterdann

When I'm confused about deep learning, I go back to this video and it calms me. It shows that there is a simple explanation waiting for someone like Andrej to show the light.

fhools

Just an FYI for those following along at home: if you are getting an error at 1:54:47, you should add __radd__ to your Value class, similar to __rmul__. It will allow the order of addition to not matter. I don't think it was shown in the earlier sections.

kemalatayev
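
A minimal sketch of the fix this comment describes, assuming a pared-down Value class (gradient tracking and the rest of micrograd's machinery omitted): Python evaluates 2 + v by trying int.__add__ first, which doesn't know about Value, and then falls back to v.__radd__(2), so defining __radd__ (mirroring the __rmul__ shown in the video) makes the operand order irrelevant.

    class Value:
        """Pared-down stand-in for micrograd's Value; gradients omitted."""
        def __init__(self, data):
            self.data = data

        def __add__(self, other):
            # wrap plain numbers so Value(3) + 2 works
            other = other if isinstance(other, Value) else Value(other)
            return Value(self.data + other.data)

        def __radd__(self, other):
            # called for: 2 + Value(3), and by sum(), which starts from 0
            return self + other

        def __mul__(self, other):
            other = other if isinstance(other, Value) else Value(other)
            return Value(self.data * other.data)

        def __rmul__(self, other):
            # called for: 2 * Value(3)
            return self * other

        def __repr__(self):
            return f"Value(data={self.data})"

    print(2 + Value(3.0))                 # Value(data=5.0), works via __radd__
    print(sum([Value(1.0), Value(2.0)]))  # Value(data=3.0); sum computes 0 + Value(1.0) first

The full micrograd repository on GitHub defines the same reflected operators in engine.py, so checking there is another way to verify the fix.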

Finally… someone who understands it well enough to explain it to a beginner. This is hands down the best NN video on the Internet. Thanks a ton!

robl

From github: "Potentially useful for educational purposes." What an understatement. Thank you so much for this video.

ShahafAbileah

I'm really inspired by you as an educator, and I'm very happy to see you sharing your knowledge in a lecture after a long time!

gabrieldornelles

OMG! This is the first time I've ever TRULY understood what's actually going on when training a NN. I've tried to learn so many times, but everyone else seems to make it so unnecessarily complex. Thanks for this!

GregX

I don't understand why I understood each and every thing that Andrej explained. Such a gem of an instructor. Loved how he showed the actual implementation of tanh in the PyTorch library. This video is around 2 hours and 30 minutes long, but it took me 2 weeks to understand it completely.

shubh

You walked the razor-thin edge between going too fast or leaving steps out, and going too slow and making people want to skip ahead, like a magician. (You must have backpropagated the consequence of almost every word to come up with the perfect lecture with the lowest loss!) Thank you so much Andrej, even I was able to keep up, and I am going to show off my knowledge at the pub and library.

peters

This is the single best explanation of backprop in code that I've seen so far. I once implemented a neural network from scratch, except for autograd, so micrograd is a good fit, and it's so clear and accessible. Thanks Andrej!

ThetaPhiPsi

It takes real talent, dedication, and complete mastery of the subject matter to break down difficult technical topics so clearly. It's also clear that Andrej is having fun while he elucidates. This is simply the most amazing series of educational videos on the internet on these topics. I hope you continue to put out more material like this.

nkhuang

Just wanted to say a big thanks to you Andrej and the team working on this. Truly amazing; the clarity with which you explain these things is impressive and inspiring! Looking forward to seeing the remaining videos and even more. Thanks again!

lawrenceadu-gyamfi

Thank you so much for doing a step-by-step simulation of how gradient descent works. I am grateful for the passion and effort you put into teaching. These lessons are essential as we continue to dive deeper into learning.

kerwinmarkgordo

This is literally gold; you have explained everything so intuitively and made it so much easier to understand!
Thank you so much Andrej for sharing this in-depth knowledge for free!

bycloudAI

A great teacher with a great background and expertise. We're lucky to have him spending his time sharing his knowledge with anyone who wants to learn, for free. Looking forward to more videos.

bergonius

Many others have already said it, but thank you so much for making this. I've been trying to learn machine learning for many years now (in several short-lived attempts), and this lesson was a huge missing piece for me. Understanding the calculus behind it all, and really grasping how the weights and biases affect the output, made backpropagation and the learning flow click for me.

chris_piss

I can't even comprehend the level of mastery it must take to be able to distill such a complex topic into such a simple format, and the humility to give it out for free so that others may learn.

Thank you so much Andrej for doing this, you're truly amazing.

harshmalik