Neural Networks from Scratch - P.5 Hidden Layer Activation Functions



#nnfs #python #neuralnetworks
Comments

Bro the effort he puts in to make us understand this stuff is highly admirable. Thanks for doing this man. Will be waiting for pt. 6

tuhinmukherjee

Dude, you're a legend. Bought the ebook pre-order yesterday, absolutely CANNOT WAIT for full release. My favourite thing about your videos is your enthusiasm. For example, at 8:38, "What's so cool about ReLU is it's ALMOST linear, it's close to being linear, but yet that little itty-bitty bit of that rectified clipping at 0, is exactly what makes it powerful; as powerful as a sigmoid activation function, super fast, but this is what makes it work, and it's so cool! So WHY does it work??" Dude, I've never been so PUMPED to learn from someone with such enthusiasm in my LIFE. You take all the time you need to do this man, do it your way, and take your time, and you'll change the world. Thank you so much. Much love from Ireland. edit: spellings
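The "rectified clipping at 0" quoted above is the whole definition of ReLU; a minimal NumPy sketch (not the video's exact code) shows how simple it is:

```python
import numpy as np

def relu(x):
    # Rectified Linear Unit: identity for x > 0, clipped to 0 otherwise
    return np.maximum(0, x)

inputs = np.array([-2.0, -0.5, 0.0, 1.5, 3.0])
print(relu(inputs))  # negative values are clipped to 0: [0. 0. 0. 1.5 3.]
```

Despite being linear almost everywhere, that single kink at 0 is what lets stacked layers of these units model non-linear functions.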

ConorFenlon

18:17 Seeing the neurons fire when activated and die when deactivated really helped me see what goes on under the hood of a neural network. Thanks for this really helpful animation and for the whole nnfs initiative as a whole.

nishantsvnit

This is by far the best explanation of how neural nets work that I have ever found. This should be its own standalone teaching. The sine wave example with visuals - perfect! Thanks so much.

jonathanmorgan

I took my first machine learning course last semester and unfortunately all of the activities we did looked like those from the CS231 class you mentioned--no explanation, just code snippets and output. They were doable, but considering it was most students' first foray into Python, it was quite a rough time to say the least. However, I am extraordinarily pleased to have found your channel and this series in particular--your instruction has helped more in the last 5 videos than my entire semester at university. Thanks for doing what you do.

karatekid

Bro, I just felt obligated to leave a comment for the perfect video you have made. This was literally the best visualization I have ever seen on YouTube. This video deserves an Oscar.

emado.

I really struggled with the explanation of feature sets / features / samples / classes; I definitely don't think I fully get it (the first time that has happened in this series so far!). The animation you mentioned would for sure help!

RoughlyAverage

I know I'm late to the party, but the animations are amazing. I watched the double neuron part probably 20 times with the sound off to figure out what was going on. I had a recommendation for the animation and as I was typing it, I realized that I STILL didn't fully understand what was going on. I've got it now - thank you for the animations! This would be MUCH more difficult without them.

Specifically - the input of the second neuron going "backwards" was bending my brain.

TonyTheTrain

Hey sentdex, since the other parts are still in the works I'd like to give some feedback. Thanks for doing all this; the graphics help a ton to see how everything works. The only suggestion is to explain why the different concepts even exist, with some real-life examples. This looks like it would be great for someone experienced who has used activation functions and everything else you discuss, and now would like to see closely how it works. For a noob like me, it is not clear why they even exist, and it feels a bit like we are just listing different concepts without a clear picture of why, and what we are trying to achieve with this network. For example, when you were showing how well the ReLU fits the data, it's not clear if that is actually desirable, since it seems to overfit the data.

mariyanzarev

This video just blew my mind. I still haven't bought the NNfS book yet, but that doesn't reflect how much I love to watch and re-watch your videos. This series will probably stay state-of-the-art for a long time. Thank you!

mizupof

I'm going to be very good someday at building/training neural nets. It's all because of my curiosity that made me stumble on this fantastic series. Now I'm reading your book and practicing (coding after reading between the lines and understanding the theory) and consulting this playlist and several other resources in order to gain a deeper understanding. Thank you so much for being really amazing.

muna

The Y axes being shorter than the X axes just made a perfect explanation imperfect. The biggest problem being that I don't know if you're using the same scale on the little graph indicators above the neurons.

lucadellasciucca

40 mins! Oh boy, this is gonna be good

vedangpingle

This is the best explanation I have seen of why ReLU creates a non-linear line...
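The non-linearity this comment refers to can be seen with just two ReLU neurons; a tiny sketch with hypothetical weights and biases (not the values from the video):

```python
import numpy as np

def relu(x):
    # Rectified Linear Unit
    return np.maximum(0, x)

# Two ReLU units, each contributing one "bend" to the output.
# Their sum is piecewise linear rather than a single straight line.
x = np.linspace(-1, 1, 5)      # [-1, -0.5, 0, 0.5, 1]
h1 = relu(1.0 * x + 0.5)       # bends at x = -0.5
h2 = relu(-1.0 * x + 0.5)      # bends at x = 0.5
y = h1 + h2                    # output neuron summing both units

print(y)  # [1.5 1.  1.  1.  1.5] -- a valley shape, not a line
```

With many such units, each placing its own bend, a network can trace out arbitrarily curvy targets like the sine wave shown in the video.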

KumR

I just wanted to thank you for all this stuff. I am in the process of getting a PhD in neuroscience, and artificial neural networks seem like a great tool to help with research. You make it really clear, and unlike other tutorials that tend to just show how to use certain libraries, you really get down to how they actually work. As soon as the book is out I am getting a physical copy!!

crohno

My guy cannot decide where to put his camera

horticultural_industries

This week was the hardest to pass of this quarantine! Please don't make us wait so long 🙏🏻🙏🏻🥺🥺

parasjain

Finally, a video giving clear insight into activation functions.
This is by far the best explanation of activation functions I've come across. I really appreciate your work behind this series and getting into the crux of these topics.

NikhilSinghNeil

This is the best explanation video of how activation functions work on the WWW 🚀. And thank you to the one who put his time and effort into creating such beautiful animations for us. Thank you very much ❤

zendr

Absolutely brilliant explanation as to why a non-linear activation function can lead to good mapping of desired non-linear outputs. This is actually an extremely pertinent topic in my field of study (electrical engineering, power systems, which for three-phase AC circuits have non-linear power flow solutions). Seeing "how" these ReLU neurons can model non-linear functions is absolutely mind-blowing. Bravo!

michaeljburt