Neural Networks - The Math of Intelligence #4

Have you ever wondered what the math behind neural networks looks like? What gives them such incredible power? We're going to cover 4 different neural networks in this video to develop an intuition around their basic principles (2 feedforward networks, 1 recurrent network, and a self-organizing map). Prepare yourself, deep learning is coming.

Code for this video (with coding challenge):

Hammad's winning code:

Ong's runner-up code:

More learning resources:

Please subscribe! And like. And comment. That's what keeps me going.

Follow me:
Sign up for my newsletter for exciting updates in the field of AI:
Comments

Wow... this 11-minute video took me 2 hours to understand most of it. You did a really good job packing ALL that information into such a short amount of time. Great job Siraj, keep up the good work!

danielparrado

This video went from 0 - 100 real quick.

PaulGoux

Those beats... deserved a rewind all on their own. A beat soufflé, I would say.

lionelt.

Hey Siraj, you're AWESOME! Nothing less. I am watching your videos to learn machine learning while my college admissions are going on. Never stop, because I too want to see AI solved in my lifetime.

basharjaankhan

All the talk about neural networks, from conferences to individual series, is cool, but what a lot of people aren't clearing up is exactly how to apply them to real-world examples. It's like giving a person an engine and showing how the engine itself works, but one person may want a car engine, another a boat engine, another a jet engine, and another whatever engine the Starship Enterprise uses. So in actuality, there is not really any information on how programmers can apply neural networks to whatever problem they have.

Murderface

I really appreciate all the work you're doing with these videos. Sorry for my caustic comments before. I am a rank amateur. Your videos are getting better and better.

Throwingness

Thank you, Mas Siraj, I was given an assignment because of you.

novansyahherman

Take a shot every time he says function.

Great vid btw

saminchowdhury

Hello Siraj, please make a video about what is necessary to start learning machine learning, like the basic math and the programming languages one should learn before starting.
Sorry for my English, I'm Brazilian. Thanks!

davidutra

Can anyone please explain to me why the derivative of the sigmoid function is taken as x*(x-1)?

tanmayrauth
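
A note on this: in the video's code, x is the sigmoid's output, not its raw input, and the sign is 1 - x rather than x - 1. The derivative of the sigmoid s(z) = 1 / (1 + e^-z) is s(z) * (1 - s(z)), which becomes x*(1 - x) once x holds the activation. A quick numpy check of the identity (the names here are just for illustration):

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

z = np.linspace(-4.0, 4.0, 9)
x = sigmoid(z)                               # x is the sigmoid *output*

shortcut = x * (1 - x)                       # the form used in the video
direct = np.exp(-z) / (1 + np.exp(-z)) ** 2  # d/dz of 1 / (1 + e^-z)

print(np.allclose(shortcut, direct))         # True: the two forms agree everywhere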

Thanks for the amazing info, mate.
In the fast.ai course they say one should learn the code first and then the theory, but in my opinion you prove them wrong.
Thanks again, my friend.

wibiyoutube

This video is awesome!!!! Thank you so much :)

Tozziz

Kind of late, but could somebody explain why the random weight matrix at 2:15 is multiplied by 2 and minus 1? I tried without them and it worked pretty much the same, but I'm doing the simple AF one...

jnchuika
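
The multiply-by-2-and-subtract-1 just recenters the initialization: np.random.random draws from [0, 1), so 2*np.random.random(...) - 1 maps the weights to [-1, 1) with mean roughly zero. Zero-mean starting weights generally behave better, but a toy network can train fine either way, which matches what you saw. A small sketch of what the transform does:

import numpy as np

np.random.seed(0)
raw = np.random.random((3, 4))           # uniform in [0, 1), mean ~ 0.5
centered = 2 * raw - 1                   # uniform in [-1, 1), mean ~ 0.0

print(raw.min(), raw.max())              # stays inside [0, 1)
print(centered.min(), centered.max())    # stays inside [-1, 1)
print(raw.mean(), centered.mean())       # ~ 0.5 vs ~ 0.0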

Can someone clarify the part at 2:26 about the dot product and matrix multiplication? It says they're the same, while they're completely different: a dot product produces a scalar, and matrix multiplication produces a matrix.

Wherrimy
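
The overlap is that numpy exposes both through one function, which is probably why the video blurs them: np.dot on 1-D arrays returns the scalar dot product, while on 2-D arrays it performs matrix multiplication, where every entry of the result is itself the dot product of a row with a column. A quick illustration:

import numpy as np

a = np.array([1, 2, 3])
b = np.array([4, 5, 6])
print(np.dot(a, b))      # 32 -- 1-D inputs give a scalar dot product

A = np.array([[1, 2],
              [3, 4]])
B = np.array([[5, 6],
              [7, 8]])
print(np.dot(A, B))      # [[19 22], [43 50]] -- 2-D inputs give a matrix product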

I liked the "LOVE" equation, it was too good... Thanks Siraj :)

BiranchiNarayanNayak

Siraj, could you kindly provide us with an example (tutorial) on how to properly update a trained deep learning model based on new data (let's say from a sensor)?

ebimeshkati
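
Not a full tutorial, but the usual pattern is incremental training: keep the already-trained weights and take additional gradient steps on each new batch instead of retraining from scratch. A minimal sketch using scikit-learn's partial_fit (the model size, shapes, and random "sensor" data are all hypothetical stand-ins):

import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# initial training on historical data (shapes invented for the sketch)
X_old, y_old = rng.random((200, 4)), rng.integers(0, 2, 200)
model = MLPClassifier(hidden_layer_sizes=(16,), max_iter=300)
model.fit(X_old, y_old)

# later: a fresh batch arrives from the sensor -- update the model in place
X_new, y_new = rng.random((20, 4)), rng.integers(0, 2, 20)
model.partial_fit(X_new, y_new)

One caveat: repeated updates on only new data can make the model drift away from what it learned before (catastrophic forgetting), so it is common to mix in some older samples as well.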

I have a problem with the last line of code. In your notebook you have this:
#testing
print(activate(np.dot(array([0, 1, 1]), syn0)))
[ 0.99973427 0.98488354 0.01181281 0.96003643]
When I just copy-pasted this I got a NameError. Then I added 'from numpy import array' and got a different result from the activation function, something like [ 0.36375058]. What's the problem?
Also, in the line layer2_gradient = l2_error*activate(layer2, deriv=True) you reference l2_error; it should be layer2_error instead. Thank you.

DrewBive
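
On the NameError and the mismatched output: the printed numbers in the notebook come from syn0 *after* training, so re-running the test cell with freshly initialized weights (or a differently shaped syn0) will print different values. A self-contained version of that test cell with the missing import added (activate and syn0 here are stand-ins for the notebook's trained versions):

import numpy as np

def activate(x, deriv=False):
    # sigmoid; with deriv=True, x is assumed to already be the sigmoid output
    if deriv:
        return x * (1 - x)
    return 1.0 / (1.0 + np.exp(-x))

# stand-in weights -- the notebook's syn0 holds the values *after* training,
# which is why fresh random weights print different numbers
np.random.seed(1)
syn0 = 2 * np.random.random((3, 4)) - 1

# testing
print(activate(np.dot(np.array([0, 1, 1]), syn0)))

And the variable-name catch is right: the gradient line should read layer2_gradient = layer2_error * activate(layer2, deriv=True).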

Hey, how do we optimize the total number of hidden layers and the number of neurons in each layer for a model?

E.g., an image recognition problem might be solved with 2 hidden layers of 100 neurons each, but the same problem could also be solved with 5 layers of 400 neurons each.

So how do we optimize these numbers?

ranojoybarua
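
There is no closed-form answer; depth and width are hyperparameters, and the standard approach is to try a few candidate architectures on a validation set and keep the smallest one that scores well, since smaller networks train faster and overfit less. A quick sketch of that search with scikit-learn (not from the video; MLPClassifier and the digits dataset are just illustrative stand-ins):

from sklearn.datasets import load_digits
from sklearn.model_selection import GridSearchCV
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)

# candidate architectures, treated purely as hyperparameters
param_grid = {
    "hidden_layer_sizes": [(100,), (100, 100), (400, 400, 400, 400, 400)],
}

search = GridSearchCV(MLPClassifier(max_iter=500), param_grid, cv=3)
search.fit(X, y)
print(search.best_params_)   # the smallest architecture that validates well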

Siraj, I wonder.
The sigmoid function is y = 1 / (1 + e^-x). Its derivative is equal to e^x / (e^x + 1)^2.
Why in this video are you using a different function as the derivative, x*(1 - x)?

MrDominosify
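
They're the same expression in disguise. Differentiating y = 1 / (1 + e^-x) gives e^-x / (1 + e^-x)^2 (equivalent to your e^x / (e^x + 1)^2), and that factors as y * (1 - y); the video's x*(1 - x) is this factored form, with x standing for the sigmoid's output rather than its input. A symbolic check (sympy used purely for verification):

import sympy as sp

z = sp.symbols('z')
sigma = 1 / (1 + sp.exp(-z))          # y = 1 / (1 + e^-x) from the comment

exact = sp.diff(sigma, z)             # e^z / (e^z + 1)^2 after simplification
shortcut = sigma * (1 - sigma)        # the video's x*(1 - x), with x = sigma

print(sp.simplify(exact - shortcut))  # 0 -> the two expressions are identical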

When time matters in the input sequence, that's when an RNN comes in. Good.

computersciencebasis