Teaching our neural network to think - Let's code a neural network in plain JavaScript Part 2

🔗 Playlist for this series

🔗 Code from the episode

🔗 (Patrons only) Official discussion topic for this episode on the Fun Fun Forum:

🔗 Support the show by becoming a patron on Patreon

🔗 Daniel Shiffman video that inspired this video

🔗 Intro to ObservableHQ

🔗 But what is a neural network by 3blue1brown

🔗 mpj on Twitter

🔗 Help translate the show to your language

Let’s make a neural network, completely from scratch, in JavaScript! No machine learning libraries, no prior knowledge of machine learning, statistics, or advanced math, and no diving into neuroscience: just plain code. In this episode, we look at how to improve the training of our neural network using a learning rate.
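
To make the idea concrete, here is a rough sketch of a single neuron trained with a learning rate. The names (guess, train, weights, point) and the 0.1 value are illustrative assumptions, not necessarily the exact code from the episode:

// Sketch of a single-neuron classifier with a learning rate.
// Names and the 0.1 value are illustrative assumptions.
const learningRate = 0.1;

// Guess which team a point belongs to: the sign of the weighted sum.
function guess(weights, point) {
  const sum = point.x * weights.x + point.y * weights.y;
  return sum >= 0 ? 1 : -1;
}

// Nudge the weights toward the expected team by a fraction of the error.
function train(weights, point, expectedTeam) {
  const error = expectedTeam - guess(weights, point); // always -2, 0 or 2
  return {
    x: weights.x + point.x * error * learningRate,
    y: weights.y + point.y * error * learningRate,
  };
}

The learning rate scales each correction down, so one noisy point can't yank the weights too far in a single step.
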
Comments

This is my new Monday morning "talk-show" to watch while I drink my coffee! Love the entertaining exposure to JS.

Kalmanheyn

I love your show! Always challenging us with something new. Your old videos remind me of how much I've learned since I started watching, your new videos consistently remind me I have much more to learn. Thank you!

jacksonlenhartmusic

I was kind of confused by your explanation at first, but it was pretty clear after the tests! Thanks a lot!!

carlos.arenas

The sum isn't just a blob: the idea behind the neuron is to take inputs and transform them into one single output, which in our case can only take two values (-1, 1). So depending on whether the sum is positive or negative, we know the output. The reason we multiply might just be that it's easier to generate a number that can immediately be classified into two possible groups; in our case we just need to check if it's positive or negative. If we did another operation, like adding the weight, we would need to work with a scale, which seems less intuitive. So technically you could use any other method, as long as you can find a way to classify the results into two possible outputs.

ruimmvilela
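
A tiny sketch of that reading (the point/weights shapes follow the series, everything else is illustrative): the neuron boils its inputs down to one number, and only which side of zero that number lands on decides the team.

// The "blob": one number produced from the inputs and weights.
const weightedSum = (weights, point) =>
  point.x * weights.x + point.y * weights.y;

// Classifying it is just a sign check; any rule that splits the number
// into exactly two groups would work, this one is simply the easiest.
const team = (weights, point) => (weightedSum(weights, point) >= 0 ? 1 : -1);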

I think usually you would constrain the input and output values from -1 to 1. The input would be scaled (point.x/maximumWidth) and the output would be sig(sum).

DerIstDerBeste
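
Roughly what that scaling could look like; maximumWidth/maximumHeight and the choice of tanh as the squashing function are assumptions on my part, not code from the video:

// Assumed canvas size for scaling the raw coordinates.
const maximumWidth = 400;
const maximumHeight = 400;

function normalize(point) {
  return { x: point.x / maximumWidth, y: point.y / maximumHeight };
}

// tanh squashes any sum into the range (-1, 1), similar to the
// sigmoid-style output the comment mentions.
function squashedOutput(weights, point) {
  const p = normalize(point);
  const sum = p.x * weights.x + p.y * weights.y;
  return Math.tanh(sum);
}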

2:08 "it's just some bloody number" - this hits the nail on the head of the power, and limits, of neural networks.
A neural net takes inputs and make outputs based on simple calculations. We can train the network - that's the great thing - but the parameters and the computation "mean" nothing - that's the terrible thing.


The result is, neural nets give results (a trained system that does what we want) but the results don't have a reason for being this way, other than "we trained it and the parameters are this now". Working out what the parameters "mean" once the network is trained, is one of the major problems of neural network study.

cboisvert

"Gradient descend" may scare those who didn't have any experience on AI or Machine learning, your way of mentoring is good and more acceptable for many people afraid of Math and Data, cool

mengyangchen

Kinda late to the party here, but to consolidate my understanding of the idea and summarize the steps:

1) We get a guessed value from the randomly weighted points (the magical sum).

2) Instead of determining which team it falls into with a formula, we write a training function that uses that formula to measure the error deviation.
In other words, we aren't using the formula to determine the numbers, we are using it to determine the error.

3) By providing an error to the weight-training (buffed BRUH!!!!) function, we see how far from
weights.x
weights.y
we actually are... (the error ranging from -2 to 2)
for example, if error = 0, then x = weights.x + (point.x * 0),
so x = weights.x
if error = -2, then x = weights.x - point.x * 2

4) trainedWeights repeats this process with a few points, each step training (correcting) the weights from the previous one.
This is where, technically*, over a certain number of iterations (points), the "correction" of the error should tend to 0.

*The "technically" part is the one that remains unclear. Could you maybe put the actual "correction" into words, please?

tekv
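
One way to put the "correction" into code: keep nudging the weights until every example is guessed correctly, at which point the error, and therefore the correction, is zero. The train/guess definitions and the labeled points below are illustrative, not the notebook's code:

const learningRate = 0.1; // assumed value
const guess = (w, p) => (p.x * w.x + p.y * w.y >= 0 ? 1 : -1);
const train = (w, p, expectedTeam) => {
  const error = expectedTeam - guess(w, p); // always -2, 0 or 2
  return {
    x: w.x + p.x * error * learningRate,
    y: w.y + p.y * error * learningRate,
  };
};

let weights = { x: Math.random() * 2 - 1, y: Math.random() * 2 - 1 };
const examples = [
  { point: { x: 0.8, y: -0.3 }, team: 1 },  // made-up labeled points
  { point: { x: -0.5, y: 0.9 }, team: -1 },
];

for (let i = 0; i < 50; i++) {
  for (const { point, team } of examples) {
    // While a point is still guessed wrong, error is ±2 and the weights move;
    // once every guess matches its team, error is 0 and the weights settle.
    weights = train(weights, point, team);
  }
}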

First. Btw, liking these videos, Mpj. Keep up the good work!

ryanisvibing

Some time ago, while I was in college, we had a mini course on something called genetic algorithms. Say you want to find the maximum of a mathematical function: you start by running random values through it, and based on the results and a fitness function you compute new values from the old ones and rerun the cycle many times. As the generations go by, the new values tend to be better than the old ones (you could say they converge toward at least one of the maximum points of the function). It's somewhat similar to what happens here.

victorb
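
A rough sketch of that genetic-algorithm idea, for comparison: keep the fittest random guesses, mutate them, and repeat. The function and all the numbers are made up for illustration:

// f is an arbitrary function we want to maximize; its peak is at x = 3.
const f = x => -(x - 3) * (x - 3) + 5;

// Start with 20 random candidate values.
let population = Array.from({ length: 20 }, () => Math.random() * 10);

for (let generation = 0; generation < 100; generation++) {
  // Fitness here is just f(x); keep the top half ("survivors").
  const survivors = [...population].sort((a, b) => f(b) - f(a)).slice(0, 10);
  // Offspring are survivors with small random mutations.
  const offspring = survivors.map(x => x + (Math.random() - 0.5) * 0.5);
  population = [...survivors, ...offspring];
}
// population now clusters around x = 3, where f is maximized.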

How are you able to assign variables inside the object trainedWeights at 13:51? What is that called? It's not a function.

So is it like an IIFE (Immediately Invoked Function Expression)?

nazarm
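
That appears to be ObservableHQ cell syntax rather than plain JavaScript: when a cell's body is wrapped in curly braces, the braces delimit a block, and whatever the block returns becomes the cell's value. In plain JavaScript the closest equivalent is indeed an IIFE. A minimal sketch, with placeholder values standing in for the train(...) calls:

// Observable cell (only works inside an Observable notebook):
//
//   trainedWeights = {
//     const first = train(initialWeights, point1, team1);
//     const second = train(first, point2, team2);
//     return second;
//   }
//
// Plain-JavaScript equivalent: an IIFE that runs the block immediately
// and assigns whatever it returns.
const trainedWeights = (() => {
  const first = 1;           // stand-ins for the train(...) calls above
  const second = first + 1;
  return second;
})();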

Still a tad bit confused lol... It's gonna take me some time to get this down, but it's good. Hope to see how I can implement this into some type of game hack...

xploit

Did anyone else get tripped up by this syntax?

trainedWeights = {
  const example1 = ...
  ...
}

Can someone explain to me how that works? I can't seem to find anything on Google.
I'm building this in React, not Observable, if that helps.

dillonharless
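
Outside Observable (in a React project, for example), that cell can be written as an ordinary function instead. The name computeTrainedWeights and the reduce shape below are just one possible translation, not the video's code:

// Fold a list of labeled example points into the weights,
// like the chain of consts inside the Observable cell.
function computeTrainedWeights(train, initialWeights, examples) {
  return examples.reduce(
    (weights, { point, team }) => train(weights, point, team),
    initialWeights
  );
}

// Usage (train, initialWeights and examples come from your own module):
// const trainedWeights = computeTrainedWeights(train, initialWeights, examples);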

Thank you for this awesome video, I am waiting for the next one!

AbhishekKumar-mqtt

So, about the train function: the calculation that returns the new trained weights (weight + point * error), is that something quite generic (where does it come from, what mathematical process is it based on), or is it something very specific to this problem? Love the video, thanks! :)

domski
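
That update is essentially the classic perceptron learning rule rather than something invented for this problem; with a learning rate it reads newWeight = weight + input * error * learningRate. A quick sanity check with made-up numbers shows why it nudges the guess the right way:

// Suppose the network guesses -1 but the expected team is 1.
const learningRate = 0.1;
const point = { x: 2, y: 1 };
let weights = { x: -0.4, y: -0.1 };

const sumBefore = point.x * weights.x + point.y * weights.y; // -0.9 -> guess -1
const error = 1 - (-1);                                      // expected - guessed = 2

weights = {
  x: weights.x + point.x * error * learningRate, // -0.4 + 0.4 = 0.0
  y: weights.y + point.y * error * learningRate, // -0.1 + 0.2 = 0.1
};

const sumAfter = point.x * weights.x + point.y * weights.y;  // 0.1 -> guess 1
// Adding point * error always pushes the sum toward the expected team,
// and the learning rate controls how big each push is.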

Was the video long? Felt like it was too short...

luanlmd

Wow MPJ, this video is cool. It actually simplifies the hidden layer of the neural network. I think the margin of error is acceptable because you want it to think like a human and not like GOD. Therefore never make your AI results precise, keep them experienced.

djsamke

So at first I thought you would use some library that does the AI, but it turned out even better, since we created our own AI (so to speak). My question, though, is: are you going to introduce us to some libraries that help in different scenarios (such as image recognition, voice recognition, etc.)? Because, like you mentioned, this is a basic example that helps us understand the concept, but isn't actually practical.

ilmammourtada

Sorry for the dumb question, but could you explain the mathematical logic behind the weight + error formula? Why does it improve the AI?

andresgutgon
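
A rough way to see it, using the two-weight setup from the series (the little helper below is only for illustration): adding point * error to each weight always shifts the weighted sum in the direction of the error, so the next guess for that point moves toward the right team.

// old sum = x*wx + y*wy
// new sum = x*(wx + x*error*rate) + y*(wy + y*error*rate)
//         = old sum + error * rate * (x*x + y*y)
//
// (x*x + y*y) is never negative, so the sum moves up when the guess was
// too low and down when it was too high; the learning rate sets the step size.
const shiftInSum = (point, error, learningRate) =>
  error * learningRate * (point.x * point.x + point.y * point.y);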

Have you ever made any vanilla JS classes... for beginners?

geraldfoushee