Neat AI does XOR Mutate

The final installment in the Neat XOR series, which details how weights, nodes and connections are added when the genomes mutate.

Comments

I'ma come back to this channel one day and find he has hundreds of thousands of subs, knowing I was one of the first

jpeg

Implementing NEAT is really hard and annoying because specific details are not provided even in the original paper… content like this is truly helpful!!! Thank you very much

상당히-mf

Commenting for the algorithm. You need and deserve more visibility.

nastrimarcello

This is going to hit the algorithm. Just have a feeling it will

magneticchaos

Your videos are very informative! There is just one thing I didn't understand: how do you assign an ID to a new node? I know the connections need a new innovation number and the new nodes need IDs, but if I just add a new node to one individual, isn't it possible that another individual, which already has another node assigned, could create a different ID for the neuron in the same spot?

thiagohenrique
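
This node-id question is usually answered with a population-wide registry, so that two genomes making the same structural mutation in the same generation receive the same innovation number and the same node id. A minimal C++ sketch (the struct and all names are illustrative, not the video's actual code):

#include <map>
#include <utility>

// Population-wide bookkeeping: identical structural mutations map to
// identical ids, so "the neuron in the same spot" lines up across genomes.
struct InnovationTracker {
    std::map<std::pair<int, int>, int> connInnovations; // (from, to) -> innovation
    std::map<int, int> splitNodes; // innovation of split connection -> node id
    int nextInnovation = 0;
    int nextNodeId = 0;

    int connectionInnovation(int from, int to) {
        auto key = std::make_pair(from, to);
        auto it = connInnovations.find(key);
        if (it != connInnovations.end()) return it->second; // seen before: reuse
        return connInnovations[key] = nextInnovation++;
    }

    int nodeIdForSplit(int splitInnovation) {
        auto it = splitNodes.find(splitInnovation);
        if (it != splitNodes.end()) return it->second; // same spot, same id
        return splitNodes[splitInnovation] = nextNodeId++;
    }
};

Because node ids are keyed by the innovation number of the connection being split, two individuals that split the same connection agree on the new node's id.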

Any good sources for a beginner in machine learning?

nastrimarcello

Would it make sense to also have mutations that just disable a connection (without adding a node), as well as mutations that disable nodes? And of course mutations to re-enable them. Also mutations that fully remove nodes and connections (I guess that could help prune useless stuff).

typicalhog
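
Those operators are easy to bolt on; here is a rough sketch with made-up rates (nothing here is from the video). One caveat: hard-removing genes complicates NEAT crossover, since genes are aligned by innovation number, which may be why most implementations only toggle the enabled flag.

#include <cstddef>
#include <cstdlib>
#include <vector>

struct Connection { int from, to; double weight; bool enabled; };

// Made-up mutation rates; disable/enable just flip the flag, while
// removal erases the gene outright (which can upset crossover alignment).
void mutateStructure(std::vector<Connection>& conns) {
    if (conns.empty()) return;
    double r = std::rand() / (double)RAND_MAX;
    std::size_t i = std::rand() % conns.size();
    if (r < 0.05)      conns[i].enabled = false;        // disable a connection
    else if (r < 0.10) conns[i].enabled = true;         // re-enable one
    else if (r < 0.12) conns.erase(conns.begin() + i);  // prune it entirely
    // add-node, add-connection, weight perturbation handled elsewhere
}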

For some reason, my NEAT XOR decides to use the gaussian (std::exp(-(x * x))) activation function almost every time (like 9/10 runs).
The optimal net has 2 input nodes and 1 output node, with both inputs connected to the output. (No hidden nodes, because each node has its own bias, so it can solve XOR with just 3 nodes and 2 connections. I know it may be better to use a dedicated bias node and let the nodes make connections from it, but this seems simpler and semantically more similar to natural NNs.)

This makes me think it actually is beneficial to let nodes mutate their activation function. (I can't remember if you said you are already doing that or not).

typicalhog
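
The gaussian trick checks out: with a per-node bias, a single output node can carve out the XOR region. A quick standalone check with hand-picked (not evolved) weights:

#include <cmath>
#include <cstdio>

// Gaussian activation from the comment above.
double gauss(double x) { return std::exp(-(x * x)); }

int main() {
    // Hand-picked illustrative weights: w1 = w2 = 3, output bias = -3.
    // The weighted sum is 0 only for the (0,1)/(1,0) cases, exactly
    // where the gaussian peaks at 1; elsewhere it is +/-3, giving ~0.
    const double w1 = 3.0, w2 = 3.0, bias = -3.0;
    for (int a = 0; a <= 1; ++a)
        for (int b = 0; b <= 1; ++b)
            std::printf("%d XOR %d -> %.3f\n", a, b,
                        gauss(w1 * a + w2 * b + bias));
    return 0;
}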

Can you create a video on some implementation details of the topology itself: the calculation (runNetwork method), etc.? It's also interesting how you change the LayerNumber of each node during the add-node mutation; just recalculating each node's layer as a path to the input seems slow.

DimitarDimitrov-ccqy
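
On the runNetwork question: a layer-ordered evaluation is the usual approach. The types and names below are assumptions (the video's actual code isn't shown); the idea is that a node only fires once everything feeding it has already been computed:

#include <cmath>
#include <map>
#include <vector>

struct Connection { int from, to; double weight; bool enabled; };
struct Node { int id, layer; double bias; };

// Evaluate a feed-forward net by visiting nodes in ascending layer
// order, so every incoming value already exists when a node fires.
// Assumes `nodes` is sorted by layer and inputs sit in layer 0.
std::map<int, double> runNetwork(const std::vector<Node>& nodes,
                                 const std::vector<Connection>& conns,
                                 const std::map<int, double>& inputs) {
    std::map<int, double> value = inputs; // node id -> activation
    for (const Node& n : nodes) {
        if (n.layer == 0) continue; // inputs are given, not computed
        double sum = n.bias;
        for (const Connection& c : conns)
            if (c.enabled && c.to == n.id)
                sum += c.weight * value[c.from];
        value[n.id] = std::tanh(sum); // activation choice is arbitrary here
    }
    return value;
}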

When adding a new node, you said the weight of the first connection is set to the weight of the disabled one and the second is set to a random value. I believe in his paper Ken sets the connection into the new node to 1 and gives the connection out of it the old weight (to make node addition less impactful). Do you think it's better to do it your way, or were you not aware of the other approach? Maybe it's irrelevant, too.

typicalhog
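
For reference, here is the paper's convention (Stanley & Miikkulainen: the link into the new node gets weight 1, the link out keeps the old weight, so behaviour barely changes at the moment of mutation), sketched on assumed minimal types:

#include <cstddef>
#include <vector>

struct Connection { int from, to; double weight; bool enabled; };
struct Node { int id; double bias; };

// Split a connection per the paper: disable it, insert a node, give the
// incoming link weight 1 and the outgoing link the old weight.
void addNode(std::vector<Node>& nodes, std::vector<Connection>& conns,
             std::size_t splitIndex, int newNodeId) {
    Connection old = conns[splitIndex]; // copy before push_back reallocates
    conns[splitIndex].enabled = false;
    nodes.push_back({newNodeId, 0.0});
    conns.push_back({old.from, newNodeId, 1.0, true});      // in: weight 1
    conns.push_back({newNodeId, old.to, old.weight, true}); // out: old weight
}

Either convention evolves fine; randomising one of the new weights just makes the mutation a bigger behavioural jump.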

Does it even make sense to re-enable a connection if there has been a node placed onto it already? Wouldn't it just "bypass" the non-linearity caused by adding a node?

typicalhog

Hey, how did you do the algorithm for finding which layer the node is in? Any code, examples or help would be appreciated!

mythic
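
One common way to assign layers (an assumption; the video's method isn't shown) is longest-path relaxation from the inputs, which also copes with nodes inserted mid-connection:

#include <map>
#include <vector>

struct Connection { int from, to; bool enabled; };

// A node's layer is the longest enabled path from any input node.
// Repeated relaxation is O(nodes * connections), fine for small nets;
// the graph must be acyclic (feed-forward) or this loops forever.
std::map<int, int> assignLayers(const std::vector<int>& nodeIds,
                                const std::vector<Connection>& conns) {
    std::map<int, int> layer;
    for (int id : nodeIds) layer[id] = 0; // inputs stay at layer 0
    bool changed = true;
    while (changed) {
        changed = false;
        for (const Connection& c : conns) {
            if (!c.enabled) continue;
            if (layer[c.to] < layer[c.from] + 1) {
                layer[c.to] = layer[c.from] + 1; // push target one deeper
                changed = true;
            }
        }
    }
    return layer;
}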

I get a problem when I add lots of connections where there is no path back to the input layer. Any reason why this would be?

stephenlavender
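
A common cause is orphaned hidden nodes: once links get disabled, or a new connection lands between two hidden nodes, a node can lose every enabled route back to the inputs. A small diagnostic (types assumed) that walks enabled connections backwards:

#include <set>
#include <vector>

struct Connection { int from, to; bool enabled; };

// Depth-first walk backwards along enabled connections; returns true
// if `nodeId` can still reach some input node. Useful for spotting
// hidden nodes that mutation has cut off from the input layer.
bool reachesInput(int nodeId, const std::set<int>& inputIds,
                  const std::vector<Connection>& conns) {
    std::set<int> visited;
    std::vector<int> stack{nodeId};
    while (!stack.empty()) {
        int cur = stack.back();
        stack.pop_back();
        if (inputIds.count(cur)) return true;
        if (!visited.insert(cur).second) continue; // already explored
        for (const Connection& c : conns)
            if (c.enabled && c.to == cur) stack.push_back(c.from);
    }
    return false;
}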

What is the chance of a node being added?

myyraankka-