Neat AI does Recurrent Connections

This one deals with recurrent connections in the NEAT algorithm and the impact they can have on both the XOR solution and the Asteroids game pilot.

In Asteroids I allow them to be enabled, and there is a marked difference in the human-like play that emerges as a result.

I also have an XOR solution in Excel so you can see how the data flows from node to node and how the fitness function is built up.
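
The spreadsheet's exact formulas aren't reproduced here, but a typical NEAT XOR fitness evaluation looks roughly like this Python sketch (the net.activate call and the "4 minus summed squared error" scoring are assumptions, not necessarily what the workbook uses):

    XOR_CASES = [((0.0, 0.0), 0.0),
                 ((0.0, 1.0), 1.0),
                 ((1.0, 0.0), 1.0),
                 ((1.0, 1.0), 0.0)]

    def xor_fitness(net):
        # Run all four XOR cases and accumulate the squared error.
        error = 0.0
        for inputs, expected in XOR_CASES:
            output = net.activate(inputs)[0]   # single output node
            error += (output - expected) ** 2
        return 4.0 - error                     # higher is better, perfect net scores 4.0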

Music:
Comments

This needs more recognition; I've never seen a NN loop back on itself.

okboing

It's getting pretty good at blasting asteroids.

Kraus-

Hi! Just want to point out, before my tiny rant, that I love your videos!
So, this isn't really how recurrent networks (RNNs) work.
If you want to make a layer recurrent, you need to make a new hidden layer, with its own weights, and with its input dependent on the layer you chose to make recurrent.
Its input: at the 0th step, the input is all zeros; during every other iteration of your recurrent layer, its input is the output of your layer from the previous iteration.
So:
if you are not using an alternative to backpropagation (here NEAT AI does use an alternative), you need to use BPTT (backpropagation through time) to train the weights of the recurrent layer.
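
In code, that kind of recurrent (Elman-style) layer looks roughly like the Python sketch below; the weight names and sizes are made up for illustration, and this is not how the video's NEAT networks are wired:

    import numpy as np

    rng = np.random.default_rng(0)
    W_in  = rng.normal(size=(4, 3))   # input -> hidden weights
    W_rec = rng.normal(size=(4, 4))   # hidden(t-1) -> hidden(t), the recurrent weights
    h = np.zeros(4)                   # state at step 0 is all zeros

    def step(x):
        # New state depends on the current input and the previous state.
        global h
        h = np.tanh(W_in @ x + W_rec @ h)
        return h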

dough

What language are you using? Also, what GFX lib? I'm sorry if you mentioned it already or if I asked you before, I may have forgotten.

typicalhog

I'm not sure if you are using multiple mutatable activation functions or not, but I got another potentially interesting idea: two new types of neurons. An integrator/accumulator neuron that would sum up the states into a "pool", and a derivator/delta neuron (I really don't know all the math terms that could be applied here) that would return the change in its state's value instead of returning the state like a normal neuron. Another possibly emergent property could be if the derivator/delta neuron fed its absolute output value into the integrator/accumulator neuron (meaning the pair would essentially "collect" changes in values and you get something that measures volatility). There could also be a decay factor to prevent the values in the accumulator from going to infinity. I might do a lil sketch in paint cause I'm really bad at explaining this.
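
A rough Python sketch of the idea (the class names, the decay factor, and the volatility pairing are all illustrative, nothing from the video):

    class IntegratorNeuron:
        """Accumulates its input into a pool, with decay so the pool can't blow up."""
        def __init__(self, decay=0.95):
            self.pool = 0.0
            self.decay = decay
        def activate(self, x):
            self.pool = self.decay * self.pool + x
            return self.pool

    class DeltaNeuron:
        """Returns the change in its input since the last step, not the value itself."""
        def __init__(self):
            self.prev = 0.0
        def activate(self, x):
            delta = x - self.prev
            self.prev = x
            return delta

    # Feeding |delta| into an integrator gives a crude running measure of volatility:
    # volatility = integrator.activate(abs(delta_neuron.activate(x)))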

typicalhog

XOR is probably the simplest thing a non-recurrent network can learn to solve. The simplest problem I can think of that would benefit from recurrent connections might be counting to 10, or generating Fibonacci numbers? If you wanted to make a video that focuses on recurrent connections more, you could do that, or even a simple memory game. Let's say a 4x4 grid where we try to get the AI to find all the pairs with fewer tries than just choosing random cards.
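
For the counting-to-10 version, one possible fitness function might look like this sketch; net.reset and net.activate are stand-ins for however the network is actually driven:

    def counting_fitness(net):
        net.reset()                          # clear any recurrent state
        error = 0.0
        for target in range(1, 11):          # count to 10
            output = net.activate([1.0])[0]  # same input every step, so only
            error += abs(output - target)    # recurrent state can track the count
        return -error                        # less error = higher fitness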

typicalhog

Recurrent connections keep a memory of the ordering of the data,
so, to get good performance on XOR data, the only thing you need to do is shuffle the data randomly every iteration.
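
In code, that shuffle is tiny (assuming the usual four XOR cases):

    import random

    def shuffled_xor_cases():
        # Re-order the four cases each generation so a recurrent network
        # can't exploit a fixed presentation order.
        cases = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]
        random.shuffle(cases)
        return cases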

tomoki-vo

1:32 I'm gonna blow y'all's minds. This is how life just flows as fractals and everything repeats as mimicry. Dunno who or what came before us, but now we're doing what it was doing, like a domino effect, and soon we'll build another version of... us. This NEAT algorithm already prefers a solution that a lot of us used as kids in fighting games: those annoying yet effective, continuous bursts of endless low kicks! It's alive😅

captainjj

Isn't it a bit weird to feed the recurrent connection into the input layer? I feel like it should be between different hidden layers or from the output layer to a hidden layer. The input layer shouldn't be messed with.

Dalroc

I'm not quite following. So the previous output of the node gets fed back to the input node and added to its current value? How many times is this done? Is it just one previous value each time or do they build up and average?

Also if recurrent connections connect back to the input layer, then do input nodes also require activation functions? Usually they just take a scaled input and pass it on without any squashing function applied.
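
One common way NEAT-style networks handle this, which may or may not be exactly what the video does: every node stores the output it produced on the previous activation, a recurrent connection reads that stored value, and the fed-back value is just another weighted term in the node's sum, added once per step rather than averaged. In this sketch the input nodes pass their values through unsquashed; the node and connection structures are invented for illustration:

    import math

    def activate(nodes, connections, inputs):
        # nodes: dict with 'input', 'hidden', 'output', 'all' lists of node objects
        # connections[node]: incoming connections, each with .source, .weight, .recurrent
        for node, value in zip(nodes['input'], inputs):
            node.value = value                          # inputs pass straight through, no squashing
        for node in nodes['hidden'] + nodes['output']:  # assumed to be in evaluation order
            total = 0.0
            for conn in connections[node]:
                src = conn.source
                # a recurrent connection reads the source's output from the previous step
                total += conn.weight * (src.prev_value if conn.recurrent else src.value)
            node.value = 1.0 / (1.0 + math.exp(-total)) # sigmoid on hidden/output nodes only
        for node in nodes['all']:
            node.prev_value = node.value                # snapshot for the next activation
        return [n.value for n in nodes['output']]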

domc

Is there a place to see the code for this?

DavidGillespie