Neural Networks 5: feedforward, recurrent and RBM

Comments
Author

I am programming a simple feed-forward network and forgot the bias (b) variable ... thanks for the reminder :D

Mikeeeeeeeeeeeeeeeeeeee
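The bias the commenter mentions can be seen in a minimal single-layer forward pass. This is a generic sketch (not code from the video), with made-up layer sizes, showing where `b` enters the computation:

```python
import numpy as np

def dense_forward(x, W, b):
    """One dense layer: affine transform followed by a sigmoid.
    Dropping `b` forces every pre-activation through the origin."""
    z = W @ x + b              # bias term -- the variable the commenter forgot
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
x = rng.normal(size=3)         # one 3-feature input (arbitrary example)
W = rng.normal(size=(2, 3))    # weights for a 2-unit layer
b = rng.normal(size=2)         # one bias per output unit
y = dense_forward(x, W, b)     # y has shape (2,), entries in (0, 1)
```

Without `b`, an input of all zeros always maps to sigmoid(0) = 0.5 in every unit, regardless of the weights.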
Author

Could you say a bit more about the state of a recurrent network? You said that a usual feed-forward network has no memory of, for example, a previous image. But when we train a feed-forward network via backpropagation, each image (training data point) affects the weights of the network. The weights represent the condition of the network, and of course they are affected by previous examples. So can you specify exactly what you mean by state (the memory of a recurrent network)?

iadduk
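The distinction the question is after can be made concrete: weights are parameters changed only during training, while a recurrent network's state is a hidden vector updated at every time step *within* a single input sequence. A minimal sketch (generic, with assumed sizes, not the video's code):

```python
import numpy as np

def rnn_step(h, x, W_h, W_x, b):
    # The new state depends on the previous state h -- this recurrence
    # is the "memory", and it exists even with the weights frozen.
    return np.tanh(W_h @ h + W_x @ x + b)

rng = np.random.default_rng(1)
W_h = rng.normal(scale=0.5, size=(4, 4))   # fixed parameters (learned earlier)
W_x = rng.normal(scale=0.5, size=(4, 3))
b = np.zeros(4)

h = np.zeros(4)                            # state is reset for each new sequence
for x in rng.normal(size=(5, 3)):          # five time steps of a 3-feature input
    h = rnn_step(h, x, W_h, W_x, b)
# h now summarizes this particular sequence; W_h and W_x never changed.
```

So the weights carry memory of the *training set*, shared across all inputs, while the state `h` carries memory of the *current sequence* and is discarded between sequences. A feed-forward network has the first kind but not the second.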
Author

The RBM is confusing in every example I have come across on the internet. For unsupervised learning and classification, does it have no output? You give it data and it gives no output? Does the visible side stop taking data from the outside world when the dark (hidden) side shoots back to the visible side?

The way I think it should work: set the weights and let it bounce around until only one single neuron, out of a bunch on the dark side, is left high, and that is where I would take my output from. For example, if the top half of the image was black, h1 would be left on. If the bottom half was black, then h2 would be set high. If the left half was black and the right half was white, h3 would be high and all other h's would be set low.

Thanks for any help. I need it.

keghn

keghnfeem
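The "bouncing" the last commenter describes is alternating Gibbs sampling: visible and hidden units take turns sampling each other through the shared weights, and the hidden activations serve as the "output" (a learned code), even though nothing is labeled. A toy sketch with made-up sizes, illustrating the mechanism only, not trained weights:

```python
import numpy as np

rng = np.random.default_rng(2)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def sample(p):
    # Draw binary units: each is 1 with probability p
    return (rng.random(p.shape) < p).astype(float)

n_v, n_h = 6, 3                              # 6 visible, 3 hidden units
W = rng.normal(scale=0.1, size=(n_v, n_h))   # untrained weights, for illustration
b_v, b_h = np.zeros(n_v), np.zeros(n_h)

v = rng.integers(0, 2, size=n_v).astype(float)   # clamp data on the visible side
for _ in range(10):                               # alternating Gibbs steps
    h = sample(sigmoid(v @ W + b_h))              # hidden given visible
    v = sample(sigmoid(W @ h + b_v))              # visible given hidden
# h is the hidden representation; for classification one typically
# trains a separate classifier on top of h rather than reading a
# single winning neuron directly.
```

So the visible layer does stop receiving outside data once sampling starts (it is re-sampled from the hidden layer), and there is usually no single winner-take-all hidden unit: the whole hidden vector is the code.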