Neural networks [6.2] : Autoencoder - loss function

Comments

Amazing explanation of the loss functions used!

amarbudhiraja

Dear Sir Hugo, I absolutely love your teaching method and the content you cover in each section of this series. Are you planning to post any more of your lecture series in English in the near future? In any case, thank you so much for sharing the neural networks series in English; it is invaluable for me.

invinity

Hello Dr Hugo, I am amazed by your teaching style and the content you explained. However, I am confused towards the end of the video, where choosing parameters that depend on the hidden layer output makes the loss function similar to the fundamental equation. Is there any reference for this so that I can read about it in detail? Also, what happens if the choice of parameters does not turn the log-likelihood into the same fundamental loss function?

baqarrizvi

It is not actually necessary for the encoded value to be of the same data type as the input data. For example, real-valued inputs could be encoded as categories and then reconstructed into reals. Similarly, categorical inputs could be encoded into a real-valued vector and then decoded as a real-valued estimate of the original data.
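
Here is a rough sketch of that idea (a hypothetical numpy example, with layer sizes and the softmax code chosen by me rather than taken from the lecture): a real-valued input is encoded into a categorical-style code and then decoded back into a real-valued estimate.

    import numpy as np

    # Illustrative sketch only: real-valued input -> softmax ("categorical") code
    # -> real-valued reconstruction. Sizes and weights are arbitrary.
    rng = np.random.default_rng(0)
    n_in, n_code = 8, 4
    W_enc = rng.normal(0.0, 0.1, size=(n_code, n_in))
    W_dec = rng.normal(0.0, 0.1, size=(n_in, n_code))

    def softmax(a):
        e = np.exp(a - a.max())
        return e / e.sum()

    x = rng.normal(size=n_in)              # real-valued input
    code = softmax(W_enc @ x)              # categorical-style (probability) code
    x_hat = W_dec @ code                   # real-valued estimate of the input
    loss = 0.5 * np.sum((x_hat - x) ** 2)  # squared-error reconstruction loss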

geoffreyanderson

Hi Hugo, thanks for the detailed explanation. One question here: for the cross-entropy at 04:06 you mention a sum over the probability of x. I am lost here; in cross-entropy we have the actual value of x, not its probability, e.g. x = [1, 0, 0] and predicted y = [0.7, 0.2, 0.1], so the cross-entropy would be -sum(x * log(y)). What am I missing here?
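
A quick numerical check of the example in this question (a small numpy sketch of my own, not code from the lecture): for a one-hot target x and predicted probabilities y, the cross-entropy -sum(x * log(y)) only picks out the log-probability assigned to the true class.

    import numpy as np

    # Illustrative check: cross-entropy with a one-hot target.
    x = np.array([1.0, 0.0, 0.0])   # actual value (one-hot)
    y = np.array([0.7, 0.2, 0.1])   # predicted probabilities
    ce = -np.sum(x * np.log(y))
    print(ce)                       # equals -log(0.7), roughly 0.357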

TheOraware

In the loss function, is it the case that there are two more terms, such as x log(1 - x̂) + (1 - x) log(x̂), and you have neglected them due to a lower likelihood of occurrence?
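
For what it is worth, a short check (my own sketch, assuming the Bernoulli reading of the loss used in the lecture): treating x̂ as the parameter of a Bernoulli distribution over a binary x, the negative log-likelihood per component is exactly -x log(x̂) - (1 - x) log(1 - x̂), so no additional cross terms are being neglected.

    import numpy as np

    # Illustrative check: -log of the Bernoulli likelihood x_hat**x * (1-x_hat)**(1-x)
    # expands to exactly the two terms of the binary cross-entropy.
    x, x_hat = 1.0, 0.8
    nll = -np.log(x_hat**x * (1.0 - x_hat)**(1.0 - x))
    bce = -x * np.log(x_hat) - (1.0 - x) * np.log(1.0 - x_hat)
    print(nll, bce)                 # both roughly 0.223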

harshpandey

Great explanation, thank you! I have one question: what is the reason for not using an activation function on the output? Thanks.

jamespaz

It seems that adaptation to the input actually serves as a way to learn the probability distribution of the input using autoencoders. If I am right, what is the advantage of using autoencoders over RBMs (for reasons other than the training method)?

emirceyani

I still don't understand the gradient flow with tied weights; can you please explain briefly?
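
One way to see the gradient flow (a minimal numpy sketch of my own, assuming squared-error loss and sigmoid units, not the lecture's code): with tied weights the same matrix W is used by the encoder and, transposed, by the decoder, so dL/dW is the sum of the contributions backpropagated through both paths. A finite-difference check confirms the two-term gradient.

    import numpy as np

    # Illustrative sketch: tied-weight autoencoder with squared-error loss.
    rng = np.random.default_rng(0)
    x = rng.random(4)
    W = rng.normal(0.0, 0.5, size=(3, 4))
    b, c = np.zeros(3), np.zeros(4)

    def sigm(a):
        return 1.0 / (1.0 + np.exp(-a))

    def loss(W):
        h = sigm(W @ x + b)          # encoder
        x_hat = sigm(W.T @ h + c)    # decoder reuses the same W (tied weights)
        return 0.5 * np.sum((x_hat - x) ** 2)

    # Analytical gradient: decoder-path contribution + encoder-path contribution.
    h = sigm(W @ x + b)
    x_hat = sigm(W.T @ h + c)
    delta_out = (x_hat - x) * x_hat * (1 - x_hat)          # dL/d(output pre-activation)
    delta_h = (W @ delta_out) * h * (1 - h)                # dL/d(hidden pre-activation)
    grad = np.outer(h, delta_out) + np.outer(delta_h, x)   # decoder part + encoder part

    # Finite-difference check of the tied-weight gradient.
    eps, num = 1e-6, np.zeros_like(W)
    for i in range(W.shape[0]):
        for j in range(W.shape[1]):
            Wp, Wm = W.copy(), W.copy()
            Wp[i, j] += eps
            Wm[i, j] -= eps
            num[i, j] = (loss(Wp) - loss(Wm)) / (2 * eps)
    print(np.max(np.abs(grad - num)))   # close to 0: both paths contribute to dL/dW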

sandeepinuganti

Hi, what is sigm for binary inputs? Is it the logistic sigmoid 1/(1 + exp(-x)), or is it the inverse of the sigmoid, -log(1/x - 1)? Thanks
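
For reference, a tiny sketch contrasting the two formulas in this question (my own illustration): sigm usually denotes the logistic sigmoid 1 / (1 + exp(-a)), while -log(1/p - 1) is its inverse, the logit.

    import numpy as np

    def sigm(a):      # logistic sigmoid
        return 1.0 / (1.0 + np.exp(-a))

    def logit(p):     # inverse of the sigmoid
        return -np.log(1.0 / p - 1.0)

    a = 0.3
    print(sigm(a))            # roughly 0.574
    print(logit(sigm(a)))     # recovers 0.3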

mikef

Hi Hugo. Thank you for your video. I have some confusion here. When we initialize the bias for a layer (whether in an RBM, an autoencoder, or any neural network in general), do we initialize it as a scalar or as a vector? That is, do we add the same bias to every node, or different biases to different nodes in the same layer?
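
As a concrete illustration (a numpy sketch with sizes of my own choosing, not from the lecture): the bias of a layer is a vector with one entry per unit in that layer, so different units get different biases, and a common choice is to initialize all of those entries to zero.

    import numpy as np

    # Illustrative shapes for one encoder layer.
    n_in, n_hidden = 784, 100
    W = np.random.normal(0.0, 0.01, size=(n_hidden, n_in))  # weight matrix
    b = np.zeros(n_hidden)   # hidden bias: one entry per hidden unit
    c = np.zeros(n_in)       # reconstruction bias: one entry per input unit

    x = np.random.rand(n_in)
    h = 1.0 / (1.0 + np.exp(-(W @ x + b)))   # each unit adds its own bias entry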

tonglin

Is the autoencoder trained using only feedforward propagation, or does it use backpropagation too?

Thanks

situkangsayur

What are the extrema of the loss function, given an input vector of size n? Is it simply plus or minus n?
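
A quick numerical check related to this question (my own sketch, assuming binary inputs and the binary cross-entropy summed over n components): the loss approaches 0 when x̂ matches a binary x, but it grows without bound as x̂ moves toward the wrong extreme, so it is not capped at plus or minus n.

    import numpy as np

    # Illustrative check of the range of the summed binary cross-entropy.
    def bce(x, x_hat):
        return -np.sum(x * np.log(x_hat) + (1 - x) * np.log(1 - x_hat))

    n = 5
    x = np.ones(n)                        # a binary input of size n
    print(bce(x, np.full(n, 0.999)))      # roughly 0.005, near the lower extreme 0
    print(bce(x, np.full(n, 1e-6)))       # roughly 69.1, far larger than n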

Dansaunders

Regarding 6:05, the minimum of the binary CE is not zero when x = x̂. For example, -0.8 log(0.8) - (1 - 0.8) log(1 - 0.8) ≈ 0.5, yet -0.9 log(0.8) - (1 - 0.9) log(1 - 0.8) ≈ 0.36. Go figure!
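
A small numerical sweep on this point (my own check, not from the lecture): holding the target fixed at x = 0.8 and varying the prediction x̂, the binary cross-entropy is minimized at x̂ = x, but the minimum value is the binary entropy of x (about 0.5) rather than zero; the second number in the comment comes from changing x rather than x̂.

    import numpy as np

    # Illustrative sweep: binary cross-entropy for a fixed non-binary target.
    x = 0.8
    x_hat = np.linspace(0.01, 0.99, 99)
    ce = -x * np.log(x_hat) - (1 - x) * np.log(1 - x_hat)
    print(x_hat[np.argmin(ce)])   # roughly 0.8: minimized at x_hat = x
    print(ce.min())               # roughly 0.5: the binary entropy of 0.8, not 0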
