Deep Learning (CS7015): Lec 4.6 Backpropagation: Computing Gradients w.r.t. Hidden Units

lec04mod06
Comments

This guy is a magician. He is a magician. Mark my words. Thank you, Prof. M. Khapra.

rohit

If we already calculated the derivative of the loss function w.r.t. the output layer, why are we finding the derivative of the loss function w.r.t. the hidden layers again? Shouldn't we instead find the derivative of the output layer w.r.t. the hidden layers, according to the chain rule we saw initially?

dailyDesi_abhrant
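
To make the chain-rule point in the question above concrete: the gradient w.r.t. a hidden layer is not computed "from scratch"; it reuses the already computed output-layer gradient and multiplies it by the local derivative of the output layer w.r.t. the hidden units, so the two views coincide. Below is a minimal NumPy sketch of that step (an illustration under assumed names like W2 and h1, not the course's code):

```python
import numpy as np

rng = np.random.default_rng(0)

# One hidden layer, softmax output, cross-entropy loss.
W1, b1 = rng.standard_normal((4, 3)), np.zeros(4)   # layer 1: 3 -> 4
W2, b2 = rng.standard_normal((2, 4)), np.zeros(2)   # layer 2: 4 -> 2

x = rng.standard_normal(3)
y = np.array([1.0, 0.0])                            # one-hot true label

h1 = np.tanh(W1 @ x + b1)                           # hidden activations
a2 = W2 @ h1 + b2                                   # output pre-activation
y_hat = np.exp(a2) / np.exp(a2).sum()               # softmax output

# Gradient of the loss w.r.t. the output pre-activation
# (the quantity already computed for the output layer):
dL_da2 = y_hat - y

# Chain rule: a2 = W2 @ h1 + b2, so da2/dh1 = W2, and
dL_dh1 = W2.T @ dL_da2                              # gradient w.r.t. hidden units
print(dL_dh1)
```

So nothing is differentiated twice: `dL_dh1` reuses `dL_da2`, and the extra factor `W2.T` is exactly the derivative of the output layer w.r.t. the hidden units that the chain rule calls for.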

W_{3,2,4} => the output of the 2nd layer's 2nd neuron is the input for the 3rd layer's 4th neuron. Is that correct?
In other words,
W_{i+1,j,k} => the output of the i-th layer's j-th neuron is the input for the (i+1)-th layer's k-th neuron.

bhargavpatel
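
On the indexing question above, a small sketch may help. It assumes the convention a_k = W_k h_{k-1} + b_k, so W_k has shape n_k x n_{k-1}; the names W_k and h_prev are illustrative, not from the lecture:

```python
import numpy as np

n_prev, n_curr = 5, 3          # layer k-1 has 5 neurons, layer k has 3
W_k = np.zeros((n_curr, n_prev))
W_k[0, 4] = 2.0                # single connection: neuron 4 of layer k-1 -> neuron 0 of layer k

h_prev = np.arange(1.0, 6.0)   # h_{k-1} = [1, 2, 3, 4, 5]
a_k = W_k @ h_prev             # pre-activation of layer k

print(a_k)                     # [10.  0.  0.]: only a_k[0] receives h_prev[4]
```

Under this convention, the entry W_k[i, j] connects neuron j of layer k-1 (the source) to neuron i of layer k (the destination), i.e. the two neuron indices read destination-then-source. So whether the W_{3,2,4} reading above is correct depends on which index order the lecture's slides use.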