Introduction to Neural Networks - Part 2: Learning (Cyrill Stachniss, 2021)

Introduction to Neural Networks - Part 2: Learning (Parameter Learning, Stochastic Gradient Descent, Backprop)
Cyrill Stachniss, 2021

Errata in the video (corrected in the PDF file of the slides):
* At 55:23, the value of dL/df is not specified and only indicated as "...". This is suboptimal for the example, as this value has to be multiplied with the local derivatives to obtain dL/da and dL/db. Thus, the example might be a bit misleading.
* At 59:37, the derivative of "z^2" is "2z", not "z"; thus, the last dimension of the gradient in the example must be multiplied by 2.
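The chain-rule bookkeeping behind both errata can be sketched in a few lines of plain Python. This is a toy graph with hypothetical values, not the lecture's actual example: f = a*b feeds into L = f^2, so the upstream gradient dL/df = 2f (the "2z, not z" point) must be multiplied with the local derivatives df/da = b and df/db = a (the multiplication the first erratum refers to).

```python
# Toy computation graph (hypothetical values): a, b -> f = a * b -> L = f**2
a, b = 3.0, 2.0

# Forward pass
f = a * b            # f = 6.0
L = f ** 2           # L = 36.0

# Backward pass (chain rule)
dL_df = 2 * f        # derivative of f**2 is 2*f, not f (cf. second erratum)
dL_da = dL_df * b    # dL/da = dL/df * df/da, where df/da = b
dL_db = dL_df * a    # dL/db = dL/df * df/db, where df/db = a

print(dL_da, dL_db)  # prints 24.0 36.0
```

Leaving dL/df unspecified would make both downstream gradients uncomputable, which is why the erratum flags it.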

#UniBonn #StachnissLab #robotics #computervision #neuralnetworks #lecture
Comments

Thank you, Cyrill. In the world of YouTube your videos stand out; it's pure, high-quality, and accessible science.

giovanitheisen

I think there is something about the way you look at the process of education that makes the content, style, and presentation very clear, clean, understandable, and exciting to follow, which sets your videos apart from many other MOOC or YouTube videos online.

I wish more people could find these videos; I am sure they are helpful to many and deserve far more viewers.

Thanks for making them.

raminmdn

This is the best explanation of backprop I have seen so far.

oldcowbb