AMMI 2022 Course 'Geometric Deep Learning' - Lecture 2 (Learning in High Dimensions) - Joan Bruna

Video recording of the course "Geometric Deep Learning" taught in the African Master in Machine Intelligence in July 2022 by Michael Bronstein (Oxford), Joan Bruna (NYU), Taco Cohen (Qualcomm), and Petar Veličković (DeepMind)

Lecture 2: Basic notions in learning • Challenges of learning in high dimension • Learning Lipschitz functions • Universal approximation

Comments

Gorgeous explanation. I never really got the notion of risk from the book by Schoelkopf and Smola, but now it's clear!

andreasbeschorner

Having seen the first edition of the course, which was already brilliant, I find the content improved and easier to grasp this time. Well done, thanks!

SY-merk

Thanks for this beautiful lecture!

The decomposition of the error bound at 31:10 is really nice.

I'm wondering if there is a conceptual adaptation of that decomposition that would account for implicit regularization:
In the current decomposition, good generalization is achieved through the trade-off between \epsilon_{stat} and \epsilon_{appr}.
But in practice, the choice of algorithm can affect the resulting model's ability to generalize. For example, with Stochastic Gradient Descent, a good hyperparameter choice may improve overall generalization even if it increases the optimization error (\epsilon_{opt}).
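For reference, a minimal LaTeX sketch of this kind of excess-risk decomposition (the notation \mathcal{F}_\delta, \epsilon_{stat}, \epsilon_{appr}, \epsilon_{opt} is assumed from the lecture; the exact constants and grouping may differ from the slide at 31:10). Here f^* is the population risk minimizer, f^*_{\mathcal{F}} the best hypothesis in \mathcal{F}_\delta, \hat{f}_n the empirical risk minimizer over \mathcal{F}_\delta, and \hat{f} the hypothesis returned by the training algorithm:

\begin{align*}
R(\hat{f}) - R(f^*)
  &= \underbrace{\bigl(R(\hat{f}) - \hat{R}(\hat{f})\bigr)
     + \bigl(\hat{R}(f^*_{\mathcal{F}}) - R(f^*_{\mathcal{F}})\bigr)}_{\le\, 2\,\epsilon_{\mathrm{stat}}}
   + \underbrace{\bigl(\hat{R}(\hat{f}) - \hat{R}(\hat{f}_n)\bigr)}_{\epsilon_{\mathrm{opt}}} \\
  &\quad
   + \underbrace{\bigl(\hat{R}(\hat{f}_n) - \hat{R}(f^*_{\mathcal{F}})\bigr)}_{\le\, 0}
   + \underbrace{\bigl(R(f^*_{\mathcal{F}}) - R(f^*)\bigr)}_{\epsilon_{\mathrm{appr}}}
  \;\le\; 2\,\epsilon_{\mathrm{stat}} + \epsilon_{\mathrm{opt}} + \epsilon_{\mathrm{appr}},
\end{align*}

where \epsilon_{\mathrm{stat}} = \sup_{f \in \mathcal{F}_\delta} |R(f) - \hat{R}(f)|.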

danielslutsky

I don't really understand the optimization error term when Prof. Bruna decomposes the excess risk. \hat{f} is an arbitrary hypothesis, and we're comparing the empirical risk of this arbitrary hypothesis to the empirical risk of the best possible hypothesis (under the constraint set \mathcal{F}_\delta).

How could we even optimize \hat{f} to reduce the optimization error, since \hat{f} is chosen arbitrarily?
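For illustration, a minimal Python sketch (not from the lecture) of how the optimization error is usually read: in the standard decomposition, \hat{f} is whatever hypothesis the training algorithm returns rather than an arbitrary draw, and \epsilon_{opt} compares its empirical risk to that of the empirical minimizer \hat{f}_n over the same class. The linear least-squares setup below is only an assumed example:

import numpy as np

rng = np.random.default_rng(0)
n, d = 200, 5
X = rng.normal(size=(n, d))
y = X @ rng.normal(size=d) + 0.1 * rng.normal(size=n)

def empirical_risk(w):
    # Squared-loss empirical risk \hat{R}(f_w) on the training sample.
    return np.mean((X @ w - y) ** 2)

# Empirical risk minimizer \hat{f}_n over the linear class (closed-form least squares).
w_erm, *_ = np.linalg.lstsq(X, y, rcond=None)

# \hat{f}: the output of the training algorithm (a few SGD passes), not an arbitrary hypothesis.
w_hat = np.zeros(d)
lr = 0.01
for epoch in range(5):
    for i in rng.permutation(n):
        grad = 2.0 * (X[i] @ w_hat - y[i]) * X[i]
        w_hat -= lr * grad

eps_opt = empirical_risk(w_hat) - empirical_risk(w_erm)
print(f"optimization error eps_opt = {eps_opt:.5f} (>= 0, since w_erm minimizes the empirical risk)")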

arisioz

This lecture felt quite hard to me, yet I'm not sure what I really took away from it. I already knew most of the material, and the theory presented here wasn't very helpful, because it doesn't feel rounded and complete.

alivecoding