AMMI Course 'Geometric Deep Learning' - Lecture 7 (Grids) - Joan Bruna

Video recording of the course "Geometric Deep Learning" taught in the African Master in Machine Intelligence in July-August 2021 by Michael Bronstein (Imperial College/Twitter), Joan Bruna (NYU), Taco Cohen (Qualcomm), and Petar Veličković (DeepMind)

Lecture 7: Grids and Translations • Translation group • Shift operator • Linear invariants and equivariants • Fourier transform • Convolution • Fourier invariants • Deformation stability • Multiscale representations • Wavelets • Scattering • CNNs

Comments

Thanks for the lecture, Joan! Feedback from my side: it was a bit harder to follow along because there is a lot of assumed prerequisite knowledge (especially during the Fourier derivations and with wavelets), which I probably have, but it was at times hard to spot the connections between steps and slides.

Re the remark at 12:45: yes, but that looks more like a fun fact than something fundamental, since each node is naturally connected to both of its neighboring nodes, not just its right neighbor. Why are right neighbors special? (See the sketch below.)

TheAIEpiphany
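
One way to read the remark at 12:45 is that picking the right neighbor is only a choice of generator: the cyclic shift S and its transpose (the left shift) generate the same translation group. A minimal numpy sketch with a toy signal of my own choosing (not from the lecture):

```python
import numpy as np

n = 5
# Cyclic shift operator on a grid of n nodes: (S x)[i] = x[(i - 1) mod n],
# i.e. the signal is moved one step to the right.
S = np.roll(np.eye(n), 1, axis=0)

x = np.arange(n, dtype=float)         # toy signal [0, 1, 2, 3, 4]
print(S @ x)                          # [4. 0. 1. 2. 3.] -- shifted right by one
print(S.T @ x)                        # [1. 2. 3. 4. 0.] -- S^T shifts left

# S is a permutation matrix, so S^T = S^{-1}: left and right shifts generate
# the same cyclic group of translations.
print(np.allclose(S.T, np.linalg.inv(S)))   # True
```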

At 15:41 it would be good to make a connection with the previous lecture, where you said that the only linear invariant is the average, and that the scalar product with v is in fact the sum of all features (a numerical check is sketched below).

bajdoub
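
To spell out the connection asked for above: with v = 1 (the all-ones vector), the scalar product <v, x> is the sum of the features and is unchanged by a cyclic shift, which is the "only linear invariant is the average" statement up to the 1/n normalization. A minimal numpy check, with an arbitrary toy signal of my own (not the lecture's example):

```python
import numpy as np

n = 6
S = np.roll(np.eye(n), 1, axis=0)     # cyclic shift operator, as in the sketch above
rng = np.random.default_rng(0)
x = rng.normal(size=n)                # arbitrary toy signal

v = np.ones(n)                        # candidate linear invariant
print(np.isclose(v @ x, v @ (S @ x)))       # True: the sum is unchanged by a shift
print(np.isclose(v @ x, x.sum()))           # True: <1, x> is the sum of the features
print(np.isclose((v / n) @ x, x.mean()))    # normalizing by n gives the average
```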

Also, at 15:45, v is not necessarily 1 (the vector of all ones), since any w = lambda*v with an arbitrary lambda in R also works. A more rigorous exercise is to prove that v is collinear with 1 rather than equal to 1 (illustrated below).

bajdoub
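
The collinearity claim can also be checked numerically: linear invariance <v, S x> = <v, x> for all x is equivalent to S^T v = v, and the solution space of that equation is one-dimensional and spanned by the all-ones vector. A sketch of that check (my own construction, not a proof):

```python
import numpy as np

n = 7
S = np.roll(np.eye(n), 1, axis=0)     # cyclic shift operator on n nodes

# Invariance <v, S x> = <v, x> for all x means S^T v = v,
# i.e. v lies in the null space of (S^T - I).
U, s, Vt = np.linalg.svd(S.T - np.eye(n))
null_space = Vt[s < 1e-10]            # right singular vectors with zero singular value

print(null_space.shape[0])            # 1 -> the space of linear invariants is one-dimensional
v = null_space[0]
print(np.allclose(v, v[0] * np.ones(n)))   # True: v is collinear with the all-ones vector
```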

At 30:50 the development is not clear; it's hard to follow the motivation for introducing the modulus of the Fourier transform (a small illustration is sketched below). The same goes for all the following developments introducing wavelets: they are very hard to follow, and the concept curve is so steep that it is no longer Lipschitz compared to the first part of the lecture ;-) (or has a much larger Lipschitz constant).

bajdoub
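
On the motivation for the modulus at 30:50: a circular shift only multiplies each Fourier coefficient by a unit-modulus phase, so taking the magnitudes discards exactly that phase and yields a (non-linear) translation invariant. A small numerical illustration with a toy signal of my own (not the lecture's derivation):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8
tau = 3
x = rng.normal(size=n)
x_shift = np.roll(x, tau)             # circular shift by tau samples

X = np.fft.fft(x)
X_shift = np.fft.fft(x_shift)

# Shift theorem: a circular shift multiplies each Fourier coefficient
# by the unit-modulus phase e^{-2 pi i k tau / n}.
k = np.arange(n)
print(np.allclose(X_shift, np.exp(-2j * np.pi * k * tau / n) * X))   # True

# The phase has modulus 1, so the Fourier magnitudes are translation invariant.
print(np.allclose(np.abs(X), np.abs(X_shift)))                       # True
```

The cost is the discarded phase information, which, as I understand it, is the trade-off the later wavelet and scattering parts of the lecture revolve around.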

At 14:21, why are we only considering linear invariants into R and not into R^n for some arbitrary n? The same question applies to equivariance: why do we only consider functions from R^d to R^d, and not from R^d to R^n for some arbitrary n?

bajdoub