AMMI 2022 Course 'Geometric Deep Learning' - Lecture 4 (Geometric Priors II) - Joan Bruna

Video recording of the course "Geometric Deep Learning" taught in the African Master in Machine Intelligence in July 2022 by Michael Bronstein (Oxford), Joan Bruna (NYU), Taco Cohen (Qualcomm), and Petar Veličković (DeepMind)

Lecture 4: Invariant function classes • Learning under invariance • Compositionality • Multiresolution analysis • Scale separation • Combining invariance and scale separation

Comments

Dear Prof. Bruna, I think the deformations of the medieval painting at 4:55 are not diffeomorphisms. The colors change substantially, which means that it's not only the underlying space being transformed but the mapping to R^3 as well.

martonkanasz-nagy
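An editorial sketch of the commenter's point: a diffeomorphism only re-indexes the domain, so the deformed image x∘τ can contain only pixel values that already existed in x; if new colors appear, the codomain map must have changed too. Below, a toy discrete warp (a coordinate permutation) stands in for a diffeomorphism; all names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.integers(0, 256, size=(8, 8, 3))  # an RGB "image": a signal on an 8x8 grid

# A deformation acts on the DOMAIN: (tau.x)(u) = x(tau(u)).
# Here tau is a toy remapping of pixel coordinates standing in for a diffeomorphism.
def deform(x, tau_rows, tau_cols):
    return x[tau_rows][:, tau_cols]

tau_rows = np.roll(np.arange(8), 2)   # shift the row coordinates
tau_cols = np.arange(8)[::-1]         # reflect the column coordinates
y = deform(x, tau_rows, tau_cols)

# Every pixel value of the warped image already existed in the original:
orig_colors = {tuple(c) for c in x.reshape(-1, 3)}
warped_colors = {tuple(c) for c in y.reshape(-1, 3)}
print(warped_colors <= orig_colors)  # True: a domain warp cannot introduce new colors
```

So a substantial change in the color content is indeed evidence that the transformation is not purely a deformation of the underlying space.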

Why is there only a single linear invariant operator? How about translation?

alivecoding
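An editorial note on the question above: translation is equivariant rather than invariant, so it is not a counterexample; and among linear functionals on signals over a cyclic group, invariance under all shifts forces the weight vector to be constant, making the average the unique linear shift-invariant functional up to scale. A toy check (names illustrative):

```python
import numpy as np

# A linear functional on signals over Z_n is l(x) = w @ x.
# Invariance l(S x) = l(x) under the shift S for all x means
# np.roll(w, -1) == w, i.e. all entries of w are equal, so l is
# (a multiple of) the average.
n = 5
rng = np.random.default_rng(0)
x = rng.normal(size=n)

def shift(x, k):
    return np.roll(x, k)

w = np.full(n, 0.37)                   # constant weights: shift-invariant
print(np.allclose(w @ shift(x, 1), w @ x))          # True

w_bad = rng.normal(size=n)             # generic weights: not invariant
print(np.allclose(w_bad @ shift(x, 1), w_bad @ x))  # False (generically)
```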

Will the book contain the latest Geometric Deep Learning works such as equivariant diffusion models or graph transformers?

maxusarron

I'm having trouble with the concept of domain->signal mapping. Joan says that the high-dimensional domain of an image maps to a signal with a 2D representation (X, Y). Shouldn't it be 5-dimensional (X, Y, R, G, B)? I think I'm having trouble understanding the difference between the "signal" and the "underlying geometry of the domain".

English
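An editorial sketch that may help separate the two notions: the domain Ω is the 2D pixel grid, and the image is a signal x: Ω → R^3. (X, Y) are the coordinates at which the signal is evaluated, while (R, G, B) is the value the signal takes there, so the two do not live in one 5D space. A minimal illustration (array shapes are illustrative):

```python
import numpy as np

# The DOMAIN Omega is the 2-D pixel grid; the SIGNAL is a function x: Omega -> R^3.
H, W, C = 4, 6, 3
image = np.zeros((H, W, C))   # stores x(u) for every point u of the 4x6 grid

u = (2, 5)                    # a point of the domain Omega: just 2 coordinates
value = image[u]              # the signal's value at u: a vector in R^3

print(value.shape)            # (3,): the codomain R^3, not extra domain dimensions
```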

Hi Dr. Bruna, why is the smoothing operator able to make the entire hypothesis class G-invariant on the slide at the 10:19 mark? Isn't the f* that we are unable to access the only thing that is G-invariant? Thank you!

daqianbao
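An editorial sketch of the mechanism being asked about: group averaging (the smoothing operator) defines (S f)(x) = (1/|G|) Σ_{g∈G} f(g.x), and S f is G-invariant for any f whatsoever, so no access to f* is needed. A toy demonstration over the cyclic shift group (names illustrative):

```python
import numpy as np

n = 6
G = range(n)                          # the cyclic shift group Z_n acting on signals

def f(x):
    # an arbitrary, deliberately NON-invariant hypothesis
    return float(x[0] + 2.0 * x[1])

def smooth(f, x):
    # group averaging: (S f)(x) = (1/|G|) sum_g f(g.x)
    return np.mean([f(np.roll(x, g)) for g in G])

rng = np.random.default_rng(0)
x = rng.normal(size=n)

# S f takes a single value on the whole orbit of x, i.e. it is shift-invariant:
vals = [smooth(f, np.roll(x, g)) for g in G]
print(np.allclose(vals, vals[0]))    # True, even though f itself is not invariant
```

The key point is that applying g to the input only permutes the terms of the sum over G, so the average is unchanged.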

Is equivariance the same as data augmentation?

heshamali
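An editorial note on the last question: no. Equivariance is an exact structural property of the model, f(g.x) = g.f(x) for every input, holding by construction; augmentation only trains on transformed copies of the data and at best encourages approximate invariance on the training distribution. Below, a circular cross-correlation (a standard shift-equivariant linear map; the helper name is illustrative) satisfies the identity exactly, with no training involved:

```python
import numpy as np

def conv1d_circular(x, w):
    # circular cross-correlation: y[i] = sum_k w[k] * x[(i + k) mod n]
    n = len(x)
    return np.array([np.sum(w * np.roll(x, -i)[: len(w)]) for i in range(n)])

rng = np.random.default_rng(0)
x = rng.normal(size=8)
w = rng.normal(size=3)

lhs = conv1d_circular(np.roll(x, 1), w)   # f(g.x): shift the input, then apply f
rhs = np.roll(conv1d_circular(x, w), 1)   # g.f(x): apply f, then shift the output
print(np.allclose(lhs, rhs))              # True: equivariance holds exactly
```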