AMMI Course 'Geometric Deep Learning' - Lecture 1 (Introduction) - Michael Bronstein

Video recording of the course "Geometric Deep Learning", taught in the African Master's in Machine Intelligence (AMMI) in July-August 2021 by Michael Bronstein (Imperial College/Twitter), Joan Bruna (NYU), Taco Cohen (Qualcomm), and Petar Veličković (DeepMind).

Lecture 1: Symmetry through the centuries • The curse of dimensionality • Geometric priors • Invariance and equivariance • Geometric deep learning blueprint • The "5G" of Geometric deep learning • Graphs • Grids • Groups • Geodesics • Gauges • Course outline

Comments

This is truly amazing. I finished my bachelor's in mathematics with a thesis in differential geometry, and I just started a master's degree in Artificial Intelligence Research. I saw some articles on geometric deep learning, but nothing as complete as this. I think this beautiful field fits my interests perfectly, and I think I'll orient my research career in this direction. Thank you very much for this.


What a good time to be alive! I’m going to enjoy this playlist.

fredxu

Oh wow! This ties together so many areas I've been interested in over the years - with concrete, intuitive applications.

petergoodall

Thanks so much for doing this and putting it online for free. Generative models + Gauges fuel my dreams.

vinciardovangoughci

Hi Dear Professor Michael Bronstein, congratulations on the great job you and your team are doing in the field of AI. I'm going into my junior year at university and have kind of fallen in love with geometric deep learning. Hopefully these lessons and the paper will help me understand more about it. Thanks for sharing, all the best.

edsoncasimiro

Bravo Michael! I really love that you put things into a historical context - that helps us create a map (a graph :) ) of how concepts connect and evolve, and by introducing this structure into our mental models it becomes easier to explore this vast space of knowledge.

TheAIEpiphany

This is just pure coincidence: I'm currently interested in this topic and this amazing course popped up. Thank you very much, Prof. Michael, for opening these resources to the public. I might try to get in touch with you or your colleagues to discuss some ideas. Regards! M Saval

marfix

I just started reading your book "Numerical geometry ..." today out of curiosity, and this shows up on YouTube. I'm looking forward to learning something new 🙂

jordanfernandes

I had seen your previous ICLR presentation on the same topic and was still not clear about the invariance and equivariance ideas! Now I've finally got hold of the concept of inductive biases (geometric priors) that must be built into model architectures:
1. images - shift invariance and equivariance
2. graphs - permutation invariance and equivariance
3. sequences/language - ??
For any other task we may encounter, we need to identify which transformations the resulting function should be invariant or equivariant to! Thank you very much Sir for generously putting it all out there for the public good.

samm
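
A minimal numpy sketch of the invariance/equivariance distinction described in the comment above (the arrays are illustrative toy values, not anything from the lecture): a readout that sums over nodes is permutation-invariant, while a shared per-node linear map is permutation-equivariant, meaning its output is permuted exactly as its input is.

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(5, 3))          # node features: 5 nodes, 3 features each
    P = np.eye(5)[rng.permutation(5)]    # a random permutation matrix

    # Invariance: a sum readout ignores node order entirely.
    assert np.allclose(X.sum(axis=0), (P @ X).sum(axis=0))

    # Equivariance: a shared per-node linear map commutes with the permutation.
    W = rng.normal(size=(3, 3))
    assert np.allclose(P @ (X @ W), (P @ X) @ W)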

Excellent lecture. Thanks, I appreciate it.

channagirijagadish

This will keep me busy for the next few weeks!!

gowtham

23:41 I don't understand why we can simply permute the nodes of the caffeine molecule willy-nilly like that. The binding energy depends on what the neighboring atoms are, the number of bonds, and also the type of bonds. How can all of this information be preserved if we permute it at will like this? For example, the permuted vectors here show all the yellows next to each other, when in the actual molecule there are no neighboring yellows at all!

evenaicantfigurethisout
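
A small numpy sketch related to the question above (the 4-node adjacency matrix is made up, not the caffeine molecule): when the node ordering is permuted, the rows and columns of the adjacency matrix are permuted consistently with the feature vectors, so every atom keeps exactly the same neighbours and bonds; only the indices used to label them change.

    import numpy as np

    rng = np.random.default_rng(1)
    # Toy 4-node graph: adjacency A and per-atom features X (made-up values).
    A = np.array([[0, 1, 0, 0],
                  [1, 0, 1, 1],
                  [0, 1, 0, 0],
                  [0, 1, 0, 0]])
    X = rng.normal(size=(4, 2))

    P = np.eye(4)[rng.permutation(4)]   # relabel the atoms
    A_perm = P @ A @ P.T                # adjacency permuted together with the features
    X_perm = P @ X

    # Any per-node neighbourhood aggregate (here: the degree) is merely reindexed,
    # not changed, because the neighbourhood structure is preserved.
    assert np.allclose(P @ A.sum(axis=1), A_perm.sum(axis=1))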

The portrait on the left @5:35 is of Pierre de Fermat, but it says Desargues 😅

Alejandro-hhub

Today I got the book that Dr. Bronstein suggested, "The Road to Reality" by Roger Penrose... wow, I wish I had come across this book way earlier. If I had had it early in my undergraduate years, I would have had much more fun and motivation to study physics and mathematics. This is just amazing.

fredxu

Oh my God thank you very much for your effort

Dr.Nagah.salem

This is fantastic!! It's great to have access to such amazing content online. What are the prerequisites for understanding the material? I know basic signal processing, linear algebra, and vector calculus, and I work (mostly) on deep learning. I'm learning differential geometry (of curves and surfaces in R^3) and abstract algebra on my own. Is my background sufficient? I feel a little overwhelmed.

madhavpr

Thank you Michael,
Is there any chance we could access a certification or exam to validate the knowledge, and maybe put that on our resume?

I would really appreciate that!

abrilgonzalez

I didn't understand most of the things mentioned here. Hopefully the later lectures will provide more insight.

randalllionelkharkrang

Thank you Dr. Bronstein for the extraordinary introductory lecture. Really excited to go through the rest of the lectures in this series! I have 2 questions based on the introduction:
1) When discussing the MNIST example, you mentioned that images are high-dimensional. I could not understand that point, since images such as those in the MNIST dataset are generally treated as 2-dimensional in other DL/CNN courses. Can you elaborate on how the higher dimensions emerge, or how to visualize them for cases such as the MNIST dataset?
2) In the case of molecules, even though the order of nodes can vary, the neighborhood of each node remains the same under non-reactive conditions (when bond formation/breakage is not expected). In such cases, does permutation invariance refer only to the order in which nodes are traversed in the graph (like variations in atom numbering between IUPAC names of molecules)? Does permutation invariance take into account changes in node neighborhood?
I apologize for the naive questions professor. Thank you once again for the initiative to digitize these lectures for the benefit of students and researchers.

sowmyakrishnan
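
On the first question above, one way to read the lecture's point (a tiny sketch assuming only the standard 28×28 MNIST image shape): the pixels live on a 2-dimensional grid, but as an input to a classifier each image is a single point in a 784-dimensional space, and that ambient dimension is what the curse of dimensionality refers to.

    import numpy as np

    img = np.random.rand(28, 28)   # stand-in for one MNIST digit: a 2D grid of pixels
    x = img.reshape(-1)            # as a classifier input it is one vector in R^784
    print(x.shape)                 # (784,)  <- the "high dimension" being discussed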

Hi Professor Bronstein, what is the practical way of handling graph networks of different sizes? With a picture, it's easy to maintain a consistent resolution and pixel count, but with graphs and subgraphs you could have any number of nodes. Is it typical to just pick a maximum N one would expect in practice and leave the unfilled nodes as 0 in the feature vector and adjacency matrix? If the sizes of these matrices are variable, then how does that affect the weights of the net itself?

justinpennington
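
One common answer to the question above, sketched here as an assumption rather than as the course's prescription: message-passing layers share a single weight matrix across all nodes, so the learned parameters depend only on the feature width, and the same layer can be applied to graphs of any size without padding to a maximum N.

    import numpy as np

    def gnn_layer(A, X, W):
        """One sum-aggregation message-passing layer. The weights W depend only on
        the feature width, so A (n x n) and X (n x d) may have any number of nodes n."""
        return np.maximum(A @ X @ W, 0.0)   # aggregate neighbours, transform, ReLU

    rng = np.random.default_rng(0)
    W = rng.normal(size=(3, 4))             # shared weights: 3 input features -> 4 output
    for n in (5, 12, 50):                   # the same layer, graphs of three different sizes
        A = (rng.random((n, n)) < 0.2).astype(float)
        X = rng.normal(size=(n, 3))
        print(gnn_layer(A, X, W).shape)     # (n, 4): output size tracks the graph size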