AMMI Course 'Geometric Deep Learning' - Lecture 9 (Manifolds & Meshes) - Michael Bronstein

Video recording of the course "Geometric Deep Learning" taught in the African Master in Machine Intelligence in July-August 2021 by Michael Bronstein (Imperial College/Twitter), Joan Bruna (NYU), Taco Cohen (Qualcomm), and Petar Veličković (DeepMind)

Lecture 9: Euclidean vs Non-Euclidean convolution • Manifolds • Tangent vectors • Riemannian metric • Geodesics • Parallel transport • Exponential map • Convolution on manifolds • Domain deformation • Pushforwards and pullbacks • Isometries • Deformation invariance • Scalar and vector fields • Gradient, Divergence, and Laplacian operators • Heat and Wave equations • Manifold Fourier transform • Spectral convolution • Meshes • Discrete Laplacians • ChebNet • Graph Convolutional Network • sGCN • SIGN
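As a concrete illustration of the "Discrete Laplacians" and "ChebNet" items above, here is a minimal NumPy sketch of Chebyshev spectral filtering on a normalized graph Laplacian. This is a hedged illustration with dense matrices and illustrative names (normalized_laplacian, chebyshev_filter), using the common approximation lambda_max ≈ 2; it is not the lecture's own code.

import numpy as np

def normalized_laplacian(A):
    # Symmetric normalized Laplacian L = I - D^{-1/2} A D^{-1/2}.
    d = A.sum(axis=1)
    d_inv_sqrt = np.where(d > 0, 1.0 / np.sqrt(d), 0.0)
    D_inv_sqrt = np.diag(d_inv_sqrt)
    return np.eye(A.shape[0]) - D_inv_sqrt @ A @ D_inv_sqrt

def chebyshev_filter(L, x, theta):
    # Apply sum_k theta[k] * T_k(L_tilde) @ x, where T_k is the k-th Chebyshev
    # polynomial and L_tilde = L - I rescales the spectrum of the normalized
    # Laplacian (contained in [0, 2]) to roughly [-1, 1]. Assumes len(theta) >= 2.
    L_tilde = L - np.eye(L.shape[0])
    T_prev, T_curr = x, L_tilde @ x          # T_0(L_tilde) x = x, T_1(L_tilde) x = L_tilde @ x
    out = theta[0] * T_prev + theta[1] * T_curr
    for k in range(2, len(theta)):
        T_next = 2 * L_tilde @ T_curr - T_prev   # Chebyshev recurrence
        out += theta[k] * T_next
        T_prev, T_curr = T_curr, T_next
    return out

A degree-K polynomial in the Laplacian only mixes values within K hops of each node, which is why polynomial spectral filters behave like local, convolution-style operations.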

Comments

This lecture is so informative! I really enjoy the teaching style of mixing theory with the history behind it.
It serves as both an interlude for motivation and a short break between the theoretical parts.

fredxu

This is one of the best lectures I've ever seen. This series of lectures is wonderful. You can't even imagine the opportunity you gave people to sharpen their skills in GDL through this.

NileNetworks

Very information-packed lecture! The first part was fairly easy to follow. Some notes/questions:
25:00 It'd be cool to see a visual example where exp_u is not a global diffeomorphism.
36:35 You lost me at the pullback metric - why isn't the metric on the tangent plane of Omega~ the same as the one on Omega?
51:03 So how is the gradient defined? Do we take infinitesimal displacements in the tangent plane and see how the function changes? Is that the idea?
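For the 51:03 question: one common intrinsic definition (a sketch in standard Riemannian notation, which may differ slightly from the slides) is that the gradient is the tangent vector that represents the differential of f through the Riemannian metric:

\langle \nabla f(x), v \rangle_{T_x \Omega} \;=\; \frac{d}{dt}\Big|_{t=0} f(\gamma(t)), \qquad \gamma(0) = x, \ \dot{\gamma}(0) = v,

for every tangent vector v at x. So yes: move infinitesimally along a curve through x with velocity v, measure how f changes, and the gradient is the unique tangent vector whose inner product with v reproduces that rate of change for all v.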

TheAIEpiphany

I would like to understand all this in this lifetime.

ekbastu

Very excited about this fantastic GDL course! I would be remiss if I didn't say how much I appreciate it.

I was wondering if I understand GDL correctly:
The inputs of the neural networks are geometric objects such as grids, graphs, or manifolds in high dimensions; the hidden layers are simpler geometric objects such as triangular meshes; the cost function measures the geometric similarity between output and target; and then exterior calculus is used to compute tangent gradients and backpropagate to perturb the weights and biases? What are the weights, biases, and activation functions in GDL - are the weights something like a graph Fourier transform? (See the layer sketch after this comment.)

Could you please kindly recommend some books on the background knowledge, such as gauge theory, groups, or geometric algebra, and some reference books for this course?
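On the question of what the learnable weights are: in the graph/spectral setting covered near the end of the lecture (ChebNet, GCN), the weights are typically small channel-mixing matrices applied alongside a fixed graph operator; the graph Fourier transform itself is determined by the domain (the Laplacian eigenvectors) and is not learned. Below is a minimal GCN-style layer sketch, assuming NumPy and dense matrices; the function name gcn_layer and the variable names are illustrative, and this is the standard propagation rule rather than necessarily the exact form used in the lecture.

import numpy as np

def gcn_layer(A, X, W):
    # One GCN-style propagation step: ReLU(A_hat @ X @ W), where
    # A : (n, n) adjacency matrix, X : (n, f_in) node features,
    # W : (f_in, f_out) learnable weight matrix.
    A_tilde = A + np.eye(A.shape[0])            # add self-loops
    d = A_tilde.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    A_hat = D_inv_sqrt @ A_tilde @ D_inv_sqrt   # symmetric normalization
    return np.maximum(A_hat @ X @ W, 0.0)       # channel mixing + ReLU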

ucoldplay

Another question I have: around 1:04:00, are the position-dependent spatial kernels somewhat similar to what we would get from a wavelet transform in the spectral domain (e.g., a wavelet transform on the manifold)?

fredxu

Is the following the reason the Laplace-Beltrami operator is intrinsic?
It is defined via the inner product of intrinsic gradients given by the Riemannian metric (in the lecture's notation there is no subscript under the inner product bracket),
and intrinsic gradients are intrinsic because they are defined locally by the Riemannian metric.
So the Laplace-Beltrami operator is intrinsic.
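A compact way to write that chain of reasoning (a sketch; sign and notation conventions follow the usual intrinsic definitions and may not match the slides exactly): the gradient is defined through the metric, the divergence is its negative adjoint with respect to the induced inner product of fields, and the Laplace-Beltrami operator is their composition:

\langle \nabla f(x), v \rangle_{T_x \Omega} = df(x)[v], \qquad \int_\Omega \langle \nabla f, F \rangle \, dx = \int_\Omega f \, (-\mathrm{div}\, F) \, dx, \qquad \Delta f = -\mathrm{div}(\nabla f).

Since both ingredients depend only on the Riemannian metric (and the volume it induces), so does their composition, which is exactly what "intrinsic" means here.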

fredxu

A lot of real estate on the slides is taken up by the list of Zoom participants, which doesn't add value to the content, and many elements of the slides are hidden behind it. As an improvement suggestion, consider hiding it next time ;-) (yeah, it's a lot of work to make quality lectures available online ^^)

bajdoub

At 22:46 what do you mean by "amounts to rotation"? In which space?

bajdoub