Theoretical Foundations of Graph Neural Networks

Deriving graph neural networks (GNNs) from first principles, motivating their use, and explaining how they have emerged along several related research lines.
Computer Laboratory Wednesday Seminar, 17 February 2021
Comments

This is one of the cleanest, most sophisticated and organized scientific speeches I have ever heard...

kristofhorvath

Petar's talks are great as always! (I remember attending his talk while at Google lol.)
Timestamps for those looking to rewatch specific sections :)

0:00 - Introduction by Pietro Lio
1:10 - Overview
1:56 - 1. Fantastic GNNs in the Wild
6:52 - 2. Talk Roadmap
9:00 - 3. Towards GNNs from first principles
10:34 - 4. Permutation invariance and equivariance
15:42 - 5. Learning on Graphs
20:22 - 6. Message passing on graphs
24:34 - 7. Perspectives on GNNs
25:42 - 7.1 Node Embedding Techniques
29:39 - 7.2 Natural Language Processing
31:23 - 7.3 Spectral GNNs
41:17 - 7.4 Probabilistic Graphical Models
45:09 - 7.5 Graph Isomorphism Testing
48:53 - 7.6 Geometric Deep Learning
50:23 - 7.7 Historical Concepts
51:15 - 7.8 Computational Chemistry
52:22 - Acknowledgements and Q&A

leodu

Beautiful presentation. Dr Velickovic is one of the best lecturers I've heard in my life. Everything he says is so clear and concise. Add his charisma on top of all that and you can understand why he attracts more and more people to study GNNs. We are so proud to have him

vladansaracpv

Excellent talk Petar, so useful to have these different perspectives brought together in one consistent framing.

KyleCranmer

Very friendly to a beginner, and there are plenty of resources for further learning. Thanks a lot, Petar!

shawnwang

Great talk! The first 20 minutes are simply brilliant! Kind of first principles explanation that I dream of when starting any new type of topic :)

adityamishra

Thank you very much! I just completed my undergrad, and I am in the process of discovering new ideas and topics to work on and learn more about. These kinds of videos really help me (esp. as a young graduate who doesn't know much about many topics yet but wants to discover more).

ceevaaaaa

Your presentation skills have only gotten better since the Cambridge days. And they were already stellar back then.

pw

It was very informative, and the slides are self-explanatory for people with a basic understanding of the math equations :) Thank you!

syleshgupta

Great presentation. Thank you for sharing, Dr Velickovic.

nguyenthanhdat

Hi Petar, it's a nice reframing of GNNs, thanks!
Noting that GAT can handle non-homophilic graphs suggests an analogy to me: if propagation is (error) smoothing, then attention makes it edge-aware smoothing (as in image processing).
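The edge-aware smoothing analogy above can be sketched in a few lines of numpy. This is an illustrative toy, not GAT's actual scoring function: the "attention" here is just a feature-similarity proxy (like a bilateral filter in image processing), and all variable names are made up for the example.

```python
import numpy as np

# Toy graph: 3 nodes with self-loops; node 2 is the "odd one out"
# (a non-homophilic neighbour of node 0).
X = np.array([[1.0, 0.0],
              [0.9, 0.1],
              [0.0, 1.0]])
A = np.array([[1, 1, 1],
              [1, 1, 0],
              [1, 0, 1]], dtype=float)

# Plain propagation: uniform smoothing over neighbours.
deg = A.sum(1, keepdims=True)
smooth = (A @ X) / deg

# Edge-aware smoothing: weight each edge by feature similarity,
# a stand-in for GAT's learned attention logits.
def attn_weights(X, A):
    sim = X @ X.T                                  # proxy attention logits
    logits = np.where(A > 0, sim, -np.inf)         # mask non-edges
    e = np.exp(logits - logits.max(1, keepdims=True))
    return e / e.sum(1, keepdims=True)             # row-wise softmax

alpha = attn_weights(X, A)
edge_aware = alpha @ X

# Node 0 stays closer to its similar neighbour (node 1) under attention,
# while uniform smoothing drags it toward the dissimilar node 2.
print(np.round(smooth[0], 3), np.round(edge_aware[0], 3))
```

The point of the sketch: both updates are weighted averages over neighbours, but the attention weights depend on the edge's endpoints, so dissimilar (non-homophilic) neighbours get down-weighted instead of being averaged in uniformly.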

bayrameda

Petar! This is solid work. Clear thinking and speaking.

coderi

Amazing talk! I like the way you connect concepts with their applications and historical context. It motivates me to try to make this talk make sense to a 7-year-old or a 107-year-old (:

lovexfuture

We need more lectures like this! Nice lecture!

abyoussef

Hi Petar, thanks a lot for the talk recording. Could you also release the slides from your talk?

amiltonwong

Rewatching some of this talk - it is that good!

emmarocheteau

Very nice lecture. Good GNN resources, tools, and exposure.

tasveerahmad

Great talk! It definitely improved my understanding of GNNs. Thank you!

blackguardian

Great presentation, very entertaining and informative. Thanks, Petar!

kaanyolsever

Excellent presentation! A question came up, though: which flavour of GNN layer could we say GraphSAGE uses for its embedding algorithm? Could the learned weight matrices W be considered the fixed weights of a convolutional GNN layer?
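One way to read the question above, sketched in numpy under my own assumptions (variable names like `W_self`/`W_neigh` are illustrative, not GraphSAGE's API): in the mean-aggregator variant, the per-edge coefficients are fixed and uniform (1/|N(v)|), which is what puts it in the "convolutional" flavour; the learned matrices W act on features, not on edges.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy graph: 4 nodes, 3 input features, adjacency without self-loops.
X = rng.normal(size=(4, 3))
A = np.array([[0, 1, 1, 0],
              [1, 0, 0, 1],
              [1, 0, 0, 1],
              [0, 1, 1, 0]], dtype=float)

# Learned parameters: they transform features, but are the same for
# every edge -- the edge weights themselves stay fixed and uniform.
W_self = rng.normal(size=(3, 2))
W_neigh = rng.normal(size=(3, 2))

# Mean aggregation over neighbours: fixed coefficients 1/|N(v)|.
neigh_mean = (A @ X) / A.sum(1, keepdims=True)

# GraphSAGE-style (mean variant) layer update with a ReLU nonlinearity.
H = np.maximum(X @ W_self + neigh_mean @ W_neigh, 0.0)
print(H.shape)
```

Under this reading, the answer to the comment would be "yes": because the aggregation coefficients do not depend on the features at either endpoint (unlike attention), the layer behaves like a convolutional GNN layer, with W playing the role of shared filter weights.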

JorGe-euwi