Geometric Deep Learning (Part 2)


See Part 1 at:

Slides can be found on:

0:00 Recap on Graph Neural Networks
3:12 Mathematical Definition of a GNN
6:13 Mathematical Definition of 1-hop neighbourhood and features
8:22 Permutation Matrix
9:52 Permutation Invariance vs Equivariance
11:43 Generic GNN Equation (see the code sketch after the timestamps)
19:30 Elaboration of Permutation Equivariance
23:03 GNN Overall Architecture
27:07 GNN Architecture Explained
28:15 Three flavours of GNNs
36:10 Further elaboration of GNN feature updating process
42:00 GNNs are not very computationally efficient on current hardware
44:13 Transformers are a form of Graph Attention Network
48:38 Sequence Info as Positional Embedding in Transformers
51:40 Reinforcement Learning (RL) is very data hungry
52:41 Exploiting equivariance/invariance in data
55:00 Homomorphisms in Markov Decision Processes
55:25 Exploiting equivariance/invariance in data allows learning from fewer interactions
56:26 Summary and Insights!
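
As a rough, self-contained illustration of the generic GNN equation (11:43) and permutation equivariance (9:52, 19:30), here is a minimal NumPy sketch of one message-passing layer. The function names psi and phi, the sum aggregator, and the equivariance check are my own illustrative choices and are not taken from the talk.

```python
# Minimal sketch (not from the talk) of the generic GNN update:
#   h_i = phi(x_i, aggregate_{j in N(i)} psi(x_i, x_j))
import numpy as np

def psi(x_i, x_j):
    # Message function: here simply the neighbour's features.
    return x_j

def phi(x_i, m_i):
    # Update function: concatenate node features with the aggregated message.
    return np.concatenate([x_i, m_i])

def gnn_layer(X, A):
    """One permutation-equivariant message-passing layer.
    X: (n, d) node feature matrix, A: (n, n) adjacency matrix."""
    n, d = X.shape
    out = []
    for i in range(n):
        neighbours = [j for j in range(n) if A[i, j] == 1]
        # Sum aggregation over the 1-hop neighbourhood is permutation-invariant.
        m_i = sum((psi(X[i], X[j]) for j in neighbours), np.zeros(d))
        out.append(phi(X[i], m_i))
    return np.stack(out)

# Permuting the nodes permutes the output rows in the same way (equivariance):
X = np.random.randn(4, 3)
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
P = np.eye(4)[[2, 0, 3, 1]]  # a permutation matrix
assert np.allclose(gnn_layer(P @ X, P @ A @ P.T), P @ gnn_layer(X, A))
```

The particular choices of psi (identity on the neighbour) and phi (concatenation) are arbitrary; any shared functions combined with a permutation-invariant aggregator (sum, mean, max) keep the layer permutation-equivariant.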
Comments
Author

In 38:00 - 43:00, I tried to link GCNs to CNNs, but I made a mistake. It is not psi(x_j) that acts as the filter weights; rather, it is c_ij that plays the role of the filter weights in a traditional CNN. To make a GCN behave like a CNN, we have to define our adjacency structure such that the adjacent pixels are always in the same orientation for each node/pixel. Hence, the permutation-invariance property of GNNs does not hold, as pixel position is important for a CNN filter.
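
A rough sketch of the point above (my own illustration, not the author's code): if a convolutional-flavour GNN update is written as h_i = sum_j c_ij * psi(x_j), a standard CNN is recovered when the coefficient c_ij is a learnable filter weight indexed by the fixed relative position (orientation) of pixel j with respect to pixel i.

```python
# 3x3 "convolution" (cross-correlation, as in deep learning) written as a
# neighbourhood sum with orientation-indexed weights c_ij. Illustrative only.
import numpy as np

def conv2d_as_gcn(image, filt):
    """image: (H, W) array; filt: (3, 3) array of c_ij weights,
    one weight per relative offset (orientation) of the neighbour."""
    H, W = image.shape
    out = np.zeros((H - 2, W - 2))
    for i in range(1, H - 1):
        for j in range(1, W - 1):
            total = 0.0
            for di in (-1, 0, 1):
                for dj in (-1, 0, 1):
                    # c_ij depends only on the relative offset (di, dj),
                    # i.e. the neighbour's orientation, not on the pixel value.
                    total += filt[di + 1, dj + 1] * image[i + di, j + dj]
            out[i - 1, j - 1] = total
    return out
```

Because the weight attached to each neighbour depends on its position relative to the centre pixel, permuting the neighbours changes the output, which is exactly why the permutation-invariance argument for generic GNNs does not carry over to CNN filters.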

For more information, refer to:

johntanchongmin