Is Distance Matrix Enough for Geometric Deep Learning? | Zian Li



Abstract: Graph Neural Networks (GNNs) are often used for tasks involving the 3D geometry of a given graph, such as molecular dynamics simulation. Although the distance matrix of a geometric graph contains complete geometric information, it has been demonstrated that Message Passing Neural Networks (MPNNs) are insufficient for learning this geometry. In this work, we expand the families of counterexamples that MPNNs are unable to distinguish from their distance matrices by constructing novel families of symmetric geometric graphs, in order to better understand the inherent limitations of MPNNs. We then propose k-DisGNNs, which can effectively exploit the rich geometry contained in the distance matrix. We demonstrate the high expressive power of k-DisGNNs from three perspectives: (1) they can learn high-order geometric information that MPNNs cannot capture; (2) they unify several existing well-designed geometric models; and (3) they are universal function approximators from geometric graphs to scalars (when k≥2) and vectors (when k≥3). Most importantly, we establish a connection between geometric deep learning (GDL) and traditional graph representation learning (GRL), showing that highly expressive GNN models originally designed for GRL can also be applied to GDL with impressive performance, and that existing complex, equivariant models are not the only solution.
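To make the setting concrete, here is a minimal sketch of one round of distance-based message passing in the spirit of a "vanilla DisGNN": each node aggregates its neighbors' features weighted by a function of their pairwise distance. This is an illustration only, not the paper's implementation; the function names (`pairwise_distances`, `rbf_encode`, `message_passing_step`), the Gaussian RBF encoding, and the toy weights are all assumptions chosen for clarity.

```python
import numpy as np

def pairwise_distances(pos):
    """Pairwise Euclidean distance matrix from coordinates of shape (n, 3)."""
    diff = pos[:, None, :] - pos[None, :, :]
    return np.linalg.norm(diff, axis=-1)

def rbf_encode(dist, centers):
    """Expand scalar distances into Gaussian radial-basis features."""
    return np.exp(-((dist[..., None] - centers) ** 2))

def message_passing_step(h, dist, centers, W):
    """One distance-based update: aggregate neighbor features weighted
    by a (toy) linear readout of the RBF-encoded pairwise distances."""
    weights = (rbf_encode(dist, centers) @ W).squeeze(-1)  # (n, n)
    np.fill_diagonal(weights, 0.0)  # no self-messages
    return h + weights @ h          # residual aggregation

# Toy 3-atom configuration with one-hot initial node features.
pos = np.array([[0.0, 0.0, 0.0],
                [1.0, 0.0, 0.0],
                [0.0, 1.0, 0.0]])
h = np.eye(3)
centers = np.linspace(0.0, 2.0, 4)  # assumed RBF hyperparameters
W = np.full((4, 1), 0.1)            # stand-in for learned weights
dist = pairwise_distances(pos)
h_new = message_passing_step(h, dist, centers, W)
```

Because the update sees only the distance matrix, it is invariant to rotations and translations of `pos` by construction; the talk's counterexamples show that this node-level (1-order) scheme nevertheless cannot distinguish certain distinct geometries, which motivates the k-order variants.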

Speaker: Zian Li

~

Chapters

00:00 - Intro & Overview
08:07 - Incompleteness of Vanilla DisGNN
15:13 - k-DisGNNs
18:15 - Extracting High-Order Geometric Information
21:43 - Unifying Invariant Geometric Models
23:46 - Completeness and Universality
39:11 - Experiments
40:49 - Experiments: MD17
42:20 - Experiments: rMD17
43:39 - Experiments: QM9 and Effectiveness of Edge Repr
45:36 - Discussion
50:08 - Q+A