MIT EI Seminar - Max Welling - Learning equivariant and hybrid message passing on graphs

MIT Embodied Intelligence Seminar - May 8, 2020

Speaker: Max Welling - University of Amsterdam and Qualcomm
Title: Learning Equivariant and Hybrid Message Passing on Graphs.

Abstract:
In this talk, I will extend graph neural nets in two directions. First, we will ask whether we can formulate a GNN on meshes of two-dimensional manifolds. Previous approaches mostly used standard GNNs, which are invariant to permutations of the input nodes; however, we show that this is unnecessarily restrictive. Instead, we define mesh-CNNs, which are equivariant and allow more general kernels. Second, we will study how to incorporate information about the data-generating process into GNNs. Belief propagation is a form of GNN with no learnable parameters that performs inference in a generative graphical model. We then augment BP with a trainable GNN that corrects the mistakes of BP, in order to improve predictive performance. Experiments show the increased power of both methods.
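To make the second idea more concrete, here is a minimal sketch, assuming PyTorch, of loopy belief propagation on a pairwise discrete MRF in which every BP message receives a small trainable residual correction. All names here (CorrectionNet, hybrid_bp) are hypothetical; this toy illustrates the BP-plus-learned-correction pattern described in the abstract, not the actual model from the talk.

import torch
import torch.nn as nn

K = 2  # number of discrete states per node

class CorrectionNet(nn.Module):
    # Hypothetical trainable residual applied to each BP message (log space).
    def __init__(self, k=K, hidden=16):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(2 * k, hidden), nn.ReLU(),
                                 nn.Linear(hidden, k))

    def forward(self, incoming, outgoing):
        # incoming: aggregated log-evidence at the sender node
        # outgoing: the plain BP log-message about to be sent
        return self.mlp(torch.cat([incoming, outgoing], dim=-1))

def hybrid_bp(log_phi, log_psi, edges, correction, n_iters=10):
    # log_phi: (N, K) node log-potentials
    # log_psi: dict mapping each undirected edge (i, j) to a (K, K)
    #          pairwise log-potential indexed [x_i, x_j]
    directed = edges + [(j, i) for (i, j) in edges]
    msgs = {e: torch.zeros(K) for e in directed}  # uniform initial messages
    for _ in range(n_iters):
        new_msgs = {}
        for (i, j) in directed:
            # Aggregate log-messages into i, excluding the one from j
            incoming = log_phi[i] + sum(
                (msgs[(k, t)] for (k, t) in directed if t == i and k != j),
                torch.zeros(K))
            psi = log_psi[(i, j)] if (i, j) in log_psi else log_psi[(j, i)].t()
            m = torch.logsumexp(incoming.unsqueeze(1) + psi, dim=0)
            # Hybrid step: the learned network nudges the BP message
            m = m + correction(incoming, m)
            new_msgs[(i, j)] = m - torch.logsumexp(m, dim=0)  # normalize
        msgs = new_msgs
    beliefs = torch.stack([
        log_phi[i] + sum((msgs[(k, t)] for (k, t) in directed if t == i),
                         torch.zeros(K))
        for i in range(log_phi.shape[0])])
    return torch.softmax(beliefs, dim=-1)  # approximate node marginals

# Toy usage: a 3-node chain with random potentials
torch.manual_seed(0)
edges = [(0, 1), (1, 2)]
log_phi = torch.randn(3, K)
log_psi = {e: torch.randn(K, K) for e in edges}
print(hybrid_bp(log_phi, log_psi, edges, CorrectionNet()))

In a training setup, the correction network would be fit end to end, for instance by backpropagating through the BP iterations against marginals computed exactly on small graphs sampled from the generative model.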

Bio: Prof. Dr. Max Welling is a research chair in Machine Learning at the University of Amsterdam and a VP Technologies at Qualcomm. He has a secondary appointment as a senior fellow at the Canadian Institute for Advanced Research (CIFAR). He is a co-founder of “Scyfer BV”, a university spin-off in deep learning that was acquired by Qualcomm in the summer of 2017. In the past he held postdoctoral positions at Caltech (’98-’00), UCL (’00-’01) and the University of Toronto (’01-’03). He received his PhD in ’98 under the supervision of Nobel laureate Prof. G. ‘t Hooft. Max Welling served as associate editor-in-chief of IEEE TPAMI from 2011 to 2015 (impact factor 4.8). He has served on the board of the NIPS Foundation (the largest conference in machine learning) since 2015, and was program chair and general chair of NIPS in 2013 and 2014, respectively. He was also program chair of AISTATS in 2009 and ECCV in 2016, and general chair of MIDL 2018. He has served on the editorial boards of JMLR and JML and was an associate editor for Neurocomputing, JCGS and TPAMI. He has received multiple grants from Google, Facebook, Yahoo, NSF, NIH, NWO and ONR-MURI, including an NSF CAREER grant in 2005. He received the ECCV Koenderink Prize in 2010. Welling serves on the board of the Data Science Research Center in Amsterdam, directs the Amsterdam Machine Learning Lab (AMLAB), and co-directs the Qualcomm-UvA deep learning lab (QUVA) and the Bosch-UvA Deep Learning lab (DELTA). He has over 250 scientific publications in machine learning, computer vision, statistics and physics, and an h-index of 62.
Comments

This is a great talk. Personally I don't have the prerequisites for manifold learning, but the idea behind hybrid message passing is quite profound. Just wondering: if you have a Bayesian GNN where the prior encodes the linear assumption, would that be equivalent to the GNN + PGM model presented here? Or is there a limit to the expressiveness of a Bayesian prior?

fredxu