Sindy Löwe: Putting An End to End-to-End

Abstract: Modern deep learning models are typically optimized using end-to-end backpropagation and a global, supervised loss function. Although empirically highly successful, this approach is considered biologically implausible: the brain does not optimize a global objective by backpropagating error signals. Instead, it is highly modular and learns predominantly from local information. In this talk, I will present Greedy InfoMax, a local self-supervised representation learning approach inspired by this local learning in the brain. I will demonstrate how Greedy InfoMax enables us to train a neural network without labels and without end-to-end backpropagation, while achieving highly competitive results on downstream classification tasks. Last but not least, I will outline how this local learning allows us to asynchronously optimize individual subparts of a neural network and distribute this optimization across devices.
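
As a rough illustration of the idea described above, here is a minimal PyTorch sketch, not the authors' implementation: the module definitions and the simplified contrastive loss are assumptions for demonstration only. Each module is trained greedily with its own optimizer and a local InfoNCE-style loss, and detach() blocks gradients from crossing module boundaries, so no end-to-end backpropagation occurs.

```python
import torch
import torch.nn as nn

class LocalModule(nn.Module):
    # Hypothetical stand-in for one gradient-isolated module of the stack.
    def __init__(self, dim):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim)
        )

    def forward(self, x):
        return self.encoder(x)

def local_contrastive_loss(z):
    # Simplified InfoNCE-style loss (an assumption for illustration):
    # treat consecutive samples in the batch as positive pairs and all
    # other samples as negatives.
    z = nn.functional.normalize(z, dim=1)
    logits = z[:-1] @ z[1:].t()                 # pairwise similarities
    targets = torch.arange(logits.size(0))      # positives on the diagonal
    return nn.functional.cross_entropy(logits, targets)

dim, batch = 32, 16
modules = [LocalModule(dim) for _ in range(3)]
optimizers = [torch.optim.Adam(m.parameters(), lr=1e-3) for m in modules]

x = torch.randn(batch, dim)                     # toy input batch
for module, opt in zip(modules, optimizers):
    z = module(x)                               # forward through this module only
    loss = local_contrastive_loss(z)            # local, label-free objective
    opt.zero_grad()
    loss.backward()                             # gradients stay inside this module
    opt.step()
    x = z.detach()                              # gradient isolation between modules
```

Because each module depends on its predecessor only through a detached activation, the per-module updates are independent, which is what makes the asynchronous, cross-device optimization mentioned in the abstract possible.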

Bio: Sindy Löwe is a PhD student in Machine Learning at the University of Amsterdam, supervised by Prof. Max Welling. Currently, she is interning with the Google Brain team in Berlin. For her paper 'Putting An End to End-to-End: Gradient-Isolated Learning of Representations', she received an Honorable Mention for the Outstanding New Directions Paper Award at NeurIPS 2019. Before finding her way into Machine Learning, she completed a BSc in Cognitive Science at the University of Tübingen and worked at the Max Planck Institute for Biological Cybernetics, investigating the inner workings of the visual cortex.

Moderated by: Jon Hare