Sam Ritter: Meta-Learning to Make Smart Inferences from Small Data

Deep learning methods have enabled enormous gains in predictive accuracy when large labeled datasets are available; however, they are not applicable in settings where only a few relevant data points can be obtained. In this talk, I will discuss meta-learning: the process whereby a learning system acquires background knowledge that enables it to later make powerful inferences from only a few examples. This old idea from psychology and computer science has recently resurfaced in the context of modern deep learning, producing stunning advances in the low-shot learning capabilities of neural networks in both supervised and reinforcement learning settings. The talk will cover foundational concepts of meta-learning, key seminal results on meta-learning and meta-reinforcement learning with deep networks, interpretability of meta-learning systems, and, as time permits, the current frontier of meta-learning research.
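To make the core idea concrete, here is a minimal, hypothetical sketch (not from the talk) of the few-shot inference step in the style of prototypical networks, one of the seminal meta-learning results the abstract alludes to. It assumes an embedding has already been meta-learned; at test time, a new class is recognized from a handful of labeled "support" examples by comparing queries to per-class mean embeddings. The function names and the toy data are illustrative assumptions.

```python
import numpy as np

def prototypes(support_x, support_y, n_classes):
    """Mean embedding per class, computed from a small support set."""
    return np.stack([support_x[support_y == c].mean(axis=0)
                     for c in range(n_classes)])

def classify(query_x, protos):
    """Assign each query to the nearest prototype (Euclidean distance)."""
    dists = np.linalg.norm(query_x[:, None, :] - protos[None, :, :], axis=-1)
    return dists.argmin(axis=1)

# Toy 2-way, 2-shot episode with hand-crafted "embeddings":
support_x = np.array([[0.0, 0.1], [0.1, 0.0],   # two examples of class 0
                      [1.0, 0.9], [0.9, 1.0]])  # two examples of class 1
support_y = np.array([0, 0, 1, 1])
protos = prototypes(support_x, support_y, n_classes=2)
pred = classify(np.array([[0.05, 0.05], [0.95, 0.95]]), protos)
```

In the full method, the embedding network itself is trained across many such episodes so that nearest-prototype classification works well on classes never seen during training; the sketch above shows only the cheap per-episode inference that this meta-training buys.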

Bio: Sam is a research scientist at DeepMind and a PhD candidate at the Princeton Neuroscience Institute. His work at the intersection of neuroscience and deep learning has the twin objectives of understanding human cognition and building useful intelligent systems. His recent work focuses on meta-reinforcement learning, deep learning interpretability, and episodic memory in deep reinforcement learning agents. Sam is a former Graduate Fellow of the US National Science Foundation, and his work has been covered in The Economist, The Wall Street Journal, and other venues.

*Sponsors*
Man AHL: At Man AHL, we mix machine learning, computer science and engineering with terabytes of data to invest billions of dollars every day.
