Implicit regularization for general norms and errors - Lorenzo Rosasco, MIT

Implicit regularization refers to the property of optimization methods to bias the search for solutions towards those of small norm, ensuring stability of the estimation process. While this idea is classical for Euclidean norms and quadratic errors, much less is known for more general choices. In this talk we will discuss several results in this direction, with an emphasis on accelerated optimization techniques.
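
For concreteness, the classical Euclidean/quadratic case mentioned above can be illustrated in a few lines: gradient descent on an underdetermined least-squares problem, started at zero and run with no explicit penalty, converges to the minimum Euclidean-norm interpolant. The sketch below is illustrative only, with randomly generated data and an assumed step-size choice; it is not material from the talk.

import numpy as np

# Gradient descent on an underdetermined least-squares problem,
# f(w) = 0.5 * ||A w - y||^2 with more unknowns than equations.
# Started from w = 0 and run with no penalty term, the iterates
# converge to the minimum Euclidean-norm solution: implicit
# regularization in the classical Euclidean/quadratic setting.
rng = np.random.default_rng(0)
n, d = 20, 100                            # 20 equations, 100 unknowns
A = rng.standard_normal((n, d))
y = rng.standard_normal(n)

w = np.zeros(d)                           # initialization at zero is essential
step = 1.0 / np.linalg.norm(A, 2) ** 2    # step below 1/L, with L = ||A||_2^2
for _ in range(5000):
    w -= step * A.T @ (A @ w - y)         # plain gradient step

w_min = np.linalg.pinv(A) @ y             # minimum-norm interpolant, for reference
print("residual ||Aw - y||:", np.linalg.norm(A @ w - y))     # ~ 0: interpolates
print("distance to min-norm:", np.linalg.norm(w - w_min))    # ~ 0: same solution

The key point is that no norm penalty appears anywhere in the update: the bias towards the small-norm solution comes entirely from the choice of algorithm and initialization.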

---

Recent years have witnessed increased cross-fertilisation between the fields of statistics and computer science. In the era of Big Data, statisticians increasingly face the question of how to guarantee prescribed levels of inferential accuracy within a given time budget. Computer scientists, on the other hand, are progressively modelling data as noisy measurements drawn from an underlying population, exploiting the statistical regularities of the data to save on computation.

This cross-fertilisation has led to the development and understanding of many of the algorithmic paradigms that underpin modern machine learning, including gradient descent methods and their generalisation guarantees, implicit regularisation strategies, and high-dimensional statistical models and algorithms.

About the event

This event will bring together experts to discuss advances at the intersection of statistics and computer science in machine learning. The two-day conference will focus on the underlying theory and its links with applications, and will feature 12 talks by leading international researchers.

The intended audience is faculty, postdoctoral researchers, and Ph.D. students from the UK/EU; the aim is to introduce them to this area of research and to the Turing.