Efficient Algorithms for High Dimensional Robust Learning

We study high-dimensional estimation in a setting where an adversary is allowed to arbitrarily corrupt an $\varepsilon$-fraction of the samples. Such questions have a rich history spanning statistics, machine learning, and theoretical computer science. Even in the most basic settings, for over fifty years the only known approaches were either computationally inefficient or lost dimension-dependent factors in their error guarantees. This raises the following question: is high-dimensional robust estimation even possible, algorithmically?

In this talk, we present the first computationally efficient algorithms with dimension-independent error guarantees for (1) robustly estimating the mean and covariance of a Gaussian, (2) robustly estimating the mean of a distribution with bounded second moment, and (3) robust stochastic optimization. Our algorithms achieve error that is independent of the dimension and, in many cases, scales nearly optimally with the fraction of adversarially corrupted samples. Moreover, we develop a general recipe for detecting and correcting corruptions in high dimensions that may be applicable to many other problems. As a specific example, we demonstrate the first known defense against real-world watermarking attacks on neural networks.
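To make the "detect and correct" recipe concrete, below is a minimal sketch of iterative spectral filtering for robust mean estimation, in the style of this line of work: compute the empirical covariance, and if its top eigenvalue certifies low corruption, return the empirical mean; otherwise, project the samples onto the top eigenvector and discard the points with the most extreme projections. The function name, the stopping constant `threshold_mult`, and the removal fraction are illustrative choices, not the talk's exact algorithm or constants, and the sketch assumes the clean data is roughly Gaussian with identity covariance.

```python
import numpy as np

def robust_mean(X, eps, threshold_mult=9.0):
    """Sketch of spectral filtering for robust mean estimation.

    Assumes the inliers are roughly N(mu, I); `threshold_mult` is an
    illustrative constant, not a constant from the talk.
    """
    X = np.asarray(X, dtype=float)
    while True:
        mu = X.mean(axis=0)
        cov = np.cov(X, rowvar=False)          # empirical covariance (d x d)
        eigvals, eigvecs = np.linalg.eigh(cov)  # ascending eigenvalues
        lam, v = eigvals[-1], eigvecs[:, -1]
        # If no direction has inflated variance, the corruptions cannot
        # have shifted the mean by much: return the empirical mean.
        if lam <= 1 + threshold_mult * eps:
            return mu
        # Otherwise, score each point by its squared projection onto the
        # worst direction and drop the eps-fraction with the largest scores.
        scores = ((X - mu) @ v) ** 2
        k = max(1, int(eps * len(X)))
        X = X[np.argsort(scores)[:-k]]
```

A typical use: mix clean Gaussian samples with a small cluster of planted outliers, then compare `robust_mean(X, eps)` against the naive `X.mean(axis=0)`, whose error grows with the dimension.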

Based on joint works with Ilias Diakonikolas, Gautam Kamath, Daniel Kane, Aleksander Madry, Ankur Moitra, Jacob Steinhardt, Alistair Stewart, and Brandon Tran.
