Towards Reliable Machine Learning via Distributional Robustness

A Google TechTalk, presented by Hongseok Namkoong, 2021/05/04
ABSTRACT: The standard ML paradigm of optimizing average-case performance produces models that do poorly under distribution shift. We propose a distributionally robust stochastic optimization (DRO) framework over shifts in the data-generating distribution, and develop efficient procedures that guarantee a uniform level of performance over subpopulations. By leveraging connections to causal learning, our methods interpolate smoothly from shifts in the covariate distribution (X) to shifts in unobserved confounders (Y | X). We characterize the trade-off between distributional robustness and sample complexity, and prove that our procedure achieves the optimal trade-off. Empirically, our procedure improves tail performance and maintains good performance on subpopulations even over time.
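For intuition, below is a minimal sketch (illustrative only, not the speaker's implementation) of one standard subpopulation-DRO objective from this line of work: the worst-case average loss over any subpopulation making up at least an alpha fraction of the data, which equals the conditional value-at-risk (CVaR) of the per-example losses. The function name dro_loss and the choice alpha=0.1 are assumptions for illustration.

    import numpy as np

    def dro_loss(losses, alpha):
        # Worst-case average loss over any subpopulation of proportion >= alpha.
        # Dual (Rockafellar-Uryasev) form: min_eta  eta + E[(loss - eta)_+] / alpha,
        # whose minimizer eta is the (1 - alpha)-quantile of the losses.
        eta = np.quantile(losses, 1.0 - alpha)
        return eta + np.mean(np.maximum(losses - eta, 0.0)) / alpha

    # Example: per-example losses from some trained model.
    losses = np.random.rand(1000)
    print(dro_loss(losses, alpha=0.1))  # average loss on the worst-off 10%

Minimizing a quantity like this, rather than the plain average loss, is what yields the uniform performance guarantee over subpopulations described in the abstract.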

Bio: Hongseok Namkoong is an Assistant Professor in the Decision, Risk, and Operations division in the Graduate School of Business at Columbia University. His research interests lie at the interface of machine learning, operations research, and statistics, with a particular emphasis on developing reliable machine learning methods for decision-making problems. Hong is a recipient of several awards and fellowships, including best paper awards at the Neural Information Processing Systems conference and the International Conference on Machine Learning (runner-up), and the best student paper award from the INFORMS Applied Probability Society. He received his Ph.D. from Stanford University where he was jointly advised by John Duchi and Peter Glynn, and worked as a research scientist at Facebook Core Data Science before joining Columbia.