【EP11】Improving Robustness to Distribution Shifts: Methods and Benchmarks
#computervision
**Title**
Improving Robustness to Distribution Shifts: Methods and Benchmarks
**Abstract**
Machine learning models deployed in the real world constantly face distribution shifts, yet current models are not robust to them: a model can perform well when the train and test distributions are identical and still see its performance plummet when evaluated on a different test distribution. In this talk, I will discuss methods and benchmarks for improving robustness to distribution shifts. First, we consider the problem of spurious correlations and show how to mitigate it with a combination of distributionally robust optimization (DRO) and control of model complexity, e.g., through strong L2 regularization, early stopping, or underparameterization. Second, we present WILDS, a curated and diverse collection of 10 datasets with real-world distribution shifts, which aims to address the under-representation of such shifts in the datasets widely used in the ML community today. We observe that existing methods fail to mitigate the performance drops caused by distribution shifts in WILDS, even though these methods have been successful on existing benchmarks with different types of distribution shifts. This underscores the importance of developing and evaluating methods on diverse types of distribution shifts, including shifts that arise directly in practice.
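To make the DRO idea concrete, here is a minimal pure-Python sketch of a group DRO-style update: group weights are adjusted multiplicatively toward the group with the highest loss, so the training objective approaches the worst-group loss rather than the average loss. This is an illustrative simplification written for this description, not the speaker's implementation; the function names (`group_dro_weights`, `group_dro_objective`) and the step size are assumptions, and in practice the per-group losses would come from a regularized deep model.

```python
import math

def group_dro_weights(group_losses, weights, step_size=0.01):
    """One exponentiated-gradient update of the group weights.

    Groups with higher loss get exponentially more weight, so
    repeated updates concentrate weight on the worst group.
    """
    new_w = [w * math.exp(step_size * l) for w, l in zip(weights, group_losses)]
    total = sum(new_w)
    return [w / total for w in new_w]

def group_dro_objective(group_losses, weights):
    """Weighted training loss; as the weights concentrate on the
    worst-performing group, this approaches the max group loss."""
    return sum(w * l for w, l in zip(group_losses, weights))
```

For example, starting from uniform weights over three groups with losses `[0.2, 1.0, 0.5]`, repeated updates drive nearly all of the weight onto the second (worst) group, and the weighted objective approaches that group's loss; plain average-loss training would instead let the worst group lag behind.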
**Speaker**
Shiori Sagawa is a fourth-year PhD student at Stanford University, advised by Percy Liang. She studies robustness to distribution shifts, and to this end, she has developed methods based on distributionally robust optimization, analyzed these algorithms in the context of deep learning models, and recently built a benchmark on distribution shifts in the wild. She is an Apple PhD Scholar in AI/ML.