Introduction to Conformal Prediction and Distribution-Free Uncertainty Quantification

As we begin deploying machine learning models in consequential settings like medical diagnostics or self-driving vehicles, knowing a model's average accuracy is not enough. We need a way of quantifying an algorithm's uncertainty for a particular test-time instance while rigorously guaranteeing that consequential errors do not happen too frequently (for example, that the car does not hit a human). I'll discuss how to generate rigorous, finite-sample confidence intervals for any prediction task, model, and dataset, essentially for free.
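The abstract states the guarantee at a high level. As a concrete illustration, below is a minimal sketch of split conformal prediction in Python; the linear model, synthetic data, and miscoverage level alpha = 0.1 are illustrative assumptions, not details taken from the talk.

```python
# Minimal split conformal prediction sketch (illustrative, not the speaker's code).
# Wraps an arbitrary fitted regression model with a finite-sample prediction interval.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# Synthetic data: y = x + noise (assumption for the example).
X = rng.normal(size=(2000, 1))
y = X[:, 0] + rng.normal(scale=0.5, size=2000)

# Split the data into a proper training set and a calibration set.
X_train, y_train = X[:1000], y[:1000]
X_cal, y_cal = X[1000:], y[1000:]

# Fit any predictive model on the training split.
model = LinearRegression().fit(X_train, y_train)

# Conformal scores on the calibration set: absolute residuals.
scores = np.abs(y_cal - model.predict(X_cal))

# Score quantile with the finite-sample (n + 1) correction.
alpha = 0.1  # target miscoverage rate (assumption)
n = len(scores)
q_level = np.ceil((n + 1) * (1 - alpha)) / n
qhat = np.quantile(scores, q_level, method="higher")

# Interval for a new point: under exchangeability of calibration and test data,
# it contains the true y with probability at least 1 - alpha.
x_new = np.array([[0.3]])
pred = model.predict(x_new)[0]
print(f"90% prediction interval: [{pred - qhat:.3f}, {pred + qhat:.3f}]")
```

The same recipe applies with any model in place of the linear regression; only the calibration residuals and their corrected quantile are needed.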

Presentation by: Anastasios Angelopoulos
Works mentioned in the presentation and relevant links:
Angelopoulos, A. N., & Bates, S. (2021). A gentle introduction to conformal prediction and distribution-free uncertainty quantification. arXiv preprint arXiv:2107.07511.

Foygel Barber, R., Candès, E. J., Ramdas, A., & Tibshirani, R. J. (2022). Conformal prediction beyond exchangeability. arXiv preprint, arXiv:2202.

Fannjiang, C., Bates, S., Angelopoulos, A. N., Listgarten, J., & Jordan, M. I. (2022). Conformal prediction for the design problem.

Tibshirani, R. J., Foygel Barber, R., Candès, E. J., & Ramdas, A. (2019). Conformal prediction under covariate shift. Advances in Neural Information Processing Systems, 32.

Tibshirani, R. J., Foygel Barber, R., Candès, E. J., & Ramdas, A. (2019). Supplement to "Conformal Prediction Under Covariate Shift".