Introduction to Conformal Prediction and Distribution-Free Uncertainty Quantification

As we begin deploying machine learning models in consequential settings like medical diagnostics or self-driving vehicles, knowing a model's accuracy is not enough. We need a way of quantifying an algorithm's uncertainty for a particular test-time instance while rigorously guaranteeing that consequential errors don't happen too frequently (for example, that the car doesn't hit a human). I'll be discussing how to generate rigorous, finite-sample confidence intervals for any prediction task, any model, and any dataset, for free. This will be a chalk talk where I begin with a short tutorial on a method called conformal prediction and tease a more flexible method that works for a larger class of prediction problems, including those with high-dimensional, structured outputs (e.g., instance segmentation, multiclass or hierarchical classification, protein folding, and so on).

00:00 Intro
02:52 Motivation: Downstream decision-making tasks
08:06 Objectives: Exact coverage; Small size; Adaptive
10:33 1. Get score of the correct class
11:42 2. Take the ~10% quantile
13:05 3. Form prediction sets
23:56 Recap 1 - softmax score of true class
30:18 Conformal Prediction (general case): 0) Identify heuristic notion of uncertainty 1) Define score function 2) Compute the quantile 3) Deploy
32:20 Example: Conformalized Quantile Regression
35:08 IDEA: Inflate by a constant to fix the coverage
39:06 Part 2 - Beyond Simple Problems
41:46 Intuitive Example
48:46 Audience Question: Is this OK?
58:55 Multi-label classification
01:10:14 Conclusion and Discussion

[Chapters were auto-generated using our proprietary software - contact us if you are interested in access to the software]
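For concreteness, steps 1-3 in the chapter list above are the split conformal recipe. Here is a minimal NumPy sketch of that recipe (not code from the talk): the score is one common choice, 1 minus the softmax probability of the true class, and the names conformal_classification_sets, cal_probs, cal_labels, and test_probs are placeholders for a held-out calibration set and a test batch.

```python
import numpy as np

def conformal_classification_sets(cal_probs, cal_labels, test_probs, alpha=0.10):
    """Split conformal prediction sets from softmax outputs (illustrative sketch).

    cal_probs:  (n, K) softmax outputs on a held-out calibration set
    cal_labels: (n,)   integer labels for the calibration set
    test_probs: (m, K) softmax outputs at test time
    alpha:      target miscoverage rate (0.10 -> ~90% coverage)
    """
    n = len(cal_labels)
    # Step 1: score of the correct class (here: 1 minus its softmax probability).
    cal_scores = 1.0 - cal_probs[np.arange(n), cal_labels]
    # Step 2: take the conformal quantile of the scores, with the (n + 1)/n
    # finite-sample correction. Requires NumPy >= 1.22 for the `method` keyword.
    q_level = np.ceil((n + 1) * (1 - alpha)) / n
    qhat = np.quantile(cal_scores, q_level, method="higher")
    # Step 3: form prediction sets -- keep every class whose softmax probability
    # is at least 1 - qhat. Returns a boolean (m, K) mask, one row per test point.
    return test_probs >= 1.0 - qhat
```

The ceil((n + 1)(1 - alpha)) / n quantile level is the finite-sample correction that makes the coverage guarantee hold with a calibration set of size n.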
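The conformalized quantile regression example and the "inflate by a constant to fix the coverage" idea follow the same pattern, with the score measuring how far the label falls outside the heuristic quantile-regression interval. The sketch below makes the same assumptions (exchangeable data, held-out calibration set); cqr_intervals and its argument names are hypothetical.

```python
import numpy as np

def cqr_intervals(cal_lo, cal_hi, cal_y, test_lo, test_hi, alpha=0.10):
    """Conformalized quantile regression: inflate heuristic intervals by a constant.

    cal_lo, cal_hi:   lower/upper quantile-regression predictions on the calibration set
    cal_y:            calibration targets
    test_lo, test_hi: the same quantile predictions at test time
    """
    n = len(cal_y)
    # Score: how far the true value falls outside the heuristic interval
    # (negative if it falls strictly inside).
    scores = np.maximum(cal_lo - cal_y, cal_y - cal_hi)
    # Conformal quantile with the finite-sample correction.
    q_level = np.ceil((n + 1) * (1 - alpha)) / n
    qhat = np.quantile(scores, q_level, method="higher")
    # Inflate (or shrink, if qhat < 0) both endpoints by the same constant,
    # which fixes the coverage at >= 1 - alpha.
    return test_lo - qhat, test_hi + qhat
```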

The presentation is based on the speaker's papers:

A Gentle Introduction to Conformal Prediction and Distribution-Free Uncertainty Quantification

Uncertainty Sets for Image Classifiers using Conformal Prediction

Distribution-Free, Risk-Controlling Prediction Sets

Learn then Test: Calibrating Predictive Algorithms to Achieve Risk Control

Presenter Bio:

Anastasios Nikolas Angelopoulos is a third-year Ph.D. student at the University of California, Berkeley, advised by Michael I. Jordan and Jitendra Malik. From 2016 to 2019, he was an electrical engineering student at Stanford University, advised by Gordon Wetzstein and Stephen P. Boyd.
