Towards Good Validation Metrics for Generative Models | Christopher Beckham



Title: Towards Good Validation Metrics for Generative Models in Offline Model-Based Optimisation

Abstract: In this work we propose a principled evaluation framework for model-based optimisation to measure how well a generative model can extrapolate. We achieve this by interpreting the training and validation splits as draws from their respective ‘truncated’ ground truth distributions, where examples in the validation set have scores much larger than those in the training set. Model selection is performed on the validation set for some prescribed validation metric. A major research question, however, is determining which validation metric correlates best with the expected value of generated candidates under the ground truth oracle; progress on this question can translate into large economic gains, since evaluating the ground truth oracle in the real world is expensive. We compare various validation metrics for generative adversarial networks using our framework. We also discuss limitations of our framework with respect to existing datasets and how they might be mitigated.
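The truncated-split construction the abstract describes can be sketched as a score-quantile cut: examples below a cutoff form the training set, and the higher-scoring remainder forms the validation set. This is an illustrative sketch only; the function name `truncated_split` and the 90th-percentile cutoff are assumptions, not the paper's actual code.

```python
import numpy as np

def truncated_split(X, y, gamma=0.9):
    """Split (X, y) so validation examples have strictly larger
    scores than training examples, via a quantile cutoff on y.

    Hypothetical helper; the paper's splitting procedure may
    differ in details such as the choice of quantile.
    """
    thresh = np.quantile(y, gamma)           # score cutoff at the gamma-quantile
    train_idx = np.where(y <= thresh)[0]     # lower-scoring draws -> training set
    valid_idx = np.where(y > thresh)[0]      # higher-scoring draws -> validation set
    return (X[train_idx], y[train_idx]), (X[valid_idx], y[valid_idx])

# toy example: 1000 candidates with 8 features and continuous scores
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 8))
y = rng.normal(size=1000)
(train_X, train_y), (valid_X, valid_y) = truncated_split(X, y, gamma=0.9)
assert train_y.max() < valid_y.min()  # validation scores exceed all training scores
```

Model selection would then pick, among generative models trained on `(train_X, train_y)`, the one scoring best under the chosen validation metric on the held-out high-scoring split, probing extrapolation beyond the training score range.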

~

Chapters:

00:00 - Intro
03:17 - Model-based Optimization (MBO) & the Naive Approach
08:10 - Looking at MBO Through Generative Modelling Lens
17:06 - Refresher on Evaluating Models in ML
19:43 - Evaluation for MBO
21:39 - Types of MBO Datasets
27:31 - When Ground Truth is Not Available
38:51 - Conclusion
40:46 - Q+A