Discrete multi-fidelity optimization

This video is #9 in the Adaptive Experimentation series presented at the 18th IEEE Conference on eScience in Salt Lake City, UT (October 10-14, 2022). In this video, Sterling Baird @sterling-baird presents on discrete multi-fidelity optimization. In discrete multi-fidelity optimization, models or simulations are combined in a discrete manner: the optimization process switches between different models or simulations at specific points. For example, a cheap low-fidelity model might be used at the beginning of the optimization process to get a rough idea of the solution, and then an expensive high-fidelity model might be used to refine it. Stay tuned for the next video in this series, covering offline optimization.
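The switching behavior described above can be sketched with a minimal two-stage example. This is a hypothetical toy illustration (made-up objective functions, plain NumPy, no surrogate model), not the Ax-based workflow shown in the video: a cheap, biased low-fidelity function screens many candidates, and the expensive high-fidelity function is then used only to refine the most promising ones.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy objective: the high-fidelity function is the ground
# truth; the low-fidelity function is a cheap, biased approximation.
def high_fidelity(x):
    return np.sin(3 * x) + 0.5 * x          # stand-in for an expensive simulation

def low_fidelity(x):
    return high_fidelity(x) + 0.3 * np.cos(5 * x)  # cheap but biased

# Stage 1: screen many candidates with the cheap model.
candidates = rng.uniform(-2, 2, size=200)
cheap_scores = low_fidelity(candidates)

# Stage 2: switch fidelities — verify only the top candidates expensively.
top = candidates[np.argsort(cheap_scores)[:5]]
true_scores = high_fidelity(top)
best_x = top[np.argmin(true_scores)]

print(f"best x = {best_x:.3f}, f(x) = {high_fidelity(best_x):.3f}")
```

A real multi-fidelity BO loop would let the acquisition function decide when to switch fidelities instead of hard-coding the two stages, but the cost-saving structure is the same.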


0:00 discrete vs continuous multi-fidelity optimization vs multi-task
1:18 problem setup, helper functions
2:45 running optimization and comparing to expected improvement
3:20 examples from literature
4:10 multi-task BO via Ax
8:18 visualize simulator bias
8:53 BO loop
13:48 examples from literature
Comments

Hello Dr. Baird,
Thank you so much for creating this informative playlist. I have gone through the whole playlist to learn about multi-fidelity Bayesian optimization in an attempt to apply it to my research. I am very new to this field, and one thing I am struggling with is applying these concepts to a real-world parameter search problem. A particular example: we want to explore a parameter configuration space for a scientific application to find which parameters give the least runtime. The optimization is based on a dataset that maps each configuration to its runtime, rather than the hyperparameter-tuning approach where we iteratively run a neural network to get an observed metric to optimize against. Would it be possible to show such an optimization implementation in a tutorial?

Hoe-ssain
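The setup described in the comment above — Bayesian optimization over a fixed, precomputed table of configurations and runtimes — can be sketched as follows. Everything here is a hypothetical illustration (synthetic data, a minimal NumPy Gaussian process, a lower-confidence-bound acquisition), not the Ax API from the video: each "evaluation" is just a lookup into the dataset rather than running the application.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical precomputed dataset: parameter configurations mapped to
# measured runtimes (stand-in for the commenter's scientific application).
configs = rng.uniform(0, 1, size=(100, 2))                  # candidate pool
runtimes = (configs ** 2).sum(axis=1) + 0.01 * rng.normal(size=100)

def gp_posterior(X_train, y_train, X_test, length=0.3, noise=1e-4):
    """Minimal GP regression with an RBF kernel (NumPy only)."""
    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-0.5 * d2 / length ** 2)
    K = k(X_train, X_train) + noise * np.eye(len(X_train))
    Ks = k(X_train, X_test)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
    mu = Ks.T @ alpha
    v = np.linalg.solve(L, Ks)
    var = np.clip(1.0 - (v ** 2).sum(axis=0), 1e-12, None)
    return mu, np.sqrt(var)

# BO loop over the fixed candidate set: each iteration "evaluates" a
# config by looking its runtime up in the table, then refits the surrogate.
observed = list(rng.choice(100, size=5, replace=False))
for _ in range(15):
    mu, sigma = gp_posterior(configs[observed], runtimes[observed], configs)
    lcb = mu - 2.0 * sigma          # optimistic estimate, for minimization
    lcb[observed] = np.inf          # never re-query a known configuration
    observed.append(int(np.argmin(lcb)))

best_idx = min(observed, key=lambda i: runtimes[i])
print("best config:", configs[best_idx], "runtime:", runtimes[best_idx])
```

In Ax terms, this corresponds to attaching each chosen configuration as a trial and completing it with the runtime read from the dataset; the acquisition step only ever proposes points from the finite candidate pool.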