Imprecise Bayesian optimization
Published in: Knowledge-Based Systems, Vol. 300, p. 112186
Format: Journal Article
Language: English
Published: Elsevier B.V., 27-09-2024
Summary: Bayesian optimization (BO) with Gaussian process (GP) surrogate models is widely used to optimize analytically unknown and expensive-to-evaluate functions. In this paper, we propose a robust version of BO grounded in the theory of imprecise probabilities: Prior-mean-RObust Bayesian Optimization (PROBO). Our method is motivated by an empirical and theoretical analysis of the effect of the GP prior specification on BO’s convergence. A thorough simulation study finds the prior’s mean parameters to have the highest influence on BO’s convergence among all prior components. We thus examine this part of the GP prior in more detail. In particular, we prove regret bounds for BO under misspecification of the GP prior’s mean parameters. We show that sublinear regret bounds become linear under GP misspecification but stay sublinear if the misspecification-induced error is bounded by the variance of the GP. In response to these empirical and theoretical findings, we introduce PROBO as a univariate generalization of BO that avoids prior mean parameter misspecification. This is achieved by explicitly accounting for prior GP mean imprecision via a prior near-ignorance model. We deploy our approach on graphene production, a real-world optimization problem in materials science, and observe PROBO to converge faster than classical BO.¹,²

¹ Open Science: Implementations of PROBO and reproducible scripts for the experimental analysis, as well as all reported data, are available at: https://github.com/rodemann/imprecise-bayesian-optimization.
² Some parts of an earlier version of this work were presented at the Ninth International Symposium on Integrated Uncertainty in Knowledge Modelling and Decision Making (IUKM) and published in the corresponding proceedings [1]; see Appendix D for details.
Highlights:
• We study the effect of Gaussian process misspecification on Bayesian optimization (BO).
• Prior mean parameters are found to have the highest impact on BO’s convergence.
• We prove that prior mean parameter misspecification leads to linear regret bounds.
• We propose a robust variant of BO that avoids prior mean parameter misspecification.
• This is achieved by deploying imprecise Gaussian processes as surrogate models.
ISSN: 0950-7051
DOI: 10.1016/j.knosys.2024.112186
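The summary describes BO with a GP surrogate whose posterior depends on the prior mean specification. The following is a minimal, NumPy-only sketch of that setup, not the authors' PROBO implementation: the function names (`bayes_opt`, `gp_posterior`), the RBF kernel with unit signal variance, the toy objective, and all hyperparameter values are illustrative assumptions. The constant `prior_mean` argument is the quantity whose misspecification the paper analyzes.

```python
import math
import numpy as np

def rbf_kernel(a, b, length_scale=0.2):
    """Squared-exponential kernel on 1-D inputs (unit signal variance)."""
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / length_scale) ** 2)

def gp_posterior(x_obs, y_obs, x_query, prior_mean=0.0, jitter=1e-6):
    """Posterior mean and std of a GP with a constant prior mean."""
    K = rbf_kernel(x_obs, x_obs) + jitter * np.eye(len(x_obs))
    Ks = rbf_kernel(x_query, x_obs)
    alpha = np.linalg.solve(K, y_obs - prior_mean)
    mu = prior_mean + Ks @ alpha
    # Posterior variance: prior variance (=1) minus explained variance.
    var = 1.0 - np.sum(Ks * np.linalg.solve(K, Ks.T).T, axis=1)
    return mu, np.sqrt(np.clip(var, 1e-12, None))

def expected_improvement(mu, sd, best):
    """EI acquisition for minimization: E[max(best - Y, 0)]."""
    z = (best - mu) / sd
    Phi = 0.5 * (1.0 + np.vectorize(math.erf)(z / math.sqrt(2.0)))
    phi = np.exp(-0.5 * z ** 2) / math.sqrt(2.0 * math.pi)
    return (best - mu) * Phi + sd * phi

def bayes_opt(f, bounds, prior_mean=0.0, n_init=3, n_iter=12, seed=1):
    """Grid-based BO loop: fit GP, maximize EI, evaluate, repeat."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(bounds[0], bounds[1], n_init)
    y = np.array([f(x) for x in X])
    grid = np.linspace(bounds[0], bounds[1], 201)
    for _ in range(n_iter):
        mu, sd = gp_posterior(X, y, grid, prior_mean)
        x_next = grid[np.argmax(expected_improvement(mu, sd, y.min()))]
        X, y = np.append(X, x_next), np.append(y, f(x_next))
    return X[np.argmin(y)], y.min()

# Toy objective (assumed for illustration); minimum at x = 0.6.
x_best, y_best = bayes_opt(lambda x: (x - 0.6) ** 2, (0.0, 1.0))
```

Re-running `bayes_opt` with a badly chosen `prior_mean` (e.g. far above or below the objective's range) illustrates the sensitivity the paper studies; PROBO, per the summary, replaces this single prior mean with a near-ignorance set of priors rather than committing to one value.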