High-dimensional Bayesian optimization using low-dimensional feature spaces
Format: Journal Article
Language: English
Published: 27-02-2019
Summary:

Bayesian optimization (BO) is a powerful approach for seeking the global optimum of expensive black-box functions and has proven successful for fine-tuning the hyper-parameters of machine learning models. However, BO is practically limited to optimizing 10--20 parameters. To scale BO to high dimensions, we usually make structural assumptions on the decomposition of the objective and/or exploit the intrinsic lower dimensionality of the problem, e.g. by using linear projections. We could achieve a higher compression rate with nonlinear projections, but learning these nonlinear embeddings typically requires large amounts of data. This contradicts the BO objective of a relatively small evaluation budget. To address this challenge, we propose to learn a low-dimensional feature space jointly with (a) the response surface and (b) a reconstruction mapping. Our approach allows BO's acquisition function to be optimized in the lower-dimensional subspace, which significantly simplifies the optimization problem. We reconstruct the original parameter space from the lower-dimensional subspace in order to evaluate the black-box function. For meaningful exploration, we solve a constrained optimization problem.
DOI: 10.48550/arxiv.1902.10675
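
To make the loop described in the summary concrete, here is a minimal Python sketch of feature-space BO. It is an illustration under simplifying assumptions, not the paper's implementation: the jointly learned nonlinear embedding and reconstruction mapping are stood in for by plain PCA (refit on the observed data each iteration), and the paper's constrained optimization for meaningful exploration is approximated by clipping reconstructed points to the search-box bounds. The objective `black_box`, the dimensions, and all numeric settings are made up for the example.

```python
# Sketch: Bayesian optimization in a low-dimensional feature space.
# Surrogate and acquisition live in d dimensions; the expensive function
# is evaluated on points reconstructed back into the original D dimensions.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm
from sklearn.decomposition import PCA
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)
D, d = 50, 2                             # original and feature-space dimensions
bounds = np.array([[-5.0, 5.0]] * D)     # box constraints in the original space

def black_box(x):
    # Toy expensive objective with a low-dimensional active subspace.
    return np.sum(x[:2] ** 2) + 0.01 * np.sum(x[2:] ** 2)

def neg_expected_improvement(z, gp, y_best):
    # Negative EI (minimization convention) at feature-space point z.
    mu, sigma = gp.predict(z.reshape(1, -1), return_std=True)
    sigma = max(float(sigma[0]), 1e-9)
    gamma = (y_best - float(mu[0])) / sigma
    return -(sigma * (gamma * norm.cdf(gamma) + norm.pdf(gamma)))

# Initial design in the original D-dimensional space.
X = rng.uniform(bounds[:, 0], bounds[:, 1], size=(10, D))
y = np.array([black_box(x) for x in X])

for _ in range(20):
    # PCA here is a stand-in for the paper's learned nonlinear embedding,
    # which is trained jointly with the GP response surface.
    pca = PCA(n_components=d).fit(X)
    Z = pca.transform(X)
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0),
                                  normalize_y=True).fit(Z, y)

    # Maximize EI in the d-dimensional feature space (multi-start L-BFGS-B):
    # this is the step that becomes cheap once the search is low-dimensional.
    z_lo, z_hi = Z.min(axis=0), Z.max(axis=0)
    starts = rng.uniform(z_lo, z_hi, size=(10, d))
    best = min((minimize(neg_expected_improvement, z0, args=(gp, y.min()),
                         bounds=list(zip(z_lo, z_hi)), method="L-BFGS-B")
                for z0 in starts),
               key=lambda res: res.fun)

    # Reconstruct the candidate in the original space; clipping to the box
    # crudely replaces the paper's constrained-optimization exploration step.
    x_new = pca.inverse_transform(best.x.reshape(1, -1))[0]
    x_new = np.clip(x_new, bounds[:, 0], bounds[:, 1])
    X = np.vstack([X, x_new])
    y = np.append(y, black_box(x_new))

print("best value found:", y.min())
```

The structural point of the abstract survives the simplifications: the acquisition function is maximized over only d = 2 feature dimensions rather than the original D = 50 parameters, and the expensive black-box is evaluated only at the single reconstructed candidate per iteration.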