Leveraging Trust for Joint Multi-Objective and Multi-Fidelity Optimization

Bibliographic Details
Main Authors: Irshad, Faran, Karsch, Stefan, Döpp, Andreas
Format: Journal Article
Language: English
Published: 28-06-2023
Description
Summary: IOP Machine Learning: Science and Technology (2024). In the pursuit of efficient optimization of expensive-to-evaluate systems, this paper investigates a novel approach to Bayesian multi-objective and multi-fidelity (MOMF) optimization. Traditional optimization methods, while effective, often encounter prohibitively high costs in multi-dimensional optimization of one or more objectives. Multi-fidelity approaches offer potential remedies by utilizing multiple, less costly information sources, such as low-resolution simulations. However, integrating these two strategies presents a significant challenge. We suggest the innovative use of a trust metric to support simultaneous optimization of multiple objectives and data sources. Our method modifies a multi-objective optimization policy to incorporate the trust gain per evaluation cost as one objective in a Pareto optimization problem, enabling simultaneous MOMF at lower costs. We present and compare two MOMF optimization methods: a holistic approach that selects the input parameters and the trust parameter jointly, and a sequential approach for benchmarking. Through benchmarks on synthetic test functions, our approach is shown to yield significant cost reductions of up to an order of magnitude compared to pure multi-objective optimization. Furthermore, we find that joint optimization of the trust and objective domains outperforms addressing them in a sequential manner. We validate our results on the use case of optimizing laser-plasma acceleration simulations, demonstrating our method's potential for Pareto optimization of high-cost black-box functions. Implementing these methods in existing Bayesian frameworks is simple, and they can be readily extended to batch optimization. With their capability to handle various continuous or discrete fidelity dimensions, our techniques offer broad applicability to simulation problems in fields such as plasma physics and fluid dynamics.
DOI:10.48550/arxiv.2112.13901
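Illustrative sketch (not the authors' implementation): the summary describes treating the trust gain per evaluation cost as one more objective, so that a standard Pareto selection trades off objective quality against information cost, with candidates spanning both the input parameters and the trust/fidelity parameter. The test function, trust model, cost model, and the random candidate pool standing in for a Bayesian acquisition step below are all assumptions made for this example.

# Minimal sketch of trust-augmented Pareto selection, assuming a synthetic
# objective, a fidelity-dependent cost, and trust gain = fidelity / cost.
import numpy as np

rng = np.random.default_rng(0)

def objective(x, s):
    """Hypothetical expensive objective; fidelity s in [0, 1] adds low-fidelity bias."""
    true_val = -np.sum((x - 0.5) ** 2)            # maximise (peak at x = 0.5)
    bias = (1.0 - s) * 0.3 * np.sin(10.0 * x).sum()
    return true_val + bias

def cost(s):
    """Assumed evaluation cost, growing steeply with fidelity."""
    return 0.1 + s ** 2

def trust_gain_per_cost(s):
    """Assumed trust model: higher fidelity is more trustworthy but pricier."""
    return s / cost(s)

def pareto_mask(Y):
    """Boolean mask of non-dominated rows of Y (all objectives maximised)."""
    mask = np.ones(len(Y), dtype=bool)
    for i in range(len(Y)):
        if mask[i]:
            dominated = np.all(Y >= Y[i], axis=1) & np.any(Y > Y[i], axis=1)
            mask[dominated] = False
    return mask

# Joint selection: candidates span input x and fidelity s, and the Pareto
# front is taken over (objective value, trust gain per evaluation cost).
X = rng.uniform(0.0, 1.0, size=(200, 2))          # input parameters
S = rng.uniform(0.0, 1.0, size=200)               # trust / fidelity parameter
Y = np.column_stack([
    [objective(x, s) for x, s in zip(X, S)],
    trust_gain_per_cost(S),
])

front = pareto_mask(Y)
print(f"{front.sum()} non-dominated candidates out of {len(Y)}")
i = np.flatnonzero(front)[0]
print("Example front point (x, s, objective, trust/cost):", X[i], S[i], Y[i])

In the joint variant described in the summary, a single Pareto selection decides both where to evaluate and at which fidelity; a sequential variant would instead choose the fidelity after the inputs have been selected, which is the comparison benchmarked in the paper.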