Hyperparameter Tuning of Neural Network for High-Dimensional Problems in the Case of Helmholtz Equation
Published in: Moscow University Physics Bulletin, Vol. 78, No. Suppl 1, pp. S243–S255
Main Authors:
Format: Journal Article
Language: English
Published: Moscow: Pleiades Publishing, 01-12-2023 (Springer Nature B.V.)
Summary: In this work, we study the effectiveness of common hyperparameter optimization (HPO) methods for physics-informed neural networks (PINNs), with an application to the multidimensional Helmholtz problem. The network was built with the PyTorch framework, without specialized PINN-oriented libraries. We investigate the effect of hyperparameters on the NN model's performance and conduct automatic hyperparameter optimization using different combinations of search algorithms and trial schedulers. As our HPO tool we chose Ray Tune, an open-source framework that provides a unified interface to many HPO packages. We consider two search algorithms, random search and a Bayesian method based on the tree-structured Parzen estimator (TPE) in the hyperopt and hpbandster implementations, together with the asynchronous successive halving (ASHA) early-stopping scheduler. For our problem, enabling the early-stopping algorithm speeds up HPO convergence more than switching from random search to the Bayesian method does.
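The summary describes the HPO setup only at a high level. The sketch below shows how such a combination of a TPE search algorithm and an ASHA scheduler might be wired together in Ray Tune; it assumes the Ray 2.x Tuner API (the metric-reporting call differs between Ray versions), and the trainable and search space are hypothetical placeholders, not the configuration used in the paper.

```python
# Minimal sketch, assuming Ray 2.x: TPE search (via hyperopt) + ASHA early stopping.
from ray import tune
from ray.air import session
from ray.tune.schedulers import ASHAScheduler
from ray.tune.search.hyperopt import HyperOptSearch


def train_pinn(config):
    # Placeholder objective: a real trainable would build the PINN in PyTorch,
    # sample collocation points for the Helmholtz residual, train for a number
    # of epochs, and report the loss after each one so ASHA can stop poor trials.
    for epoch in range(10):
        loss = config["lr"] * (epoch + 1) ** -0.5  # hypothetical stand-in loss
        session.report({"loss": loss})


# Hypothetical search space; the paper's actual hyperparameter ranges are not
# given in this record.
search_space = {
    "lr": tune.loguniform(1e-4, 1e-2),    # learning rate
    "width": tune.choice([32, 64, 128]),  # neurons per hidden layer
    "depth": tune.choice([3, 4, 5]),      # number of hidden layers
}

tuner = tune.Tuner(
    train_pinn,
    param_space=search_space,
    tune_config=tune.TuneConfig(
        metric="loss",
        mode="min",
        search_alg=HyperOptSearch(),              # TPE search via hyperopt
        scheduler=ASHAScheduler(grace_period=1),  # asynchronous successive halving
        num_samples=50,
    ),
)
results = tuner.fit()
print(results.get_best_result().config)
```

Swapping `HyperOptSearch()` for Ray Tune's default random search, or dropping the scheduler, reproduces the kind of algorithm/scheduler combinations the study compares.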
ISSN: 0027-1349, 1934-8460
DOI: 10.3103/S0027134923070263