Accelerating optimal power flow with GPUs: SIMD abstraction of nonlinear programs and condensed-space interior-point methods

Bibliographic Details
Published in: Electric Power Systems Research, Vol. 236, p. 110651
Main Authors: Shin, Sungho, Anitescu, Mihai, Pacaud, François
Format: Journal Article
Language:English
Published: Elsevier B.V., 01-11-2024
Description
Summary: This paper introduces a framework for solving alternating current optimal power flow (ACOPF) problems using graphics processing units (GPUs). While GPUs have demonstrated remarkable performance in various computing domains, their application to ACOPF has been limited by the challenges of porting sparse automatic differentiation (AD) and sparse linear solver routines to GPUs. We address these issues with two key strategies. First, we utilize a single-instruction, multiple-data (SIMD) abstraction of nonlinear programs. This approach enables the specification of model equations while preserving their parallelizable structure and, in turn, facilitates a parallel AD implementation. Second, we employ a condensed-space interior-point method (IPM) with an inequality relaxation. This technique condenses the Karush–Kuhn–Tucker (KKT) system into a positive definite system, whose key advantage is that the KKT matrix can be factorized without numerical pivoting, which has long hampered the parallelization of the IPM algorithm. By combining these strategies, we perform the majority of operations on GPUs while keeping the data resident in device memory. Comprehensive numerical benchmark results showcase the advantage of our approach. Remarkably, our implementations, MadNLP.jl and ExaModels.jl, running on NVIDIA GPUs achieve an order of magnitude speedup compared with state-of-the-art tools running on contemporary CPUs.
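To make the condensation step concrete, here is a minimal NumPy sketch of the idea described in the summary, not the authors' MadNLP.jl implementation: the indefinite KKT system in (dx, dy) is reduced to the condensed matrix K = W + JᵀΣJ, which is symmetric positive definite and therefore admits a Cholesky factorization with no numerical pivoting. All matrix sizes and values below are illustrative.

```python
import numpy as np

# Toy stand-ins for one IPM iteration (assumed shapes, not real OPF data):
#   W     - (convexified) Hessian of the Lagrangian, here diagonal and PD
#   J     - constraint Jacobian
#   Sigma - positive diagonal arising from slacks and the barrier parameter
rng = np.random.default_rng(0)
n, m = 6, 4
W = np.diag(rng.uniform(1.0, 2.0, size=n))
J = rng.standard_normal((m, n))
Sigma = np.diag(rng.uniform(0.5, 5.0, size=m))

# Condensed KKT matrix: symmetric positive definite by construction,
# so Cholesky succeeds without any pivoting (the GPU-friendly property).
K = W + J.T @ Sigma @ J
L = np.linalg.cholesky(K)

# Solve the original KKT system
#   W dx + J^T dy = -r1
#   J dx - Sigma^{-1} dy = -r2
# via the condensed system, then recover the multiplier step dy.
r1 = rng.standard_normal(n)
r2 = rng.standard_normal(m)
dx = np.linalg.solve(K, -r1 - J.T @ Sigma @ r2)
dy = Sigma @ (J @ dx + r2)
```

The paper's accuracy caveat also shows up in this sketch: forming JᵀΣJ squares the conditioning contribution of Σ, which is why the highlights note a practical precision limit around 10⁻⁴.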
• We present a framework for solving nonlinear programs (NLPs) on GPUs, utilizing a SIMD abstraction and a condensed-space interior-point method (IPM), specially designed for running on GPUs.
• Benchmark results show that the proposed GPU-based approach provides up to a 10x speedup compared to state-of-the-art CPU-based tools for large-scale ACOPF problems, and is particularly effective when the number of variables exceeds 20,000.
• While the method successfully accelerates computations, achieving a solution precision of 10⁻⁴, the increased condition number of the condensed KKT system limits the final accuracy. Further research is required to enhance precision to 10⁻⁸ for broader applications.
• The proposed methods are implemented in the open-source packages ExaModels.jl and MadNLP.jl.
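The SIMD abstraction mentioned in the highlights can be pictured as follows: every constraint of a given class is described by one scalar kernel plus index arrays, so the same instructions are mapped over all instances in parallel. This NumPy sketch is only an analogy for the idea, not the ExaModels.jl API; the kernel, data, and index names (`fr`, `to`) are invented for illustration.

```python
import numpy as np

def branch_residual(v, theta, g, fr, to):
    """One scalar kernel, evaluated for all branches at once.

    Toy active-power expression p_ij = g v_i^2 - g v_i v_j cos(theta_i - theta_j),
    vectorized over the branch index via the `fr`/`to` gather arrays.
    """
    vi, vj = v[fr], v[to]
    dt = theta[fr] - theta[to]
    return g * vi**2 - g * vi * vj * np.cos(dt)

# A tiny 3-bus, 2-branch example (illustrative numbers only).
v = np.array([1.0, 1.02, 0.98])       # bus voltage magnitudes
theta = np.array([0.0, 0.05, -0.03])  # bus voltage angles
g = np.array([5.0, 4.0])              # branch conductances
fr = np.array([0, 1])                 # "from" bus of each branch
to = np.array([1, 2])                 # "to" bus of each branch

p = branch_residual(v, theta, g, fr, to)  # same kernel for every branch
```

Because the model is stated as kernel-plus-indices rather than as an unstructured expression graph, derivatives of the whole constraint block can likewise be evaluated with one parallel pass, which is what makes the AD implementation GPU-friendly.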
ISSN:0378-7796
DOI:10.1016/j.epsr.2024.110651