Efficient FPGA Parallelization of Lipschitz Interpolation for Real-Time Decision-Making

Bibliographic Details
Published in: IEEE Transactions on Control Systems Technology, Vol. 30, No. 5, pp. 2163–2175
Main Authors: Nadales, J. M., Manzano, J. M., Barriga, A., Limon, D.
Format: Journal Article
Language:English
Published: New York: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 01-09-2022
Description
Summary: One of the main open challenges in the field of learning-based control is the design of computing architectures that can process data efficiently. This is of particular importance when time constraints must be met, as, for instance, in real-time decision-making systems operating at high frequencies or when a vast amount of data must be processed. In this respect, field-programmable gate array (FPGA)-based parallel processing architectures have been hailed as a potential solution to this problem. In this article, a low-level design methodology for the implementation of Lipschitz interpolation (LI) algorithms on FPGA platforms is presented. The proposed design procedure exploits the inherent parallelism of the LI algorithm and allows the user to optimize the area and energy resources of the resulting implementation. In addition, the proposed design allows the user to know in advance a tight bound on the error introduced by the FPGA's number representation format. The resulting implementation is therefore a highly parallelized, fast architecture with optimal use of resources and power consumption and a fixed numerical error bound. These properties match the desirable specifications of learning-based control devices. As an illustrative case study, the proposed algorithm and architecture have been used to learn a nonlinear model predictive control law applied to self-balance a two-wheel robot. The results show that computation times are reduced by several orders of magnitude when the proposed parallel architecture is employed instead of running the algorithm sequentially on an embedded ARM-CPU-based platform.
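
Since the summary describes the parallelism of the LI algorithm only at a high level, the following is a minimal Python sketch of the standard Lipschitz interpolation predictor, assuming the common set-membership form f̂(x) = ½(min_i(y_i + L·d(x, x_i)) + max_i(y_i − L·d(x, x_i))); the per-sample distance computations and the min/max reductions are the operations an FPGA architecture like the paper's can parallelize. The function name and the usage example are illustrative, not taken from the paper, and the paper's fixed-point FPGA formulation is not reproduced here.

```python
# Minimal sketch of a Lipschitz interpolation (LI) predictor, assuming the
# standard set-membership form from the LI literature; illustrative only.
import numpy as np

def li_predict(x, X, y, L):
    """Predict f(x) from samples (X, y) of a function with Lipschitz constant L.

    x : query point, shape (d,)
    X : sample inputs, shape (n, d)
    y : sample outputs, shape (n,)
    L : (an upper bound on) the Lipschitz constant
    """
    # Distance from the query to every stored sample. On an FPGA, each
    # distance can be computed by an independent processing element.
    dist = np.linalg.norm(X - x, axis=1)

    upper = np.min(y + L * dist)  # tightest upper bound on f(x)
    lower = np.max(y - L * dist)  # tightest lower bound on f(x)

    # The LI estimate is the midpoint of the two bounds; the min/max
    # operations parallelize naturally as reduction trees in hardware.
    return 0.5 * (upper + lower)

# Hypothetical usage: learn sin on [0, 2*pi], which is 1-Lipschitz.
rng = np.random.default_rng(0)
X = rng.uniform(0.0, 2.0 * np.pi, size=(200, 1))
y = np.sin(X).ravel()
print(li_predict(np.array([1.0]), X, y, L=1.0))  # close to sin(1.0)
```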
ISSN: 1063-6536
EISSN: 1558-0865
DOI: 10.1109/TCST.2021.3136616