Reinforcement Learning Approach to Autonomous PID Tuning
Published in: 2022 American Control Conference (ACC), pp. 2691-2696
Format: Conference Proceeding
Language: English
Published: American Automatic Control Council, 08-06-2022
Summary: Many industrial processes use proportional-integral-derivative (PID) controllers due to their practicality and often satisfactory performance. The proper controller parameters depend strongly on the operating conditions and process uncertainties. This dependence makes frequent tuning necessary in real-time control problems because of process drifts and changes in operating conditions. This study combines recent developments in computer science and control theory to address the tuning problem. It formulates PID tuning as a reinforcement learning task with constraints. The proposed scheme identifies an initial approximate step-response model and lets the agent learn the dynamics off-line from the model with minimal effort. After achieving satisfactory training performance on the model, the agent is fine-tuned on-line on the actual process to adapt to the real dynamics, thereby minimizing the training time on the real process and avoiding unnecessary wear, which can be beneficial for industrial applications. This sample-efficient method is applied to a pilot-scale multi-modal tank system. The performance of the method is demonstrated by setpoint tracking and disturbance regulation experiments.
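The two-stage idea in the abstract (identify a rough step-response model, then search for PID gains against that model before touching the real process) can be sketched in a few lines. This is a minimal illustration, not the paper's actual method: the first-order-plus-dead-time process parameters (K, tau, theta) are assumed values, and a simple hill-climbing search stands in for the constrained reinforcement learning agent.

```python
import random

def simulate_pid(kp, ki, kd=0.0, dt=0.1, steps=300, setpoint=1.0):
    """Simulate an assumed first-order-plus-dead-time (FOPDT) step-response
    model under PID control and return the integrated absolute error (IAE).
    Model: tau * dy/dt + y = K * u(t - theta), with K=1, tau=5, theta=0.5."""
    K, tau, theta = 1.0, 5.0, 0.5
    delay = int(theta / dt)
    u_hist = [0.0] * delay          # buffer implementing the input dead time
    y, integ, prev_err, iae = 0.0, 0.0, setpoint, 0.0
    for _ in range(steps):
        err = setpoint - y
        integ += err * dt
        deriv = (err - prev_err) / dt
        u = kp * err + ki * integ + kd * deriv
        u = max(-10.0, min(10.0, u))        # actuator saturation constraint
        u_hist.append(u)
        u_delayed = u_hist.pop(0)           # input applied after the dead time
        y += dt * (K * u_delayed - y) / tau # FOPDT dynamics (explicit Euler)
        iae += abs(err) * dt
        prev_err = err
    return iae

def tune_offline(episodes=200, seed=0):
    """Stand-in for the off-line learning stage: hill-climbing over (Kp, Ki)
    on the identified model, with reward = -IAE. Returns the best gains found
    and their cost; a real agent would then be fine-tuned on the plant."""
    rng = random.Random(seed)
    kp, ki = 1.0, 0.1                       # assumed initial gains
    best = simulate_pid(kp, ki)
    for _ in range(episodes):
        kp2 = max(0.0, kp + rng.gauss(0.0, 0.3))
        ki2 = max(0.0, ki + rng.gauss(0.0, 0.05))
        cost = simulate_pid(kp2, ki2)
        if cost < best:                     # keep the perturbation if it helps
            kp, ki, best = kp2, ki2, cost
    return kp, ki, best
```

Because the search runs entirely against the cheap surrogate model, arbitrarily many candidate gains can be evaluated without exciting or wearing the real process, which is the sample-efficiency argument the abstract makes.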
ISSN: 2378-5861
DOI: 10.23919/ACC53348.2022.9867687