Estimation and control using sampling‐based Bayesian reinforcement learning
Published in: IET Cyber-Physical Systems, Vol. 5, No. 1, pp. 127-135
Main Authors: , ,
Format: Journal Article
Language: English
Published: Southampton: The Institution of Engineering and Technology / John Wiley & Sons, Inc., 01-03-2020
Summary: Real-world autonomous systems operate under uncertainty about both their pose and dynamics. Autonomous control systems must simultaneously perform estimation and control to remain robust to changing dynamics and modelling errors. However, information-gathering actions often conflict with the optimal actions for reaching control objectives, requiring a trade-off between exploration and exploitation. The specific problem setting considered here is discrete-time non-linear systems with process noise, input constraints, and parameter uncertainty. This study frames the problem as a Bayes-adaptive Markov decision process and solves it online using Monte Carlo tree search with an unscented Kalman filter to account for process noise and parameter uncertainty. The method is compared with certainty-equivalent model predictive control and a tree search method that approximates the QMDP solution, providing insight into when information gathering is useful. Discrete-time simulations characterise performance over a range of process noise levels and bounds on the unknown parameters. An offline optimisation method selects the Monte Carlo tree search parameters without hand-tuning. In lieu of recursive feasibility guarantees, a probabilistic bounding heuristic is offered that increases the probability of keeping the state within a desired region. (A minimal code sketch of the belief-space planning loop described above follows this record.)
ISSN: 2398-3396
DOI: 10.1049/iet-cps.2019.0045
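The abstract describes online planning in a Bayes-adaptive MDP using Monte Carlo tree search over beliefs maintained by an unscented Kalman filter. The sketch below is a minimal illustration of that combination under assumed details, not the authors' implementation: the scalar toy system, noise levels, reward, and every function name (`ukf_step`, `mcts`, etc.) are hypothetical, and the coarse observation binning and UCB1 selection are simplifications of a full belief-tree search.

```python
import math
import numpy as np

# Hypothetical toy problem: scalar state x with an unknown dynamics
# parameter a, tracked jointly as the augmented state z = [x, a].
#   dynamics:    x' = a*x + u + w,   w ~ N(0, Q)
#   observation: y  = x + v,         v ~ N(0, R)
Q, R = 0.05, 0.1
ACTIONS = (-1.0, 0.0, 1.0)            # input-constrained action set

def f(z, u):                          # augmented dynamics (a is static)
    x, a = z
    return np.array([a * x + u, a])

def h(z):                             # measurement model
    return np.array([z[0]])

def sigma_points(mu, P, kappa=1.0):
    n = len(mu)
    L = np.linalg.cholesky((n + kappa) * P)
    pts = [mu] + [mu + L[:, i] for i in range(n)] + [mu - L[:, i] for i in range(n)]
    w = np.full(2 * n + 1, 0.5 / (n + kappa))
    w[0] = kappa / (n + kappa)
    return np.array(pts), w

def ukf_step(mu, P, u, y):
    """One unscented predict/update on the belief over z = [x, a]."""
    pts, w = sigma_points(mu, P)
    Zp = np.array([f(z, u) for z in pts])                    # predict
    mu_p = w @ Zp
    P_p = sum(wi * np.outer(d, d) for wi, d in zip(w, Zp - mu_p))
    P_p += np.diag([Q, 1e-6])                                # process noise
    pts2, w2 = sigma_points(mu_p, P_p)
    Y = np.array([h(z) for z in pts2])                       # update
    y_hat = w2 @ Y
    S = sum(wi * np.outer(d, d) for wi, d in zip(w2, Y - y_hat)) + R
    C = sum(wi * np.outer(dz, dy) for wi, dz, dy in zip(w2, pts2 - mu_p, Y - y_hat))
    K = C @ np.linalg.inv(S)
    mu_n = mu_p + K @ (np.atleast_1d(y) - y_hat)
    P_n = P_p - K @ S @ K.T
    return mu_n, 0.5 * (P_n + P_n.T)                         # keep symmetric

def reward(mu, u):                    # regulate x to 0, penalise effort
    return -mu[0] ** 2 - 0.1 * u ** 2

def mcts(mu0, P0, n_iter=300, depth=5, c=2.0, gamma=0.95):
    """UCB1 tree search over UKF beliefs: a bare-bones BAMDP planner."""
    stats = {}                        # history tuple -> {action: [N, Q]}

    def simulate(mu, P, hist, d):
        if d == 0:
            return 0.0
        node = stats.setdefault(hist, {a: [0, 0.0] for a in ACTIONS})
        total = sum(n for n, _ in node.values())

        def ucb(a):
            n, q = node[a]
            return float('inf') if n == 0 else q + c * math.sqrt(math.log(total) / n)

        a = max(ACTIONS, key=ucb)
        z = np.random.multivariate_normal(mu, P)             # sample from belief
        z2 = f(z, a) + np.array([np.random.normal(0, math.sqrt(Q)), 0.0])
        y = z2[0] + np.random.normal(0, math.sqrt(R))        # simulated observation
        mu2, P2 = ukf_step(mu, P, a, y)
        g = reward(mu, a) + gamma * simulate(mu2, P2, hist + ((a, round(float(y), 1)),), d - 1)
        n, q = node[a]
        node[a] = [n + 1, q + (g - q) / (n + 1)]             # running mean return
        return g

    for _ in range(n_iter):
        simulate(mu0, P0, (), depth)
    return max(ACTIONS, key=lambda a: stats[()][a][1])

# One planning call from an initial belief: x ~ N(1, 0.1), a ~ N(0.9, 0.25).
u = mcts(np.array([1.0, 0.9]), np.diag([0.1, 0.25]))
```

In a closed loop, one would alternate a call to `mcts` for the next input with a `ukf_step` on the measurement actually received; the paper's probabilistic bounding heuristic and offline tuning of the search parameters are not reflected in this sketch.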