Learning Koopman Dynamics for Safe Legged Locomotion with Reinforcement Learning-based Controller
Main Authors:
Format: Journal Article
Language: English
Published: 23-09-2024
Summary: Learning-based algorithms have demonstrated impressive performance in agile locomotion of legged robots. However, learned policies are often complex and opaque due to the black-box nature of learning algorithms, which hinders predictability and precludes guarantees on performance or safety. In this work, we develop a novel safe navigation framework that combines Koopman operators and model-predictive control (MPC). Our method adopts Koopman operator theory to learn a linear model of the dynamics of the underlying locomotion policy, which can be effectively learned with Dynamic Mode Decomposition (DMD). Because the learned model is linear, we can readily leverage standard MPC algorithms. Our framework is easy to implement and requires little prior knowledge, since it does not need access to the underlying dynamical system or control-theoretic techniques. We demonstrate that the learned linear dynamics predict the trajectories of legged robots better than baselines. In addition, we show that the proposed navigation framework achieves better safety, with fewer collisions, in challenging, dense environments with narrow passages.
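The fitting step the summary describes (learning linear dynamics with DMD) can be sketched in a few lines. This is a minimal illustration on a toy 2-D linear system, not the paper's actual setup: the true method lifts locomotion states through Koopman observables before fitting, and the dimensions and data here are assumptions for demonstration only.

```python
import numpy as np

# Minimal Dynamic Mode Decomposition (DMD) sketch: fit a linear operator A
# such that x_{t+1} ≈ A x_t from a trajectory of state snapshots.
# The 2-D system and trajectory length are illustrative assumptions.

rng = np.random.default_rng(0)

# Synthetic data from a known linear system, so recovery can be checked.
A_true = np.array([[0.9,  0.1],
                   [-0.1, 0.9]])
T = 50
X = np.empty((2, T))
X[:, 0] = rng.standard_normal(2)
for t in range(T - 1):
    X[:, t + 1] = A_true @ X[:, t]

# DMD least-squares fit over snapshot pairs (x_t, x_{t+1}): A = X1 @ pinv(X0).
X0, X1 = X[:, :-1], X[:, 1:]
A_dmd = X1 @ np.linalg.pinv(X0)

print(np.allclose(A_dmd, A_true, atol=1e-6))  # → True: operator recovered
```

Because the fitted model is linear, the one-step prediction `A_dmd @ x` can be rolled out over a horizon, which is what makes a standard linear MPC formulation directly applicable.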
DOI: 10.48550/arxiv.2409.14736