LVIS: Learning from Value Function Intervals for Contact-Aware Robot Controllers
Main Authors:
Format: Journal Article
Language: English
Published: 15-09-2018
Subjects:
Summary: Guided policy search is a popular approach for training controllers for high-dimensional systems, but it has a number of pitfalls. Non-convex trajectory optimization has local minima, and non-uniqueness in the optimal policy itself can mean that independently-optimized samples do not describe a coherent policy from which to train. We introduce LVIS, which circumvents the issue of local minima through global mixed-integer optimization and the issue of non-uniqueness through learning the optimal value function (or cost-to-go) rather than the optimal policy. To avoid the expense of solving the mixed-integer programs to full global optimality, we instead solve them only partially, extracting intervals containing the true cost-to-go from early termination of the branch-and-bound algorithm. These interval samples are used to weakly supervise the training of a neural net which approximates the true cost-to-go. Online, we use that learned cost-to-go as the terminal cost of a one-step model-predictive controller, which we solve via a small mixed-integer optimization. We demonstrate the LVIS approach on a cart-pole system with walls and a planar humanoid robot model and show that it can be applied to a fundamentally hard problem in feedback control: control through contact.
DOI: 10.48550/arxiv.1809.05802
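
To illustrate the weak-supervision step described in the summary above, the following is a minimal sketch, not the authors' code, of training a cost-to-go network from interval labels. The network architecture, the quadratic hinge loss, and the PyTorch dependency are assumptions, and the interval data here are random placeholders standing in for bounds extracted from early-terminated branch-and-bound solves.

```python
import torch
import torch.nn as nn

class CostToGoNet(nn.Module):
    """Small feedforward approximator of the optimal cost-to-go (assumed architecture)."""
    def __init__(self, state_dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, x):
        return self.net(x).squeeze(-1)

def interval_loss(pred, lower, upper):
    # Weak supervision: no penalty while the prediction lies inside
    # [lower, upper]; quadratic hinge penalty once it leaves the interval.
    below = torch.relu(lower - pred)
    above = torch.relu(pred - upper)
    return (below ** 2 + above ** 2).mean()

# Placeholder data: in LVIS these bounds would come from early-terminated
# branch-and-bound solves of the mixed-integer trajectory optimization.
state_dim = 4
states = torch.randn(256, state_dim)
lower = 5.0 * torch.rand(256)
upper = lower + 2.0 * torch.rand(256)

model = CostToGoNet(state_dim)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(200):
    opt.zero_grad()
    loss = interval_loss(model(states), lower, upper)
    loss.backward()
    opt.step()
```

Online, a trained network of this kind would supply the terminal cost of the one-step model-predictive controller mentioned in the summary; formulating and solving that small mixed-integer program is outside the scope of this sketch.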