Asymmetric self-play for automatic goal discovery in robotic manipulation
Main Authors:
Format: Journal Article
Language: English
Published: 13-01-2021
Online Access: https://arxiv.org/abs/2101.04882
Summary: We train a single, goal-conditioned policy that can solve many robotic manipulation tasks, including tasks with previously unseen goals and objects. We rely on asymmetric self-play for goal discovery, where two agents, Alice and Bob, play a game. Alice is asked to propose challenging goals and Bob aims to solve them. We show that this method can discover highly diverse and complex goals without any human priors. Bob can be trained with only sparse rewards, because the interaction between Alice and Bob results in a natural curriculum and Bob can learn from Alice's trajectory when it is relabeled as a goal-conditioned demonstration. Finally, our method scales, resulting in a single policy that can generalize to many unseen tasks such as setting a table, stacking blocks, and solving simple puzzles. Videos of the learned policy are available at https://robotics-self-play.github.io.
DOI: 10.48550/arxiv.2101.04882
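
The summary describes two mechanisms that make sparse-reward training workable: the Alice/Bob game that generates goals, and relabeling Alice's trajectory as a demonstration when Bob fails. A minimal sketch of that loop follows, assuming a toy 1-D grid environment and tabular policies in place of the paper's robotic setup; all names and constants here are illustrative, not the authors' implementation.

```python
"""Toy sketch of asymmetric self-play, assuming a 1-D grid stand-in.

Alice rolls out first and her final state is relabeled as Bob's goal. Bob is
trained from a sparse reward (success only if he reaches the goal exactly);
when he fails, Alice's own trajectory is relabeled as a goal-conditioned
demonstration and cloned.
"""
import random
from collections import defaultdict

GRID = 10           # states are integers 0..GRID-1
HORIZON = 8         # steps per episode
ACTIONS = (-1, +1)  # move left / move right

def rollout(policy, start, goal=None):
    """Run one episode and return the visited states."""
    s, traj = start, [start]
    for _ in range(HORIZON):
        s = max(0, min(GRID - 1, s + policy(s, goal)))
        traj.append(s)
        if goal is not None and s == goal:
            break
    return traj

# Bob's goal-conditioned policy: tabular preference per (state, goal, action).
bob_pref = defaultdict(float)

def bob_policy(s, g):
    """Epsilon-greedy over Bob's learned action preferences for goal g."""
    if random.random() < 0.2:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: bob_pref[(s, g, a)])

def alice_policy(s, g):
    """Placeholder Alice: wanders randomly. In the paper Alice is itself
    trained, rewarded for proposing goals that Bob fails to solve."""
    return random.choice(ACTIONS)

for episode in range(2000):
    # 1. Alice acts; her final state becomes the proposed goal.
    alice_traj = rollout(alice_policy, start=0)
    goal = alice_traj[-1]

    # 2. Bob attempts the goal under a sparse reward.
    bob_traj = rollout(bob_policy, start=0, goal=goal)

    if bob_traj[-1] == goal:
        # Reinforce Bob's own successful actions.
        for s, s_next in zip(bob_traj, bob_traj[1:]):
            if s_next != s:
                bob_pref[(s, goal, s_next - s)] += 1.0
    else:
        # 3. Relabel Alice's trajectory as a goal-conditioned demonstration
        #    and clone it, so Bob still learns when the sparse reward is zero.
        for s, s_next in zip(alice_traj, alice_traj[1:]):
            if s_next != s:
                bob_pref[(s, goal, s_next - s)] += 0.5
```

Because Alice here is untrained, her proposals never get harder; the natural curriculum described in the summary comes from training Alice adversarially, so that her goals track the frontier of what Bob can currently solve.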