Transferring Dexterous Manipulation from GPU Simulation to a Remote Real-World TriFinger

Bibliographic Details
Published in: 2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 11802-11809
Main Authors: Allshire, Arthur, Mittal, Mayank, Lodaya, Varun, Makoviychuk, Viktor, Makoviichuk, Denys, Widmaier, Felix, Wüthrich, Manuel, Bauer, Stefan, Handa, Ankur, Garg, Animesh
Format: Conference Proceeding
Language: English
Published: IEEE 23-10-2022
Description
Summary: In-hand manipulation of objects is an important capability for enabling robots to carry out tasks which demand high levels of dexterity. This work presents a robot systems approach to learning dexterous manipulation tasks involving moving objects to arbitrary 6-DoF poses. We show empirical benefits, both in simulation and in sim-to-real transfer, of using keypoint-based representations for object pose in policy observations and reward calculation to train a model-free reinforcement learning agent. By utilizing domain randomization strategies and large-scale training, we achieve a high success rate of 83% on a real TriFinger system, with a single policy able to perform grasping, ungrasping, and finger gaiting in order to achieve arbitrary poses within the workspace. We demonstrate that our policy can generalise to unseen objects, and that success rates can be further improved through finetuning. With the aim of assisting further research in learning in-hand manipulation, we provide a detailed exposition of our system and make our codebase available, along with checkpoints trained on billions of steps of experience, at https://s2r2-ig.github.io
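The keypoint-based representation mentioned in the abstract replaces a raw position-plus-quaternion pose with a set of points rigidly attached to the object, so that pose error reduces to ordinary Euclidean distances. A minimal sketch of this idea, assuming a cube manipuland and a negated mean corner-to-corner distance as the shaping term (the half-extent, function names, and exact reward shaping here are illustrative, not the paper's implementation):

```python
import numpy as np

def quat_rotate(q, v):
    # Rotate row vectors v (N, 3) by a unit quaternion q = (w, x, y, z).
    w, x, y, z = q
    R = np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])
    return v @ R.T

def cube_keypoints(pos, quat, half_extent=0.0325):
    # Eight cube corners, transformed into the world frame.
    corners = half_extent * np.array(
        [[sx, sy, sz] for sx in (-1, 1) for sy in (-1, 1) for sz in (-1, 1)],
        dtype=float,
    )
    return quat_rotate(quat, corners) + pos

def keypoint_reward(obj_pos, obj_quat, goal_pos, goal_quat):
    # Negative mean distance between corresponding current and goal keypoints;
    # this single term penalizes both translational and rotational error.
    kp = cube_keypoints(obj_pos, obj_quat)
    kp_goal = cube_keypoints(goal_pos, goal_quat)
    return -np.mean(np.linalg.norm(kp - kp_goal, axis=1))
```

Because each keypoint error is a plain distance, the same quantities can be fed to the policy as observations without the discontinuities that quaternion differences introduce.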
ISSN:2153-0866
DOI:10.1109/IROS47612.2022.9981458