Optimal Deep Learning for Robot Touch: Training Accurate Pose Models of 3D Surfaces and Edges

Bibliographic Details
Published in: IEEE Robotics & Automation Magazine, Vol. 27, No. 2, pp. 66-77
Main Authors: Lepora, Nathan F., Lloyd, John
Format: Journal Article
Language: English
Published: New York: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 01-06-2020
Description
Summary: This article illustrates the application of deep learning to robot touch by considering a basic yet fundamental capability: estimating the relative pose of part of an object in contact with a tactile sensor. We begin by surveying deep learning applied to tactile robotics, focusing on optical tactile sensors, which help to link touch and deep learning for vision. We then show how deep learning can be used to train accurate pose models of 3D surfaces and edges that are insensitive to nuisance variables, such as motion-dependent shear. This involves including representative motions as unlabeled perturbations of the training data and using Bayesian optimization of the network and training hyperparameters to find the most accurate models. Accurate estimation of the pose from touch will enable robots to safely and precisely control their physical interactions, facilitating a wide range of object exploration and manipulation tasks.
ISSN: 1070-9932
EISSN: 1558-223X
DOI: 10.1109/MRA.2020.2979658
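Illustrative note: the summary above describes tuning network and training hyperparameters with Bayesian optimization to obtain accurate pose models. The sketch below is not the authors' implementation; it is a minimal example of that general idea, assuming TensorFlow/Keras for a small convolutional pose regressor and scikit-optimize (gp_minimize) for the Gaussian-process search. The data, pose components, and search ranges are placeholder assumptions.

# Illustrative sketch only (not the authors' code): Bayesian optimization of a
# small CNN pose regressor. Assumes TensorFlow/Keras and scikit-optimize;
# the tactile images and pose labels below are random placeholders.
import numpy as np
import tensorflow as tf
from skopt import gp_minimize
from skopt.space import Integer, Real

rng = np.random.default_rng(0)
X = rng.random((512, 64, 64, 1)).astype("float32")      # placeholder 64x64 tactile images
y = rng.uniform(-1.0, 1.0, (512, 3)).astype("float32")  # placeholder pose labels (e.g., depth, roll, pitch)
X_tr, y_tr, X_val, y_val = X[:384], y[:384], X[384:], y[384:]

def build_model(n_filters, n_dense, dropout, lr):
    # Small convolutional regressor mapping a tactile image to 3 pose components.
    model = tf.keras.Sequential([
        tf.keras.layers.Conv2D(n_filters, 3, activation="elu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(2 * n_filters, 3, activation="elu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(n_dense, activation="elu"),
        tf.keras.layers.Dropout(dropout),
        tf.keras.layers.Dense(3),
    ])
    model.compile(optimizer=tf.keras.optimizers.Adam(lr), loss="mse")
    return model

# Search space over network and training hyperparameters (ranges are assumptions).
space = [
    Integer(8, 64, name="n_filters"),
    Integer(16, 256, name="n_dense"),
    Real(0.0, 0.5, name="dropout"),
    Real(1e-4, 1e-2, prior="log-uniform", name="lr"),
]

def objective(params):
    n_filters, n_dense, dropout, lr = params
    model = build_model(int(n_filters), int(n_dense), float(dropout), float(lr))
    model.fit(X_tr, y_tr, epochs=3, batch_size=32, verbose=0)
    # Validation mean-squared pose error is the quantity the optimizer minimizes.
    return float(model.evaluate(X_val, y_val, verbose=0))

result = gp_minimize(objective, space, n_calls=15, random_state=0)
print("best hyperparameters:", result.x, "validation MSE:", result.fun)

In this sketch the objective returns validation mean-squared pose error, so each gp_minimize call proposes a hyperparameter setting expected to reduce that error, mirroring the model-selection step outlined in the summary.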