What Makes Pre-Trained Visual Representations Successful for Robust Manipulation?
Main Authors:
Format: Journal Article
Language: English
Published: 03-11-2023
Subjects:
Online Access: Get full text
Summary: Inspired by the success of transfer learning in computer vision, roboticists have investigated visual pre-training as a means to improve the learning efficiency and generalization ability of policies learned from pixels. To that end, past work has favored large object interaction datasets, such as first-person videos of humans completing diverse tasks, in pursuit of manipulation-relevant features. Although this approach improves the efficiency of policy learning, it remains unclear how reliable these representations are in the presence of distribution shifts that arise commonly in robotic applications. Surprisingly, we find that visual representations designed for manipulation and control tasks do not necessarily generalize under subtle changes in lighting and scene texture or the introduction of distractor objects. To understand what properties do lead to robust representations, we compare the performance of 15 pre-trained vision models under different visual appearances. We find that emergent segmentation ability is a strong predictor of out-of-distribution generalization among ViT models. The rank order induced by this metric is more predictive than metrics that have previously guided generalization research within computer vision and machine learning, such as downstream ImageNet accuracy, in-domain accuracy, or shape-bias as evaluated by cue-conflict performance. We test this finding extensively on a suite of distribution shifts in ten tasks across two simulated manipulation environments. On the ALOHA setup, segmentation score predicts real-world performance after offline training with 50 demonstrations.
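The abstract does not spell out how "emergent segmentation ability" is scored; a minimal sketch of one plausible way to compute such a score is shown below, assuming a Jaccard-style overlap between a thresholded ViT attention map and a ground-truth object mask. The function name, threshold choice, and toy data are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def emergent_segmentation_score(attention_map, object_mask, quantile=0.6):
    """Illustrative proxy: Jaccard index between the most-attended patches
    of a ViT attention map and a ground-truth object mask. The exact
    protocol used in the paper may differ."""
    # Keep the top (1 - quantile) fraction of patches as predicted foreground.
    threshold = np.quantile(attention_map, quantile)
    predicted = attention_map >= threshold
    target = object_mask.astype(bool)
    intersection = np.logical_and(predicted, target).sum()
    union = np.logical_or(predicted, target).sum()
    return intersection / union if union > 0 else 0.0

# Toy usage with random arrays standing in for real attention maps and masks.
rng = np.random.default_rng(0)
attn = rng.random((14, 14))          # e.g. CLS-token attention over a 14x14 patch grid
mask = rng.random((14, 14)) > 0.5    # placeholder object mask at patch resolution
print(emergent_segmentation_score(attn, mask))
```

Under this reading, ranking the 15 pre-trained models by such a score and comparing that ranking to their out-of-distribution task performance would be how the "rank order induced by this metric" is evaluated against alternatives like ImageNet accuracy or shape-bias.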
DOI: 10.48550/arxiv.2312.12444