Improving Long-Horizon Imitation Through Instruction Prediction
Main Authors: | , , |
Format: | Journal Article |
Language: | English |
Published: | 21-06-2023 |
Subjects: | |
Online Access: | Get full text |
Summary: | Complex, long-horizon planning and its combinatorial nature pose steep
challenges for learning-based agents. Difficulties in such settings are
exacerbated in low-data regimes, where overfitting stifles generalization and
compounding errors hurt accuracy. In this work, we explore an often-unused
source of auxiliary supervision: language. Inspired by recent advances in
transformer-based models, we train agents with an instruction prediction loss
that encourages learning temporally extended representations operating at a
high level of abstraction. Concretely, we demonstrate that instruction modeling
significantly improves performance in planning environments when training with
a limited number of demonstrations on the BabyAI and Crafter benchmarks. In
further analysis, we find that instruction modeling is most important for tasks
that require complex reasoning, while understandably offering smaller gains in
environments that require simple plans. More details and code can be found at
https://github.com/jhejna/instruction-prediction. |
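The summary describes combining an imitation objective with an auxiliary loss for predicting language instructions. The sketch below is an illustrative reading of that idea, not the paper's exact formulation: a behavioral-cloning cross-entropy on expert actions is summed with a weighted cross-entropy on instruction tokens. The function names, the per-token averaging, and the `weight` coefficient are all assumptions made for this example.

```python
import math

def cross_entropy(probs, target_idx):
    """Negative log-likelihood of the target index under a distribution."""
    return -math.log(probs[target_idx])

def combined_loss(action_probs, expert_action,
                  instr_token_probs, instr_tokens, weight=0.5):
    """Behavioral-cloning loss plus a weighted instruction-prediction loss.

    action_probs: predicted distribution over actions at one timestep.
    expert_action: index of the demonstrated action.
    instr_token_probs: per-position predicted distributions over instruction tokens.
    instr_tokens: indices of the ground-truth instruction tokens.
    weight: hypothetical coefficient balancing the auxiliary loss.
    """
    bc_loss = cross_entropy(action_probs, expert_action)
    # Average the token-level losses so the auxiliary term does not
    # grow with instruction length (an assumption, for illustration).
    instr_loss = sum(
        cross_entropy(p, t) for p, t in zip(instr_token_probs, instr_tokens)
    ) / len(instr_tokens)
    return bc_loss + weight * instr_loss
```

Under this reading, the auxiliary term pushes the agent's representation to encode which high-level instruction the current trajectory is carrying out, which is one plausible mechanism for the "temporally extended representations" the summary mentions.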
DOI: | 10.48550/arxiv.2306.12554 |