LoHoRavens: A Long-Horizon Language-Conditioned Benchmark for Robotic Tabletop Manipulation
Format: Journal Article
Language: English
Published: 18-10-2023
Summary: The convergence of embodied agents and large language models (LLMs) has brought significant advances to embodied instruction following. In particular, the strong reasoning capabilities of LLMs make it possible for robots to perform long-horizon tasks without expensive annotated demonstrations. However, public benchmarks for testing the long-horizon reasoning capabilities of language-conditioned robots across diverse scenarios are still missing. To fill this gap, this work focuses on tabletop manipulation and releases a simulation benchmark, *LoHoRavens*, which covers long-horizon reasoning aspects spanning color, size, space, arithmetic, and reference. Furthermore, long-horizon manipulation with LLMs raises a key modality-bridging problem: how to incorporate observation feedback during robot execution into the LLM's closed-loop planning, an issue that has received little attention in prior work. We investigate two methods for bridging the modality gap: caption generation and a learnable interface, which provide the LLM with explicit and implicit observation feedback, respectively. These methods serve as the two baselines for the proposed benchmark. Experiments show that both methods struggle to solve some tasks, indicating that long-horizon manipulation remains challenging for current popular models. We expect the proposed public benchmark and baselines to help the community develop better models for long-horizon tabletop manipulation tasks.
DOI: 10.48550/arxiv.2310.12020
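
For readers unfamiliar with what "explicit observation feedback via caption generation" could look like in practice, the sketch below illustrates one possible closed-loop planning loop in which scene captions are fed back to an LLM planner. This is not the authors' implementation; the `robot`, `captioner`, and `llm` interfaces and all function names are hypothetical placeholders used only to make the idea concrete.

```python
# Illustrative sketch only: a caption-feedback planning loop for long-horizon
# tabletop manipulation. All interfaces below are assumed, not taken from the paper.

def closed_loop_episode(instruction, llm, captioner, robot, max_steps=20):
    """Run one long-horizon episode, feeding scene captions back to the LLM planner."""
    history = []  # (action, caption) pairs accumulated across steps
    for _ in range(max_steps):
        image = robot.observe()               # current tabletop observation
        caption = captioner.describe(image)   # explicit feedback: scene -> text
        prompt = build_prompt(instruction, caption, history)
        action = llm.generate(prompt).strip() # e.g. "put the red block in the blue bowl"
        if action.lower() == "done":
            break
        robot.execute(action)                 # a low-level policy carries out the step
        history.append((action, caption))
    return history


def build_prompt(instruction, caption, history):
    """Assemble a plain-text prompt from the task, the current caption, and past steps."""
    lines = [f"Task: {instruction}", f"Current scene: {caption}"]
    for i, (act, cap) in enumerate(history, 1):
        lines.append(f"Step {i}: {act} (scene was: {cap})")
    lines.append("Next step:")
    return "\n".join(lines)
```

The learnable-interface baseline mentioned in the summary would presumably replace the text caption with implicit visual features passed to the LLM, rather than routing all feedback through generated language as in this sketch.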