Vision Language Models are In-Context Value Learners
Main Authors: |  |
Format: | Journal Article |
Language: | English |
Published: | 07-11-2024 |
Summary: | Predicting temporal progress from visual trajectories is important for intelligent robots that can learn, adapt, and improve. However, learning such a progress estimator, or temporal value function, across different tasks and domains requires both a large amount of diverse data and methods that can scale and generalize. To address these challenges, we present Generative Value Learning (GVL), a universal value function estimator that leverages the world knowledge embedded in vision-language models (VLMs) to predict task progress. Naively asking a VLM to predict values for a video sequence performs poorly due to the strong temporal correlation between successive frames. Instead, GVL poses value estimation as a temporal ordering problem over shuffled video frames; this seemingly more challenging task encourages VLMs to more fully exploit their underlying semantic and temporal grounding capabilities to differentiate frames by their perceived task progress, consequently producing significantly better value predictions. Without any robot- or task-specific training, GVL can predict effective values in context, both zero-shot and few-shot, for more than 300 distinct real-world tasks across diverse robot platforms, including challenging bimanual manipulation tasks. Furthermore, we demonstrate that GVL permits flexible multi-modal in-context learning via examples from heterogeneous tasks and embodiments, such as human videos. The generality of GVL enables various downstream applications pertinent to visuomotor policy learning, including dataset filtering, success detection, and advantage-weighted regression, all without any model training or finetuning. |
DOI: | 10.48550/arxiv.2411.04549 |
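
A minimal sketch of the shuffled-frame value prediction idea described in the summary. The `query_vlm` callable, its signature, and the prompt wording are illustrative assumptions rather than the paper's exact interface or prompt.

```python
import random

def predict_values(frames, task_description, query_vlm):
    """Estimate per-frame task progress (0-100) for an ordered video."""
    # Shuffle the frames so the model cannot exploit the strong temporal
    # correlation between successive frames and simply emit monotone values.
    order = list(range(len(frames)))
    random.shuffle(order)
    shuffled = [frames[i] for i in order]

    prompt = (
        f"Task: {task_description}\n"
        "The following frames are shown in random order. For each frame, "
        "output the percentage of task completion it depicts (0-100)."
    )
    # Hypothetical VLM call: accepts text plus images and returns one
    # number per shuffled frame.
    shuffled_values = query_vlm(prompt, shuffled)

    # Undo the shuffle to recover a value trajectory in temporal order.
    values = [0.0] * len(frames)
    for pos, frame_idx in enumerate(order):
        values[frame_idx] = shuffled_values[pos]
    return values
```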
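The summary also names downstream uses such as dataset filtering, success detection, and advantage-weighted regression. The sketch below shows one plausible way the predicted values could feed these, assuming a one-step value difference as the advantage and an illustrative temperature; neither detail is taken from the paper.

```python
import numpy as np

def awr_weights(values, beta=0.1):
    """Convert a per-frame value trajectory into per-transition AWR weights."""
    values = np.asarray(values, dtype=np.float64)
    advantages = np.diff(values)          # progress gained by each transition
    weights = np.exp(advantages / beta)   # exponentiated advantage, as in AWR
    return weights / weights.sum()        # normalize for a weighted BC loss

def filter_episodes(episode_values, min_final_value=90.0):
    """Keep only episodes whose final predicted progress suggests success."""
    return [v for v in episode_values if v[-1] >= min_final_value]
```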