GRASP: A novel benchmark for evaluating language GRounding And Situated Physics understanding in multimodal language models
Main Authors:
Format: Journal Article
Language: English
Published: 15-11-2023
Subjects:
Online Access: Get full text
Summary: This paper presents GRASP, a novel benchmark for evaluating the language grounding and physical understanding capabilities of video-based multimodal large language models (LLMs). The evaluation follows a two-tier approach built on Unity simulations. The first tier tests language grounding by assessing a model's ability to relate simple textual descriptions to visual information. The second tier evaluates the model's understanding of "Intuitive Physics" principles, such as object permanence and continuity. In addition to releasing the benchmark, we use it to evaluate several state-of-the-art multimodal LLMs. Our evaluation reveals significant shortcomings in the language grounding and intuitive physics capabilities of these models. Although they exhibit at least some grounding ability, particularly for colors and shapes, that ability depends heavily on the prompting strategy. Meanwhile, all models perform at or below the 50% chance level on the Intuitive Physics tests, whereas human subjects answer correctly 80% of the time on average. These limitations underline the importance of benchmarks like GRASP for monitoring the progress of future models in developing these competencies.
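The 50% chance level in the summary implies a binary (e.g., yes/no) question format for the Intuitive Physics tier. The sketch below illustrates how such scoring might work; it is not the paper's actual evaluation harness, and `query_video_llm` is a hypothetical stand-in for whatever API the evaluated model exposes.

```python
# Minimal sketch of scoring binary intuitive-physics questions against
# the 50% chance baseline. Assumes one yes/no question per simulated clip.

def query_video_llm(video_path: str, question: str) -> str:
    """Hypothetical model call: returns the model's free-text answer."""
    raise NotImplementedError("plug in the model under evaluation")

def normalize(answer: str) -> str:
    """Map a free-text answer onto 'yes'/'no'; anything else counts as wrong."""
    a = answer.strip().lower()
    if a.startswith("yes"):
        return "yes"
    if a.startswith("no"):
        return "no"
    return "invalid"

def accuracy(items: list[tuple[str, str, str]]) -> float:
    """items: (video_path, question, gold_label) triples, gold in {'yes', 'no'}."""
    correct = sum(
        normalize(query_video_llm(video, question)) == gold
        for video, question, gold in items
    )
    return correct / len(items)

# With two balanced answer options, random guessing yields 50% accuracy,
# which is the chance level the models are compared against.
```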
DOI: 10.48550/arXiv.2311.09048