VidComposition: Can MLLMs Analyze Compositions in Compiled Videos?
Main Authors:
Format: Journal Article
Language: English
Published: 17-11-2024
Summary: The advancement of Multimodal Large Language Models (MLLMs) has enabled significant progress in multimodal understanding, expanding their capacity to analyze video content. However, existing evaluation benchmarks for MLLMs primarily focus on abstract video comprehension and lack a detailed assessment of their ability to understand video compositions: the nuanced interpretation of how visual elements combine and interact within highly compiled video contexts. We introduce VidComposition, a new benchmark specifically designed to evaluate the video composition understanding capabilities of MLLMs using carefully curated compiled videos and cinematic-level annotations. VidComposition includes 982 videos with 1,706 multiple-choice questions covering compositional aspects such as camera movement, camera angle, shot size, narrative structure, and character actions and emotions. Our comprehensive evaluation of 33 open-source and proprietary MLLMs reveals a significant performance gap between human and model capabilities, highlighting the limitations of current MLLMs in understanding complex, compiled video compositions and offering insights into areas for further improvement. The leaderboard and evaluation code are available at https://yunlong10.github.io/VidComposition/.
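The record does not describe the paper's evaluation protocol, but multiple-choice benchmarks of this kind are typically scored by matching the option letter a model produces against an answer key. The sketch below illustrates that kind of scoring; the `extract_choice` helper, the A-D option format, and the toy data are illustrative assumptions, not VidComposition's released evaluation code.

```python
import re

def extract_choice(response: str) -> str | None:
    """Pull the first standalone option letter (A-D) out of a model reply."""
    m = re.search(r"\b([A-D])\b", response.strip().upper())
    return m.group(1) if m else None

def accuracy(answer_keys: list[str], responses: list[str]) -> float:
    """Fraction of items where the extracted letter matches the key."""
    hits = sum(extract_choice(r) == k for k, r in zip(answer_keys, responses))
    return hits / len(answer_keys)

if __name__ == "__main__":
    # Toy stand-ins for VidComposition's 1,706 question/answer pairs.
    keys = ["B", "D", "A"]
    replies = ["B", "The camera tilts up, so the answer is (D).", "C"]
    print(f"accuracy = {accuracy(keys, replies):.3f}")  # 2/3 correct -> 0.667
```

Free-form model replies often bury the chosen letter in prose, which is why the sketch extracts the first standalone A-D token rather than comparing whole strings.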
DOI: 10.48550/arxiv.2411.10979