VideoScore: Building Automatic Metrics to Simulate Fine-grained Human Feedback for Video Generation
Main Authors: | |
---|---|
Format: | Journal Article |
Language: | English |
Published: | 21-06-2024 |
Online Access: | Get full text |
Summary: | Recent years have witnessed great advances in video generation. However,
the development of automatic video metrics has lagged significantly behind: none
of the existing metrics can provide reliable scores for generated videos. The
main barrier is the lack of a large-scale human-annotated dataset. In this paper,
we release VideoFeedback, the first large-scale dataset containing human-provided
multi-aspect scores for 37.6K synthesized videos from 11 existing video generative
models. We train VideoScore (initialized from Mantis) on VideoFeedback to enable
automatic video quality assessment. Experiments show that the Spearman correlation
between VideoScore and humans can reach 77.1 on VideoFeedback-test, beating the
prior best metrics by about 50 points (see the sketch below this record). Further
results on the held-out benchmarks EvalCrafter, GenAI-Bench, and VBench show that
VideoScore correlates with human judges consistently and far more strongly than
other metrics. Given these results, we believe VideoScore can serve as a great
proxy for human raters to (1) rate different video models to track progress and
(2) simulate fine-grained human feedback in Reinforcement Learning from Human
Feedback (RLHF) to improve current video generation models. |
DOI: | 10.48550/arxiv.2406.15252 |
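
The headline number in the summary is a Spearman rank correlation between an automatic metric's per-video scores and human ratings. As a minimal sketch of that computation (an illustration only, not the authors' released evaluation code; the ratings and scores below are made-up placeholders), using `scipy.stats.spearmanr`:

```python
# Minimal sketch: Spearman rank correlation between an automatic video
# metric's scores and human ratings. Illustrative only -- the values
# below are made-up placeholders, not VideoFeedback data.
from scipy.stats import spearmanr

# One entry per generated video.
human_ratings = [4.0, 2.5, 3.0, 5.0, 1.5]       # human-provided quality scores
metric_scores = [0.82, 0.41, 0.55, 0.93, 0.20]  # automatic metric outputs

rho, p_value = spearmanr(human_ratings, metric_scores)

# Correlations of this kind are commonly reported scaled by 100,
# so rho = 0.771 would appear as the 77.1 cited in the summary.
print(f"Spearman correlation: {100 * rho:.1f} (p = {p_value:.3g})")
```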