MatViX: Multimodal Information Extraction from Visually Rich Articles
Main Authors:
Format: Journal Article
Language: English
Published: 27-10-2024
Online Access: Get full text
Summary: Multimodal information extraction (MIE) is crucial for scientific literature, where valuable data is often spread across text, figures, and tables. In materials science, extracting structured information from research articles can accelerate the discovery of new materials. However, the multimodal nature and complex interconnections of scientific content pose challenges for traditional text-based methods. We introduce MatViX, a benchmark consisting of 324 full-length research articles and 1,688 complex structured JSON files, carefully curated by domain experts. These JSON files are extracted from text, tables, and figures in full-length documents, providing a comprehensive challenge for MIE. We introduce an evaluation method that assesses both curve similarity and the alignment of hierarchical structures. Additionally, we benchmark vision-language models (VLMs) capable of processing long contexts and multimodal inputs in a zero-shot manner, and show that using a specialized model (DePlot) can improve performance in extracting curves. Our results demonstrate significant room for improvement in current models. Our dataset and evaluation code are available at https://matvix-bench.github.io/.
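The record does not spell out how MatViX scores extracted curves, so the sketch below is a minimal illustration of curve-similarity evaluation in general, not the paper's actual metric: it resamples a predicted and a gold curve onto a shared x-grid and turns a range-normalized mean absolute error into a score in [0, 1]. The function name `curve_similarity` and all parameter choices are hypothetical.

```python
import numpy as np

def curve_similarity(pred, gold, n_samples=100):
    """Illustrative (hypothetical) curve-matching score, 1.0 = identical.

    pred, gold: arrays of shape (n, 2) holding (x, y) points.
    Both curves are linearly interpolated onto the gold curve's x-range,
    then compared with a mean absolute error normalized by the gold
    curve's y-range so the score is scale-free.
    """
    pred = np.asarray(pred, dtype=float)
    gold = np.asarray(gold, dtype=float)
    # Common x-grid spanning the gold curve.
    xs = np.linspace(gold[:, 0].min(), gold[:, 0].max(), n_samples)
    # np.interp requires x-sorted points.
    pred = pred[np.argsort(pred[:, 0])]
    gold = gold[np.argsort(gold[:, 0])]
    y_pred = np.interp(xs, pred[:, 0], pred[:, 1])
    y_gold = np.interp(xs, gold[:, 0], gold[:, 1])
    # Normalize by the gold y-range (fall back to 1.0 for flat curves).
    y_range = float(y_gold.max() - y_gold.min()) or 1.0
    error = np.mean(np.abs(y_pred - y_gold)) / y_range
    return max(0.0, 1.0 - error)

# Example: a noisy prediction that tracks the gold curve scores near 1.
gold = np.column_stack([np.linspace(0, 10, 50), np.linspace(0, 10, 50) ** 2])
pred = gold + np.random.default_rng(0).normal(0.0, 1.0, gold.shape)
print(f"similarity: {curve_similarity(pred, gold):.3f}")
```

Resampling onto a common grid is the key step in any such metric: it makes curves extracted at different point densities (e.g., from a plot digitizer such as DePlot versus hand-curated gold data) directly comparable point by point.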
DOI: 10.48550/arxiv.2410.20494