MedRG: Medical Report Grounding with Multi-modal Large Language Model
Format: Journal Article
Language: English
Published: 10-04-2024
Online Access: Get full text
Summary: Medical report grounding identifies the image regions most relevant to
a given phrase query, a critical task in medical image analysis and
radiological diagnosis. Prevailing visual grounding approaches, however,
require key phrases to be extracted from medical reports manually, imposing a
substantial burden on both system efficiency and physicians. In this paper, we
introduce Medical Report Grounding (MedRG), an end-to-end framework that uses
a multi-modal large language model to predict the key phrase: a unique token,
BOX, is added to the vocabulary, and its embedding unlocks detection
capabilities. A vision encoder-decoder then jointly decodes this hidden
embedding and the input medical image to generate the corresponding grounding
box. Experimental results validate the effectiveness of MedRG, which surpasses
existing state-of-the-art medical phrase grounding methods. To our knowledge,
this study is the first exploration of the medical report grounding task.
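The BOX-token mechanism described in the abstract can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: the token id `BOX_TOKEN_ID`, the mean-pooled image fusion, and the sigmoid (cx, cy, w, h) box parameterization are all hypothetical choices made for the sketch.

```python
import numpy as np

BOX_TOKEN_ID = 32000  # hypothetical id assigned to the added BOX token

def extract_box_embedding(token_ids, hidden_states):
    """Pick the LLM's last-layer hidden state at the BOX token position.

    token_ids: (seq_len,) int array of generated token ids
    hidden_states: (seq_len, d_model) float array of last-layer states
    """
    positions = np.where(token_ids == BOX_TOKEN_ID)[0]
    assert len(positions) == 1, "expected exactly one BOX token"
    return hidden_states[positions[0]]  # (d_model,)

def predict_box(box_embedding, image_features, W):
    """Toy stand-in for the vision decoder: fuse the BOX embedding with
    pooled image features and project to a normalized box via a sigmoid."""
    fused = box_embedding + image_features.mean(axis=0)  # (d_model,)
    logits = W @ fused                                   # (4,)
    return 1.0 / (1.0 + np.exp(-logits))                 # (cx, cy, w, h) in (0, 1)

# Demo with random stand-in tensors.
rng = np.random.default_rng(0)
d = 16
ids = np.array([5, 9, BOX_TOKEN_ID, 2])
hidden = rng.normal(size=(4, d))   # LLM hidden states
img = rng.normal(size=(10, d))     # image patch features
W = rng.normal(size=(4, d))        # box projection head

emb = extract_box_embedding(ids, hidden)
box = predict_box(emb, img, W)
print(box.shape)  # (4,)
```

In the paper's actual pipeline the box head is a trained vision encoder-decoder conditioned on the image, not a single linear projection; the sketch only shows how a special token's hidden embedding can be routed out of the language model to drive detection.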
DOI: 10.48550/arxiv.2404.06798