A Concept-Based Explainability Framework for Large Multimodal Models
Main Authors:
Format: Journal Article
Language: English
Published: 12-06-2024
Summary: Large multimodal models (LMMs) combine unimodal encoders and large language models (LLMs) to perform multimodal tasks. Despite recent advances in the interpretability of these models, the internal representations of LMMs remain largely a mystery. In this paper, we present a novel framework for the interpretation of LMMs. We propose a dictionary learning based approach applied to token representations, where the elements of the learned dictionary correspond to our proposed concepts. We show that these concepts are semantically well grounded in both vision and text, and we therefore refer to them as "multimodal concepts". We evaluate the learned concepts both qualitatively and quantitatively, and show that the extracted multimodal concepts are useful for interpreting the representations of test samples. Finally, we evaluate the disentanglement between different concepts and the quality of grounding the concepts visually and textually. Our implementation is publicly available.
DOI: 10.48550/arxiv.2406.08074
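The abstract describes learning a dictionary over token representations so that each dictionary atom acts as a concept. The sketch below illustrates that general idea with off-the-shelf dictionary learning; the matrix X, the number of concepts, and the sparsity penalty are illustrative placeholders, and the paper's exact formulation and hyperparameters may differ.

```python
import numpy as np
from sklearn.decomposition import DictionaryLearning

# Placeholder for token representations extracted from an LMM:
# rows are token activations from some layer, columns are hidden dimensions.
rng = np.random.default_rng(0)
X = rng.standard_normal((1000, 768))

# Learn a small dictionary whose atoms play the role of "concepts".
# n_components (number of concepts) and alpha (sparsity penalty) are
# illustrative choices, not values taken from the paper.
learner = DictionaryLearning(
    n_components=20,
    alpha=1.0,
    max_iter=200,
    transform_algorithm="lasso_lars",
    random_state=0,
)
codes = learner.fit_transform(X)   # sparse concept activations per token
concepts = learner.components_     # dictionary atoms, shape (20, 768)

# A test token's representation can then be interpreted by the concepts
# it activates most strongly.
token_codes = learner.transform(X[:1])
top_concepts = np.argsort(-np.abs(token_codes[0]))[:3]
print("Most active concepts for the first token:", top_concepts)
```

In this kind of setup, each atom would then be grounded by inspecting the image regions and words whose tokens activate it most, which is the multimodal grounding the abstract refers to.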