Beyond Concept Bottleneck Models: How to Make Black Boxes Intervenable?
| Field | Value |
|---|---|
| Main Authors | |
| Format | Journal Article |
| Language | English |
| Published | 24-01-2024 |
| Summary | Recently, interpretable machine learning has re-explored concept bottleneck models (CBMs). An advantage of this model class is the user's ability to intervene on predicted concept values, affecting the downstream output. In this work, we introduce a method to perform such concept-based interventions on pretrained neural networks, which are not interpretable by design, given only a small validation set with concept labels. Furthermore, we formalise the notion of intervenability as a measure of the effectiveness of concept-based interventions and leverage this definition to fine-tune black boxes. Empirically, we explore the intervenability of black-box classifiers on synthetic tabular and natural image benchmarks. We focus on backbone architectures of varying complexity, from simple, fully connected neural nets to Stable Diffusion. We demonstrate that the proposed fine-tuning improves intervention effectiveness and often yields better-calibrated predictions. To showcase the practical utility of our techniques, we apply them to deep chest X-ray classifiers and show that fine-tuned black boxes are more intervenable than CBMs. Lastly, we establish that our methods are still effective under vision-language-model-based concept annotations, alleviating the need for a human-annotated validation set. |
| DOI | 10.48550/arxiv.2401.13544 |
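
The summary above describes performing concept-based interventions on a pretrained black box using only a small concept-labelled validation set. The PyTorch sketch below is a rough illustration of how such an intervention can be wired up, not the authors' exact procedure: a linear probe is fitted from the network's internal activations to concepts, and an intervention nudges the activation until the probe matches the user-edited concept values before re-running the prediction head. The split `f(g(x))`, the names `fit_probe`, `intervene`, `val_x`, `val_c`, and all hyperparameters are hypothetical placeholders.

```python
import torch
import torch.nn as nn

def fit_probe(g, val_x, val_c, epochs=200, lr=1e-2):
    """Fit a linear probe from black-box activations to concept logits
    on a small concept-labelled validation set (binary concepts in {0, 1})."""
    with torch.no_grad():
        z = g(val_x)                               # activations of the pretrained backbone g
    probe = nn.Linear(z.shape[1], val_c.shape[1])
    opt = torch.optim.Adam(probe.parameters(), lr=lr)
    bce = nn.BCEWithLogitsLoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = bce(probe(z), val_c)                # concept-prediction loss on the validation set
        loss.backward()
        opt.step()
    return probe

def intervene(f, probe, z, c_edited, steps=50, lr=0.1, lam=1.0):
    """Edit the activation z so the probe agrees with the user-edited
    concepts c_edited, then re-run the black-box head f on the result."""
    z0 = z.detach()
    z_prime = z0.clone().requires_grad_(True)
    opt = torch.optim.Adam([z_prime], lr=lr)
    bce = nn.BCEWithLogitsLoss()
    for _ in range(steps):
        opt.zero_grad()
        # match the edited concepts while staying close to the original activation
        loss = bce(probe(z_prime), c_edited) + lam * (z_prime - z0).pow(2).mean()
        loss.backward()
        opt.step()
    with torch.no_grad():
        return f(z_prime)                          # updated downstream prediction
```

Under this framing, one plausible way to quantify intervenability is the expected reduction in the downstream predictive loss when predicted concepts are replaced by ground-truth ones, averaged over the validation set; maximising such a quantity would provide a fine-tuning signal for the black box. The abstract describes a measure of this kind, but the exact formulation and optimisation procedure are those of the paper.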