DeepMiner: Discovering Interpretable Representations for Mammogram Classification and Explanation
Main Authors:
Format: Journal Article
Language: English
Published: 18-08-2021
Subjects:
Online Access: Get full text
Summary: We propose DeepMiner, a framework to discover interpretable representations in deep neural networks and to build explanations for medical predictions. By probing convolutional neural networks (CNNs) trained to classify cancer in mammograms, we show that many individual units in the final convolutional layer of a CNN respond strongly to diseased-tissue concepts specified by the BI-RADS lexicon. After expert annotation of the interpretable units, our proposed method is able to generate explanations for CNN mammogram classification that are consistent with ground-truth radiology reports on the Digital Database for Screening Mammography. We show that DeepMiner not only enables a better understanding of the nuances of CNN classification decisions but also possibly discovers new visual knowledge relevant to medical diagnosis.
DOI: 10.48550/arxiv.1805.12323
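
The summary describes probing individual units in the final convolutional layer of a trained CNN and surfacing, for each unit, the images that activate it most strongly, so that experts can annotate which units correspond to BI-RADS concepts. Below is a minimal sketch of that probing step only, assuming a PyTorch classifier; the model, layer choice, and data loader are placeholders, not the authors' released implementation.

```python
# Sketch: record per-unit responses from the final convolutional layer of a trained
# CNN and, for each unit, rank the input images that activate it most strongly.
# The ResNet-18 backbone, 'layer4' hook point, and DataLoader are assumptions.
import torch
import torchvision.models as models
from torch.utils.data import DataLoader

model = models.resnet18(weights=None)                 # stand-in for a mammogram classifier
model.fc = torch.nn.Linear(model.fc.in_features, 2)   # assumed binary output (e.g., benign/malignant)
model.eval()

activations = {}

def hook(_module, _inputs, output):
    # Spatially average each unit's feature map -> one scalar response per unit per image.
    activations["final_conv"] = output.mean(dim=(2, 3)).detach()

# 'layer4' is the last convolutional block in ResNet-18; adapt to your architecture.
model.layer4.register_forward_hook(hook)

def top_images_per_unit(loader: DataLoader, k: int = 8) -> torch.Tensor:
    """For every unit, return indices of the k images with the highest mean response."""
    responses = []
    with torch.no_grad():
        for images, _labels in loader:
            model(images)
            responses.append(activations["final_conv"])
    responses = torch.cat(responses)              # shape: (num_images, num_units)
    topk = responses.topk(k, dim=0).indices       # shape: (k, num_units)
    return topk.t()                               # one row of image indices per unit
```

In the workflow the summary outlines, experts would then inspect each unit's top-ranked images to decide whether the unit encodes an interpretable diseased-tissue concept before those units are used to explain the classifier's predictions.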