Minimal Explanations in ReLU-based Neural Network via Abduction

Bibliographic Details
Published in: 2020 International Conference on Advanced Computer Science and Information Systems (ICACSIS), pp. 1-6
Main Authors: Abraham, Ariel Miki; Saptawijaya, Ari; Damanik, Raja O. P.
Format: Conference Proceeding
Language: English
Published: IEEE, 17-10-2020
Description
Summary: Abduction is a well-known reasoning approach for computing plausible explanations of an observation. It has recently been employed to explain machine learning predictions of samples from a data set by generating subset-minimal or cardinality-minimal explanations with respect to features. In this paper, we study some complexity properties of such minimal explanations when explaining predictions of neural networks. This paper also extends existing work by proposing a randomized subset-minimal procedure as a strategy for computing subset-minimal explanations. The experimental results on a number of benchmarks show that the resulting explanations are generally smaller than subset-minimal ones, while this strategy is not as expensive as computing cardinality-minimal explanations. It thus serves as a trade-off between the existing strategies of cardinality- and subset-minimal explanations.
DOI: 10.1109/ICACSIS51025.2020.9263197
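
The abstract describes the randomized subset-minimal strategy only at a high level. The Python sketch below illustrates one plausible reading of it, not the authors' actual procedure: features are visited in a random order, and a feature is dropped from the explanation whenever the prediction is still entailed without it. The entailment check here is a brute-force enumeration over a small discrete feature domain for the sake of a self-contained example; for the ReLU networks studied in the paper, such a check would instead be delegated to a constraint-based verifier. The toy network weights and all function names are illustrative assumptions.

import itertools
import random

import numpy as np


def relu_net_predict(x, W1, b1, W2, b2):
    # Class prediction of a tiny two-layer ReLU network (illustrative only).
    h = np.maximum(0.0, W1 @ x + b1)
    return int(np.argmax(W2 @ h + b2))


def entails(fixed, x, domain, predict):
    # True if every completion of the non-fixed features keeps the prediction.
    # Brute force over a finite domain; a ReLU-network verifier would replace this.
    target = predict(x)
    free = [i for i in range(len(x)) if i not in fixed]
    for values in itertools.product(domain, repeat=len(free)):
        z = x.copy()
        z[free] = values
        if predict(z) != target:
            return False
    return True


def randomized_subset_minimal(x, domain, predict, rng):
    # Drop features in a random order, keeping only those needed for entailment.
    explanation = set(range(len(x)))
    order = list(explanation)
    rng.shuffle(order)
    for i in order:
        candidate = explanation - {i}
        if entails(candidate, x, domain, predict):
            explanation = candidate  # feature i is not needed for the prediction
    return explanation


if __name__ == "__main__":
    rng = random.Random(0)
    # Hypothetical weights of a 3-input, 2-class ReLU network.
    W1 = np.array([[1.0, -1.0, 0.0], [0.0, 1.0, 1.0]])
    b1 = np.array([0.0, -0.5])
    W2 = np.array([[1.0, 0.0], [0.0, 1.0]])
    b2 = np.zeros(2)
    predict = lambda z: relu_net_predict(z, W1, b1, W2, b2)

    x = np.array([1.0, 0.0, 1.0])  # instance to explain (binary features)
    domain = [0.0, 1.0]            # finite domain for the brute-force check
    expl = randomized_subset_minimal(x, domain, predict, rng)
    print("subset-minimal explanation (feature indices):", sorted(expl))

Different random orders can terminate at different subset-minimal explanations, so repeating the search with several seeds and keeping the smallest result is one way a randomized strategy could trade a moderate amount of extra computation for smaller explanations, which is consistent with the trade-off described in the abstract.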