Learning to Explain with Complemental Examples
Format: Journal Article
Language: English
Published: 04-12-2018
Summary: This paper addresses the generation of explanations with visual examples. Given an input sample, we build a system that not only classifies it into a specific category but also outputs linguistic explanations and a set of visual examples that render the decision interpretable. Focusing on the complementarity of the multimodal information, i.e., the linguistic and the visual examples, we attempt to achieve it by maximizing the interaction information, which provides a natural definition of complementarity from an information-theoretic viewpoint. We propose a novel framework for generating complemental explanations, in which the joint distribution of the variables to explain and those to be explained is parameterized by three different neural networks: a predictor, a linguistic explainer, and an example selector. The explanation models are trained collaboratively to maximize the interaction information, ensuring that the generated explanations are complemental to each other with respect to the target. The results of experiments conducted on several datasets demonstrate the effectiveness of the proposed method.
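
The summary leans on interaction information as its definition of complementarity. For reference, the standard (McGill) definition for three random variables is given below; this is the textbook form, written independently of the paper, and sign conventions vary across the literature.

```latex
% Interaction information of three random variables X, Y, Z
% (McGill's convention; some authors flip the sign).
\[
  I(X; Y; Z) \;=\; I(X; Y \mid Z) - I(X; Y)
             \;=\; I(X; Y, Z) - I(X; Y) - I(X; Z)
\]
% A positive value means Y and Z are synergistic about X: together they
% carry more information about X than the sum of their individual
% contributions -- the sense in which a linguistic explanation and a set
% of visual examples can be "complemental" for a prediction.
```

Under this reading, if one writes the prediction, the linguistic explanation, and the selected examples as y, e, and s (notation assumed here, not taken from the paper), maximizing I(y; e; s) pushes e and s to be jointly, rather than redundantly, informative about the prediction.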
DOI: 10.48550/arxiv.1812.01280
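
The abstract describes three networks trained collaboratively to maximize interaction information. The sketch below is a minimal, hypothetical PyTorch rendering of that setup, not the authors' released code: every class name, dimension, and the training objective is an assumption, and the objective uses a crude surrogate for the interaction information (auxiliary decoders whose cross-entropies stand in for the conditional entropies), not the bound the paper actually optimizes.

```python
# Hypothetical sketch of the three-network parameterization named in the
# abstract: a predictor p(y|x), a linguistic explainer q(e|x,y), and an
# example selector r(s|x,y). All names/shapes here are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

FEAT_DIM, NUM_CLASSES = 128, 10   # assumed input-feature and label sizes
VOCAB, NUM_EXAMPLES = 50, 20      # assumed phrase and candidate-example pools

class Predictor(nn.Module):
    """p(y|x): classifies an input feature vector into a category."""
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(FEAT_DIM, NUM_CLASSES)
    def forward(self, x):
        return self.fc(x)  # class logits

class LinguisticExplainer(nn.Module):
    """q(e|x, y): scores candidate linguistic explanations."""
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(FEAT_DIM + NUM_CLASSES, VOCAB)
    def forward(self, x, y_onehot):
        return self.fc(torch.cat([x, y_onehot], dim=-1))  # phrase logits

class ExampleSelector(nn.Module):
    """r(s|x, y): scores candidate visual examples."""
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(FEAT_DIM + NUM_CLASSES, NUM_EXAMPLES)
    def forward(self, x, y_onehot):
        return self.fc(torch.cat([x, y_onehot], dim=-1))  # example logits

def interaction_surrogate(x, y, explainer, selector, dec_joint, dec_e, dec_s):
    """Heuristic surrogate for I(y; e; s) = I(y; e,s) - I(y; e) - I(y; s).
    Decoder cross-entropies approximate conditional entropies, so up to the
    constant H(y) the interaction information is roughly ce_e + ce_s -
    ce_joint. This is a heuristic, not a proper variational bound."""
    y_onehot = F.one_hot(y, NUM_CLASSES).float()
    e = F.gumbel_softmax(explainer(x, y_onehot), hard=True)  # sampled phrase
    s = F.gumbel_softmax(selector(x, y_onehot), hard=True)   # sampled example
    ce_joint = F.cross_entropy(dec_joint(torch.cat([e, s], dim=-1)), y)
    ce_e = F.cross_entropy(dec_e(e), y)
    ce_s = F.cross_entropy(dec_s(s), y)
    return ce_e + ce_s - ce_joint

predictor, explainer, selector = Predictor(), LinguisticExplainer(), ExampleSelector()
dec_joint = nn.Linear(VOCAB + NUM_EXAMPLES, NUM_CLASSES)  # recovers y from (e, s)
dec_e = nn.Linear(VOCAB, NUM_CLASSES)                     # recovers y from e alone
dec_s = nn.Linear(NUM_EXAMPLES, NUM_CLASSES)              # recovers y from s alone
opt_main = torch.optim.Adam(list(predictor.parameters())
                            + list(explainer.parameters())
                            + list(selector.parameters()), lr=1e-3)
opt_dec = torch.optim.Adam(list(dec_joint.parameters()) + list(dec_e.parameters())
                           + list(dec_s.parameters()), lr=1e-3)

x = torch.randn(32, FEAT_DIM)             # dummy feature batch
y = torch.randint(0, NUM_CLASSES, (32,))  # dummy labels

# Step 1: fit the auxiliary decoders on detached samples; they minimize
# their own cross-entropies so the entropy estimates stay meaningful.
y_onehot = F.one_hot(y, NUM_CLASSES).float()
with torch.no_grad():
    e = F.gumbel_softmax(explainer(x, y_onehot), hard=True)
    s = F.gumbel_softmax(selector(x, y_onehot), hard=True)
dec_loss = (F.cross_entropy(dec_joint(torch.cat([e, s], dim=-1)), y)
            + F.cross_entropy(dec_e(e), y) + F.cross_entropy(dec_s(s), y))
opt_dec.zero_grad(); dec_loss.backward(); opt_dec.step()

# Step 2: collaborative update of the three explanation networks --
# minimize the classification loss while maximizing the surrogate.
opt_main.zero_grad()
loss = F.cross_entropy(predictor(x), y) \
       - interaction_surrogate(x, y, explainer, selector, dec_joint, dec_e, dec_s)
loss.backward()
opt_main.step()
print(f"loss = {loss.item():.4f}")
```

Alternating the decoder step with the main step keeps the two roles separate: the decoders chase accurate entropy estimates, while the explainer and selector are rewarded when their outputs are jointly informative about y but individually uninformative, which is one plausible way to operationalize the synergy the abstract describes.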