A survey of visual analytics for Explainable Artificial Intelligence methods


Bibliographic Details
Published in: Computers & Graphics, Vol. 102, pp. 502–520
Main Authors: Alicioglu, Gulsum, Sun, Bo
Format: Journal Article
Language:English
Published: Oxford: Elsevier Ltd, 01-02-2022
Description
Summary: Deep learning (DL) models have achieved impressive performance in various domains such as medicine, finance, and autonomous vehicle systems, aided by advances in computing power and technologies. However, due to the black-box structure of DL models, the decisions of these models often need to be explained to end users. Explainable Artificial Intelligence (XAI) provides explanations of black-box models, revealing their behavior and underlying decision-making mechanisms through tools, techniques, and algorithms. Visualization techniques help present model and prediction explanations in a more understandable, explainable, and interpretable way. This survey reviews current trends and challenges in visual analytics for interpreting DL models through XAI methods, and presents future research directions in this area. We reviewed the literature along two dimensions: model usage and visual approaches. We addressed several research questions based on our findings, then discussed missing points, research gaps, and potential future research directions. This survey provides guidelines for developing better interpretations of neural networks through XAI methods in the field of visual analytics.

Highlights:
• A comprehensive survey of visual analytics for interpreting neural networks, particularly work adopting explainable artificial intelligence (XAI) methods.
• We reviewed the literature based on model usage and visual approaches.
• We identified visual approaches commonly used to illustrate XAI methods for various types of data and machine learning models; however, a generic approach is still needed for the field.
• We listed several future directions, including data manipulation, scalability and bias in data representation, and generalizable real-time visualizations integrating XAI.
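To make concrete what "explaining a black-box model" means in the XAI methods this survey covers, here is a minimal sketch (not from the paper; model, function names, and toy values are illustrative assumptions) of perturbation-based feature attribution: the model is queried only through its prediction function, and each input feature's importance is estimated by how much small perturbations to that feature change the output. Such per-feature scores are exactly the kind of explanation that visual analytics tools then render as saliency maps or bar charts.

```python
import numpy as np

def perturbation_saliency(f, x, eps=1e-4):
    """Estimate |df/dx_i| for each feature by central finite differences.

    Treats f strictly as a black box: only forward evaluations are used,
    no access to internals or gradients is assumed.
    """
    scores = np.zeros_like(x, dtype=float)
    for i in range(x.size):
        x_plus, x_minus = x.copy(), x.copy()
        x_plus[i] += eps
        x_minus[i] -= eps
        scores[i] = (f(x_plus) - f(x_minus)) / (2 * eps)
    return np.abs(scores)

# Toy "black-box" classifier: a sigmoid over a fixed weight vector.
# (In practice f would be a trained neural network's predict function.)
w = np.array([0.5, -2.0, 0.1])
black_box = lambda x: 1.0 / (1.0 + np.exp(-np.dot(w, x)))

x0 = np.zeros(3)                      # input point to explain
sal = perturbation_saliency(black_box, x0)
ranking = np.argsort(-sal)            # most influential features first
print(ranking)                        # feature 1 (weight -2.0) dominates
```

The per-feature scores in `sal` are what a visual analytics interface would map to color intensity or bar height, turning the attribution into an interpretable explanation for end users.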
ISSN: 0097-8493; 1873-7684
DOI: 10.1016/j.cag.2021.09.002