Personalizing the explanation extraction in Intelligent Decision Support Systems
Published in: 2021 16th Iberian Conference on Information Systems and Technologies (CISTI), pp. 1-6
Main Authors:
Format: Conference Proceeding
Language: English
Published: AISTI, 23-06-2021
Summary: The use of black-box models compromises the adoption of Decision Support Systems because they typically do not allow the decision maker to understand how decisions are issued by the system. Even with the growth of the Explainable Artificial Intelligence area, little has been done to personalize the explanations generated in the context of these systems. This article presents an approach to customize explanations of decisions using a graph to mediate inferences. Proofs of concept and analyses of simulated decisions are presented. The results suggest that the approach gave the systems the ability to issue satisfactory explanations, customizing them for simulated user profiles while maintaining reasonable levels of accuracy.
DOI: 10.23919/CISTI52073.2021.9476588
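The summary describes personalizing explanations by using a graph to mediate inferences between a decision and the user. The sketch below is a hypothetical illustration of that general idea, not the authors' actual method: the graph, feature names, profiles, and concepts are all invented for the example. It links features behind a decision to candidate explanation concepts, then filters those concepts by a simulated user profile.

```python
# Hypothetical sketch: graph-mediated personalization of explanations.
# All names (features, concepts, profiles) are illustrative assumptions,
# not taken from the paper.

# Edges map decision features to candidate explanation concepts.
explanation_graph = {
    "income": ["financial capacity", "debt-to-income ratio"],
    "credit_history": ["past repayment behavior"],
    "age": ["account maturity"],
}

# Each simulated profile lists the concepts that user type cares about.
profiles = {
    "technical": {"debt-to-income ratio", "past repayment behavior"},
    "lay": {"financial capacity"},
}

def personalize(decision_features, profile):
    """Walk the graph from the features behind a decision and keep
    only the concepts relevant to the given user profile."""
    wanted = profiles[profile]
    selected = []
    for feature in decision_features:
        for concept in explanation_graph.get(feature, []):
            if concept in wanted:
                selected.append(concept)
    return selected

print(personalize(["income", "credit_history"], "technical"))
# ['debt-to-income ratio', 'past repayment behavior']
print(personalize(["income", "credit_history"], "lay"))
# ['financial capacity']
```

In this toy version, the same decision yields a detailed explanation for a technical profile and a simpler one for a lay profile, which is the kind of customization the abstract attributes to the approach.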