Self-explanatory error checking capability for classifier-based Decision Support Systems
Published in: 2022 IEEE Latin American Conference on Computational Intelligence (LA-CCI), pp. 1-6
Format: Conference Proceeding
Language: English
Published: IEEE, 23-11-2022
Online Access: Get full text
Summary: The eXplainable Artificial Intelligence field emerged to solve, albeit partially, the need to explain opaque intelligent models. However, even when intelligent decision support systems employ explainability techniques, it is still up to the decision maker to inspect those explanations and to perceive any problems contained in them. This work proposes an approach to imbue classifier-based Decision Support Systems with the capability of self-detecting and explaining inference errors. The hypothesis is that fostering such self-awareness might improve system use, aiding the Decision Maker in perceiving problems with proposed solutions and thus selecting better choices. For the studied datasets, experimental results showed that the approach was effective in detecting over 60% of decision inference errors, improving accuracy by 20% in the best cases when all detected errors were corrected.
DOI: 10.1109/LA-CCI54402.2022.9981853
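This record reproduces only the abstract, so the paper's actual mechanism for self-detecting inference errors is not described here. Purely as an illustrative sketch, one plausible realization of the idea is a meta-classifier trained to predict when a base decision model is wrong, using out-of-fold mistakes as labels. Every model choice, variable name, and dataset below is an assumption made for demonstration, not the authors' method.

```python
# Hypothetical sketch of self-error-detection for a classifier-based DSS.
# Assumptions throughout: the paper's real method is not given in this record.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_predict, train_test_split

X, y = load_breast_cancer(return_X_y=True)  # stand-in dataset, an assumption
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Base decision model of the DSS.
base = RandomForestClassifier(n_estimators=200, random_state=0)

# Out-of-fold predictions on the training set, so the error labels are not
# contaminated by the base model memorizing its own training data.
oof_proba = cross_val_predict(base, X_train, y_train, cv=5,
                              method="predict_proba")
oof_pred = oof_proba.argmax(axis=1)
error_labels = (oof_pred != y_train).astype(int)  # 1 = base decision was wrong

base.fit(X_train, y_train)

# Meta-classifier ("detector"): given the original features plus the base
# model's class probabilities, predict whether the proposed decision is wrong.
detector = RandomForestClassifier(n_estimators=200, class_weight="balanced",
                                  random_state=0)
detector.fit(np.hstack([X_train, oof_proba]), error_labels)

# At decision time: propose a decision and flag it if it looks like an error.
test_proba = base.predict_proba(X_test)
test_pred = test_proba.argmax(axis=1)
flagged = detector.predict(np.hstack([X_test, test_proba])).astype(bool)

true_errors = test_pred != y_test
print(f"base accuracy: {(test_pred == y_test).mean():.3f}")
print(f"true errors detected: {(flagged & true_errors).sum()} of {true_errors.sum()}")
print(f"decisions flagged for review: {flagged.sum()} of {len(y_test)}")
```

In the abstract's framing, a flagged decision would additionally come with an explanation so the decision maker can judge whether to override it; correcting every correctly detected error is what would yield accuracy gains of the kind the summary reports.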