HiTZ@Antidote: Argumentation-driven Explainable Artificial Intelligence for Digital Medicine

Bibliographic Details
Main Authors: Agerri, Rodrigo, Alonso, Iñigo, Atutxa, Aitziber, Berrondo, Ander, Estarrona, Ainara, Garcia-Ferrero, Iker, Goenaga, Iakes, Gojenola, Koldo, Oronoz, Maite, Perez-Tejedor, Igor, Rigau, German, Yeginbergenova, Anar
Format: Journal Article
Language: English
Published: 9 June 2023
Online Access: https://doi.org/10.48550/arxiv.2306.06029
Summary: Providing high-quality explanations for AI predictions based on machine learning is a challenging and complex task. To work well, it requires, among other factors: selecting an appropriate level of generality or specificity for the explanation; considering assumptions about how familiar the explanation's beneficiary is with the AI task in question; referring to the specific elements that contributed to the decision; making use of additional knowledge (e.g. expert evidence) that might not be part of the prediction process; and providing evidence that supports negative hypotheses. Finally, the system needs to formulate the explanation in a clearly interpretable, and possibly convincing, way. Given these considerations, ANTIDOTE fosters an integrated vision of explainable AI, in which low-level characteristics of the deep learning process are combined with higher-level schemes characteristic of human argumentation. ANTIDOTE will exploit cross-disciplinary competences in deep learning and argumentation to support a broader and innovative view of explainable AI, where the need for high-quality explanations in clinical case deliberation is critical. As a first result of the project, we publish the Antidote CasiMedicos dataset to facilitate research on explainable AI in general, and on argumentation in the medical domain in particular.
DOI: 10.48550/arxiv.2306.06029
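
For readers who want to experiment with the Antidote CasiMedicos dataset mentioned in the summary, the sketch below shows one way it could be loaded with the Hugging Face datasets library. The hub identifier "HiTZ/casimedicos-exp" is an assumption for illustration only; consult the project's release page for the actual distribution of the data.

    # Minimal sketch: loading the Antidote CasiMedicos dataset with the
    # Hugging Face `datasets` library. The hub identifier below is an
    # assumption for illustration; replace it with the ID given on the
    # project's release page.
    from datasets import load_dataset

    casimedicos = load_dataset("HiTZ/casimedicos-exp")  # assumed hub ID

    # Inspect the available splits and the fields of one example.
    print(casimedicos)
    print(casimedicos["train"][0].keys())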