The Who in XAI: How AI Background Shapes Perceptions of AI Explanations
Main Authors: Upol Ehsan, Samir Passi, Q. Vera Liao, Larry Chan, I-Hsiang Lee, Michael Muller, Mark O. Riedl
Format: Journal Article
Language: English
Published: 05-03-2024
Summary: ACM CHI 2024. Explainability of AI systems is critical for users to take informed actions. Understanding "who" opens the black-box of AI is just as important as opening it. We conduct a mixed-methods study of how two different groups--people with and without AI background--perceive different types of AI explanations. Quantitatively, we share user perceptions along five dimensions. Qualitatively, we describe how AI background can influence interpretations, elucidating the differences through lenses of appropriation and cognitive heuristics. We find that (1) both groups showed unwarranted faith in numbers for different reasons and (2) each group found value in different explanations beyond their intended design. Carrying critical implications for the field of XAI, our findings showcase how AI-generated explanations can have negative consequences despite best intentions and how they could lead to harmful manipulation of trust. We propose design interventions to mitigate these risks.
DOI: 10.48550/arxiv.2107.13509