Temporal Evolution of Trust in Artificial Intelligence-Supported Decision-Making
Published in: Proceedings of the Human Factors and Ergonomics Society Annual Meeting, Vol. 67, No. 1, pp. 1936-1941
Main Authors: , ,
Format: Journal Article
Language: English
Published: Los Angeles, CA: SAGE Publications, 01-09-2023
Summary: AI is set to take over some tasks within the decision space that have traditionally been reserved for humans. In turn, human decision-makers interacting with AI systems must rationalize AI outputs, and they may have difficulty forming trust in such AI-generated information. Although a variety of analytical methods have provided some insight into human trust in AI, a more comprehensive understanding of trust may be gained from generative theories that capture its temporal evolution. Therefore, an open-system modeling approach that represents trust as a function of time with a single probability distribution can potentially improve the modeling of human trust in an AI system. Results of this study could improve machine behaviors that may help steer a human's preference toward a more Bayesian-optimal rationality, which is useful in stressful decision-making scenarios.
ISSN: 1071-1813, 2169-5067
DOI: 10.1177/21695067231193672
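The summary describes representing a human's trust in an AI system as a single probability distribution that evolves over time. The paper's actual open-system formulation is not given in this record, so the Python sketch below is purely illustrative: it assumes a simple Beta-Bernoulli updating scheme, and the TrustState class, its update method, and the example outcome sequence are hypothetical names introduced here, not taken from the paper.

```python
# Illustrative sketch only (not the paper's model): trust in an AI system is
# represented as a Beta(alpha, beta) distribution over the probability that the
# AI's output is reliable, updated after every interaction, giving a simple
# picture of how trust could evolve over time.

from dataclasses import dataclass


@dataclass(frozen=True)
class TrustState:
    alpha: float = 1.0  # pseudo-count of trust-confirming outcomes
    beta: float = 1.0   # pseudo-count of trust-violating outcomes

    @property
    def mean_trust(self) -> float:
        """Expected trust level implied by the current distribution."""
        return self.alpha / (self.alpha + self.beta)

    def update(self, ai_was_correct: bool) -> "TrustState":
        """Bayesian update of the trust distribution after one AI recommendation."""
        return TrustState(
            alpha=self.alpha + (1.0 if ai_was_correct else 0.0),
            beta=self.beta + (0.0 if ai_was_correct else 1.0),
        )


if __name__ == "__main__":
    # Hypothetical sequence of AI outcomes observed by the human over time.
    outcomes = [True, True, False, True, True, True, False, True]
    state = TrustState()
    for t, outcome in enumerate(outcomes, start=1):
        state = state.update(outcome)
        print(f"t={t}: mean trust = {state.mean_trust:.2f}")
```

A Beta distribution is used here only because it makes the single-distribution, time-indexed view of trust easy to demonstrate; the open-system model referenced in the summary may differ substantially.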