Trustworthiness Perceptions of Machine Learning Algorithms: The Influence of Confidence Intervals


Bibliographic Details
Published in: 2024 IEEE 4th International Conference on Human-Machine Systems (ICHMS), pp. 1 - 6
Main Authors: Alarcon, Gene M., Jessup, Sarah A., Meyers, Scott K., Johnson, Dexter, Bennette, Walter D.
Format: Conference Proceeding
Language: English
Published: IEEE 15-05-2024
Description
Summary: Insufficient research has investigated the impact of machine learning models on end-users' trust. This study aims to bridge that gap by examining differences in psychological perceptions of trust between two machine learning models. Participants (N = 130) were recruited online and completed an image-binning monitoring task with either an uncalibrated classification (UC) model or a calibrated classification (CC) model that provided confidence intervals for its decisions. The UC model was highly confident regardless of accuracy, whereas the CC model's confidence was better calibrated to its accuracy. Results revealed that participants performed better on the task in the CC condition. Additionally, performance perceptions, purpose perceptions, and reliance intentions increased over time in the CC condition. However, there were no differences in process perceptions between conditions. Calibrated confidence intervals displayed by CC models have been shown to be an effective means of increasing transparency and enhancing our understanding of trust in machines.
DOI: 10.1109/ICHMS59971.2024.10555722