Understanding Affective Trust in AI: The Effects of Perceived Benevolence
| Main Author: | |
|---|---|
| Format: | Dissertation |
| Language: | English |
| Published: | ProQuest Dissertations & Theses, 01-01-2022 |
| Subjects: | |
| Online Access: | Get full text |
| Summary: | The primary objective of this research was to gain an understanding of affective trust in AI, that is, how comfortable individuals feel with various AI applications. This dissertation tested a model of affective trust in AI grounded in interpersonal trust theories, with a focus on the effects of the perceived benevolence of AI, an overlooked factor in AI trust research. In Study 1a, online survey participants evaluated 20 AI applications with single-item measures. In Study 1b, four AI applications were evaluated with multi-item measures. Perceived benevolence was significantly and positively associated with affective trust, over and above cognitive trust and familiarity, in 21 of the 24 AI applications tested. Confirmatory factor analysis suggested four factors, supporting the theory that cognitive trust and affective trust in AI are distinct factors. The secondary objective was to test the utility of manipulating the perceived benevolence of AI. In Study 2, online survey participants were randomly assigned to one of two groups: one group evaluated 10 AI applications described as "augmented intelligence" that "collaborates with" a person, while the other evaluated the exact same applications described as "artificial intelligence." The augmentation manipulation did not matter; there were no significant direct or indirect effects on benevolence or affective trust. These results imply that "augmented intelligence" positioning has no significant effect on affective trust, counter to practitioners' beliefs. In Study 3, online survey participants were randomly assigned to one of two groups: one received benevolence messaging (a message informing the participant that the AI was intended for human welfare) for five AI applications, and the other did not. Benevolence messaging was also tested as a moderator of contexts expected to diminish affective trust (likelihood of worker replacement and likelihood of death from error). Perceived benevolence was not influenced by the manipulation. Surprisingly, likelihood of worker replacement had no significant association with affective trust, and likelihood of death from error had only one significant association; people may be more ambivalent about these contexts than previously thought. Understanding of affective trust in AI was expanded by identifying the importance of perceived benevolence. Until benevolence messaging can be shown to boost perceptions of benevolence, the success of that strategy remains unknown. |
| ISBN: | 9798379568993 |