"I'm Not Sure, But...": Examining the Impact of Large Language Models' Uncertainty Expression on User Reliance and Trust
Main Authors:
Format: Journal Article
Language: English
Published: 15-05-2024
Summary: Widely deployed large language models (LLMs) can produce convincing yet
incorrect outputs, potentially misleading users who may rely on them as if they
were correct. To reduce such overreliance, there have been calls for LLMs to
communicate their uncertainty to end users. However, there has been little
empirical work examining how users perceive and act upon LLMs' expressions of
uncertainty. We explore this question through a large-scale, pre-registered,
human-subject experiment (N=404) in which participants answer medical questions
with or without access to responses from a fictional LLM-infused search engine.
Using both behavioral and self-reported measures, we examine how different
natural language expressions of uncertainty impact participants' reliance,
trust, and overall task performance. We find that first-person expressions
(e.g., "I'm not sure, but...") decrease participants' confidence in the system
and tendency to agree with the system's answers, while increasing participants'
accuracy. An exploratory analysis suggests that this increase can be attributed
to reduced (but not fully eliminated) overreliance on incorrect answers. While
we observe similar effects for uncertainty expressed from a general perspective
(e.g., "It's not clear, but..."), these effects are weaker and not
statistically significant. Our findings suggest that using natural language
expressions of uncertainty may be an effective approach for reducing
overreliance on LLMs, but that the precise language used matters. This
highlights the importance of user testing before deploying LLMs at scale.
DOI: 10.48550/arxiv.2405.00623