Uncovering Deceptive Tendencies in Language Models: A Simulated Company AI Assistant
We study the tendency of AI systems to deceive by constructing a realistic simulation setting of a company AI assistant. The simulated company employees provide tasks for the assistant to complete, these tasks spanning writing assistance, information retrieval and programming. We then introduce situ...
Saved in:
Main Authors: | , |
---|---|
Format: | Journal Article |
Language: | English |
Published: |
25-04-2024
|
Subjects: | |
Online Access: | Get full text |
Tags: |
Add Tag
No Tags, Be the first to tag this record!
|
Summary: | We study the tendency of AI systems to deceive by constructing a realistic
simulation setting of a company AI assistant. The simulated company employees
provide tasks for the assistant to complete, these tasks spanning writing
assistance, information retrieval and programming. We then introduce situations
where the model might be inclined to behave deceptively, while taking care to
not instruct or otherwise pressure the model to do so. Across different
scenarios, we find that Claude 3 Opus
1) complies with a task of mass-generating comments to influence public
perception of the company, later deceiving humans about it having done so,
2) lies to auditors when asked questions, and
3) strategically pretends to be less capable than it is during capability
evaluations.
Our work demonstrates that even models trained to be helpful, harmless and
honest sometimes behave deceptively in realistic scenarios, without notable
external pressure to do so. |
---|---|
DOI: | 10.48550/arxiv.2405.01576 |