Predicting future AI failures from historic examples
Published in: Foresight (Cambridge), Vol. 21, No. 1, pp. 138-152
Format: Journal Article
Language: English
Published: Bradford: Emerald Publishing Limited, 11-03-2019
Summary:
Purpose
The purpose of this paper is to explain to readers how intelligent systems can fail and how artificial intelligence (AI) safety is different from cybersecurity. The goal of cybersecurity is to reduce the number of successful attacks on the system; the goal of AI Safety is to make sure zero attacks succeed in bypassing the safety mechanisms. Unfortunately, such a level of performance is unachievable. Every security system will eventually fail; there is no such thing as a 100 per cent secure system.
Design/methodology/approach
AI Safety can be improved based on ideas developed by cybersecurity experts. For narrow AI Safety, failures are at the same, moderate level of criticality as in cybersecurity; however, for general AI, failures have a fundamentally different impact. A single failure of a superintelligent system may cause a catastrophic event without a chance for recovery.
Findings
In this paper, the authors present and analyze reported failures of artificially intelligent systems and extrapolate their analysis to future AIs. The authors suggest that both the frequency and the seriousness of future AI failures will steadily increase.
Originality/value
This is a first attempt to assemble a public data set of AI failures, which is extremely valuable to AI Safety researchers.
ISSN: 1463-6689, 1465-9832
DOI: 10.1108/FS-04-2018-0034