Uncertainty quantification for deep neural networks: An empirical comparison and usage guidelines
Published in: Software Testing, Verification & Reliability, Vol. 33, No. 6
Main Authors: ,
Format: Journal Article
Language: English
Published: Chichester: Wiley Subscription Services, Inc., 01-09-2023
Summary:
Deep neural networks (DNNs) are increasingly used as components of larger software systems that need to process complex data, such as images, written text and audio/video signals. DNN predictions cannot be assumed to be always correct, for several reasons: among them, the huge input space being dealt with, the ambiguity of some input data, and the intrinsic properties of learning algorithms, which can provide only statistical guarantees. Hence, developers have to cope with some residual error probability. An architectural pattern commonly adopted to manage failure-prone components is the supervisor: an additional component that estimates the reliability of the predictions made by untrusted (e.g., DNN) components and activates an automated healing procedure when these are likely to fail, ensuring that the deep learning-based system (DLS) does not cause damage, despite its main functionality being suspended.
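As a rough illustration of this pattern (a minimal sketch with hypothetical names, not the specific uncertainty quantification techniques evaluated in the paper), a supervisor can wrap the DNN, reject inputs whose predictive entropy exceeds a threshold tuned on validation data, and route those inputs to a healing fallback:

```python
import numpy as np

def predictive_entropy(probs: np.ndarray) -> float:
    """Entropy of a softmax output vector; higher means more uncertain."""
    eps = 1e-12  # guard against log(0)
    return float(-np.sum(probs * np.log(probs + eps)))

class Supervisor:
    """Wraps an untrusted predictor: inputs whose uncertainty exceeds a
    threshold are routed to a healing fallback instead of being predicted."""

    def __init__(self, predict_probs, threshold, fallback):
        self.predict_probs = predict_probs  # maps an input to class probabilities
        self.threshold = threshold          # tuned on a validation set
        self.fallback = fallback            # safe healing procedure

    def __call__(self, x):
        probs = self.predict_probs(x)
        if predictive_entropy(probs) > self.threshold:
            return self.fallback(x)   # prediction rejected: activate healing
        return int(np.argmax(probs))  # prediction trusted: DLS proceeds

# Toy usage with a dummy predictor standing in for the DNN
supervisor = Supervisor(
    predict_probs=lambda x: np.array([0.4, 0.35, 0.25]),
    threshold=0.5,
    fallback=lambda x: "defer-to-human",
)
print(supervisor(None))  # high entropy here, so the fallback fires
```

Other uncertainty estimates (e.g., MC-dropout variance or ensemble disagreement) can replace the entropy score without changing the wrapper's structure.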
In this paper, we consider DLSs that implement a supervisor by means of uncertainty estimation. After overviewing the main approaches to uncertainty estimation and discussing their pros and cons, we motivate the need for an empirical assessment method specific to the setting in which supervisors are used, where the accuracy of the DNN matters only as long as the supervisor lets the DLS continue to operate. We then present a large empirical study comparing the alternative approaches to uncertainty estimation, and we distil a set of guidelines that help developers incorporate a supervisor based on uncertainty monitoring into a DLS.
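In that setting, the quantities of interest can be sketched as follows (hypothetical helper names, not the paper's exact metrics): accuracy is computed only over the inputs the supervisor accepts, alongside availability, i.e., the fraction of inputs on which the DLS keeps operating.

```python
import numpy as np

def supervised_metrics(correct: np.ndarray, accepted: np.ndarray):
    """Evaluation under a supervisor: accuracy counts only the inputs the
    supervisor accepts; availability is the fraction of accepted inputs."""
    availability = float(accepted.mean())
    accuracy = float(correct[accepted].mean()) if accepted.any() else float("nan")
    return accuracy, availability

# Toy example: 5 inputs, the supervisor rejects the two it deems uncertain
correct = np.array([True, False, True, True, False])
accepted = np.array([True, True, True, False, False])
print(supervised_metrics(correct, accepted))  # (0.666..., 0.6)
```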
We perform an empirical study to compare uncertainty-based deep neural network supervisors. Our findings show that there is no dominant technique, that is, no technique always outperforms all the others. Consequently, we propose specific usage guidelines to help practitioners choose and configure an uncertainty quantification technique with the goal of building a fail-safe system.
ISSN: 0960-0833; 1099-1689
DOI: 10.1002/stvr.1840