Mitigating Risk in Neural Network Classifiers
Published in: | 2022 48th Euromicro Conference on Software Engineering and Advanced Applications (SEAA), pp. 370-373
---|---
Main Authors: | , ,
Format: | Conference Proceeding
Language: | English
Published: | IEEE, 01-08-2022
Summary: | Deep Neural Network (DNN) classifiers perform remarkably well on many problems that require skills which are natural and intuitive to humans. These classifiers have been used in safety-critical systems, including autonomous vehicles. For such systems to be trusted, it is necessary to demonstrate that the risk factors associated with neural network classification have been appropriately considered and that sufficient risk mitigation has been employed. Traditional DNNs fail to explicitly consider risk during their training and verification stages, meaning that unsafe failure modes are permitted and under-reported. To address this limitation, our short paper introduces a work-in-progress approach that (i) allows the risk of misclassification between classes to be quantified, (ii) guides the training of DNN classifiers towards mitigating the risks that require treatment, and (iii) synthesises risk-aware ensembles with the aid of multi-objective genetic algorithms that seek to optimise DNN performance metrics while also mitigating risks. We show the effectiveness of our approach by using it to synthesise risk-aware neural network ensembles for the CIFAR-10 dataset.
DOI: | 10.1109/SEAA56994.2022.00065
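
To make the abstract's first ingredient concrete, here is a minimal, hypothetical sketch of how misclassification risk between classes might be quantified: each off-diagonal confusion-matrix entry is weighted by a severity score for that class pair. The `severity` matrix, the `misclassification_risk` helper, and the toy numbers are illustrative assumptions, not values or code from the paper.

```python
# Hypothetical sketch of class-pair risk quantification: weight each
# off-diagonal confusion-matrix entry by a severity score for that pair.
# Not the authors' implementation; all names and numbers are illustrative.
import numpy as np

def misclassification_risk(conf_matrix: np.ndarray, severity: np.ndarray) -> float:
    """Expected risk: probability of each misclassification times its severity.

    conf_matrix[i, j] counts samples of true class i predicted as class j;
    severity[i, j] encodes how harmful that particular confusion is
    (e.g. for CIFAR-10, mistaking 'truck' for 'bird' may matter more than
    'cat' vs 'dog' in a given deployment).
    """
    probs = conf_matrix / conf_matrix.sum()           # joint (true, predicted) probabilities
    off_diag = ~np.eye(conf_matrix.shape[0], dtype=bool)
    return float((probs * severity)[off_diag].sum())  # only misclassifications contribute

# Toy 3-class example with an asymmetric severity matrix.
conf = np.array([[50, 3, 2],
                 [4, 45, 6],
                 [1, 2, 47]])
sev = np.array([[0.0, 1.0, 5.0],   # confusing class 0 with class 2 is costliest
                [1.0, 0.0, 1.0],
                [5.0, 1.0, 0.0]])
print(f"expected misclassification risk: {misclassification_risk(conf, sev):.4f}")
```

A risk measure of this shape can also steer training, for example by scaling per-sample losses with the severity of the confusion the model currently makes, which is one plausible reading of the abstract's second ingredient.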
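The ensemble-synthesis step can be sketched in the same hedged spirit: candidate ensembles (subsets of trained DNNs, encoded as membership bitmasks) are scored on competing objectives, accuracy versus risk, and only Pareto-optimal candidates are kept. A real multi-objective genetic algorithm such as NSGA-II would evolve these bitmasks with crossover and mutation; the dominance test below is the core comparison such a search relies on. All inputs here are synthetic assumptions, not results from the paper.

```python
# Hypothetical sketch of Pareto-based ensemble selection over two objectives:
# maximise accuracy, minimise risk. Synthetic per-model scores stand in for
# real evaluations of trained DNNs on held-out data.
import numpy as np

rng = np.random.default_rng(0)

# Assumed inputs: per-model accuracy and risk for 8 trained classifiers.
accuracy = rng.uniform(0.70, 0.95, size=8)
risk = rng.uniform(0.01, 0.10, size=8)

def score(mask: np.ndarray) -> tuple[float, float]:
    """Crude ensemble objectives: mean member accuracy and mean member risk.
    A faithful version would evaluate the voted ensemble itself instead of
    averaging member scores."""
    return accuracy[mask].mean(), risk[mask].mean()

def pareto_front(points: list[tuple[float, float]]) -> list[int]:
    """Indices of non-dominated points (higher accuracy AND lower risk wins)."""
    front = []
    for i, (acc_i, risk_i) in enumerate(points):
        dominated = any(
            acc_j >= acc_i and risk_j <= risk_i and (acc_j > acc_i or risk_j < risk_i)
            for j, (acc_j, risk_j) in enumerate(points) if j != i
        )
        if not dominated:
            front.append(i)
    return front

# Random population of candidate ensembles, encoded as membership bitmasks.
population = [rng.random(8) < 0.5 for _ in range(20)]
population = [m for m in population if m.any()]  # drop empty ensembles
scores = [score(m) for m in population]
for idx in pareto_front(scores):
    print(f"ensemble members {np.flatnonzero(population[idx])}: "
          f"accuracy={scores[idx][0]:.3f}, risk={scores[idx][1]:.3f}")
```

Keeping the whole Pareto front, rather than a single best ensemble, is what lets the approach expose the accuracy-versus-risk trade-off to a human decision maker.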