Perfectly Parallel Fairness Certification of Neural Networks
Format: Journal Article
Language: English
Published: 05-12-2019
Summary: Recently, there has been growing concern that machine-learning models, which currently assist or even automate decision making, reproduce, and in the worst case reinforce, bias present in the training data. The development of tools and techniques for certifying the fairness of these models or describing their biased behavior is therefore critical. In this paper, we propose a perfectly parallel static analysis for certifying causal fairness of feed-forward neural networks used for classification of tabular data. When certification succeeds, our approach provides definite guarantees; otherwise, it describes and quantifies the biased behavior. We design the analysis to be sound, in practice also exact, and configurable in terms of scalability and precision, thereby enabling pay-as-you-go certification. We implement our approach in an open-source tool and demonstrate its effectiveness on models trained with popular datasets.
DOI: 10.48550/arxiv.1912.02499
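The summary refers to causal fairness: the property that a network's classification does not depend on a sensitive input feature. The sketch below is only a minimal illustration of that property on a toy feed-forward network; it is not the paper's sound, perfectly parallel static analysis. The network weights, the sensitive-feature index, and the random sampling are all illustrative assumptions.

```python
# Minimal sketch of the causal-fairness property (illustrative assumptions only):
# flipping just the sensitive feature should not change the predicted class.
import numpy as np

rng = np.random.default_rng(0)

# Toy fully connected network for tabular inputs: 4 features -> 8 hidden units -> 2 classes.
W1, b1 = rng.normal(size=(4, 8)), rng.normal(size=8)
W2, b2 = rng.normal(size=(8, 2)), rng.normal(size=2)

def predict(x: np.ndarray) -> int:
    """Class predicted by the feed-forward network (ReLU hidden layer)."""
    h = np.maximum(x @ W1 + b1, 0.0)
    return int(np.argmax(h @ W2 + b2))

SENSITIVE = 0  # index of the (assumed binary) sensitive feature

def causally_fair_at(x: np.ndarray) -> bool:
    """True if flipping only the sensitive feature leaves the prediction unchanged."""
    x_flipped = x.copy()
    x_flipped[SENSITIVE] = 1.0 - x_flipped[SENSITIVE]
    return predict(x) == predict(x_flipped)

# Naive empirical check over random tabular inputs; any counterexample exhibits bias.
samples = rng.uniform(size=(10_000, 4))
samples[:, SENSITIVE] = rng.integers(0, 2, size=10_000)
biased = [x for x in samples if not causally_fair_at(x)]
print(f"biased inputs found: {len(biased)} / {len(samples)}")
```

Such sampling can only expose counterexamples; the analysis described in the summary instead gives definite guarantees when certification succeeds, and otherwise describes and quantifies the biased behavior.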