FAT: Federated Adversarial Training
Format: Journal Article
Language: English
Published: 03-12-2020
Summary: Federated learning (FL) is one of the most important paradigms addressing privacy and data governance issues in machine learning (ML). Adversarial training has emerged, so far, as the most promising approach against evasion threats on ML models. In this paper, we take the first known steps towards federated adversarial training (FAT), combining both methods to reduce the threat of evasion during inference while preserving data privacy during training. We investigate the effectiveness of the FAT protocol in idealised federated settings using MNIST, Fashion-MNIST, and CIFAR10, and provide first insights on stabilising the training on the LEAF benchmark dataset, which specifically emulates a federated learning environment. We identify challenges with this natural extension of adversarial training with regard to the achieved adversarial robustness, and we further examine the idealised settings in the presence of clients undermining model convergence. We find that the Trimmed Mean and Bulyan defences can be compromised, and we subvert Krum with a novel distillation-based attack that presents an apparently "robust" model to the defender while in fact failing to provide robustness against simple attack modifications.
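The record does not spell out the FAT protocol itself. As a minimal sketch, assuming PGD for the inner maximisation and plain FedAvg for aggregation (names such as `pgd_attack`, `local_adversarial_update`, and `fed_avg` are illustrative, not taken from the paper), one round of federated adversarial training could look like:

```python
# Minimal sketch of one federated adversarial training (FAT) round.
# Assumptions, not from the paper's text: PGD inner maximisation, FedAvg
# aggregation, inputs normalised to [0, 1]; all names are illustrative.
import copy
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=0.3, alpha=0.01, steps=40):
    """Projected gradient descent within an L-infinity ball of radius eps."""
    x_adv = x + torch.empty_like(x).uniform_(-eps, eps)
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()
            x_adv = x + (x_adv - x).clamp(-eps, eps)  # project into the eps-ball
            x_adv = x_adv.clamp(0.0, 1.0)             # stay in valid pixel range
    return x_adv.detach()

def local_adversarial_update(global_model, loader, lr=0.01, epochs=1):
    """One client's update: adversarial training starting from the global weights."""
    model = copy.deepcopy(global_model)
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    for _ in range(epochs):
        for x, y in loader:
            x_adv = pgd_attack(model, x, y)
            opt.zero_grad()
            F.cross_entropy(model(x_adv), y).backward()
            opt.step()
    return model.state_dict()

def fed_avg(updates):
    """Plain FedAvg: average each parameter tensor across client updates
    (assumes floating-point tensors)."""
    avg = copy.deepcopy(updates[0])
    for key in avg:
        avg[key] = torch.stack([u[key].float() for u in updates]).mean(dim=0)
    return avg
```

In each round, every selected client runs `local_adversarial_update` from the current global weights, and the server installs the `fed_avg` of the returned state dicts before the next round.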
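The robust aggregation defences the abstract reports attacking (Trimmed Mean, Krum, and, building on both, Bulyan) replace this plain average. A sketch of the first two, assuming client updates are flattened into rows of a 2-D tensor and using purely illustrative names:

```python
# Sketch of two Byzantine-robust aggregation rules named in the abstract:
# coordinate-wise Trimmed Mean and Krum. Client updates are assumed to be
# flattened into rows of a 2-D tensor; names are illustrative, not the paper's code.
import torch

def trimmed_mean(updates, trim=1):
    """Drop the `trim` largest and smallest values per coordinate, then average."""
    sorted_vals, _ = torch.sort(updates, dim=0)
    return sorted_vals[trim:updates.shape[0] - trim].mean(dim=0)

def krum(updates, num_byzantine=1):
    """Select the update closest (in summed squared L2) to its n - f - 2
    nearest neighbours, assuming at most f Byzantine clients."""
    n = updates.shape[0]
    dists = torch.cdist(updates, updates) ** 2
    k = n - num_byzantine - 2  # neighbours counted in each score
    scores = []
    for i in range(n):
        d = torch.cat([dists[i, :i], dists[i, i + 1:]])  # exclude self-distance
        scores.append(torch.topk(d, k, largest=False).values.sum())
    return updates[int(torch.argmin(torch.stack(scores)))]
```

For example, with five client updates and one assumed Byzantine client, `krum(torch.stack(updates), num_byzantine=1)` returns a single selected update rather than an average. Bulyan, roughly, composes the two rules: it uses Krum-style selection to pick a candidate set and then applies a trimmed mean over it, which is why compromising the building blocks matters.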
DOI: 10.48550/arxiv.2012.01791