Separation of Powers in Federated Learning
Main Authors:
Format: Journal Article
Language: English
Published: 19-05-2021
Summary: Federated Learning (FL) enables collaborative training among mutually distrusting parties. Model updates, rather than training data, are concentrated and fused in a central aggregation server. A key security challenge in FL is that an untrustworthy or compromised aggregation process can lead to unforeseeable information leakage. This challenge is especially acute given recently demonstrated attacks that reconstruct large fractions of training data from ostensibly "sanitized" model updates.

In this paper, we introduce TRUDA, a new cross-silo FL system that employs a trustworthy and decentralized aggregation architecture to break down the concentration of information at a single aggregator. Exploiting the unique computational properties of model-fusion algorithms, TRUDA disassembles all exchanged model updates at parameter granularity and re-stitches them into random partitions designated for multiple TEE-protected aggregators. Each aggregator therefore sees only a fragmentary, shuffled view of the model updates and is oblivious to the model architecture. These security mechanisms fundamentally mitigate training-data reconstruction attacks while preserving the final accuracy of trained models and keeping performance overheads low.
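The record does not include a reference implementation, but the decomposition the summary describes is easy to illustrate. The sketch below assumes plain element-wise averaging (FedAvg) as the fusion algorithm; because averaging commutes with any permutation and partition of the parameters, each aggregator can fuse its fragment independently and the parties can re-stitch an exact FedAvg result. All names (`shuffle_and_partition`, `NUM_AGGREGATORS`, the shared `SEED`, etc.) are hypothetical and not TRUDA's actual API, and the TEE protection of each aggregator is elided.

```python
import numpy as np

NUM_AGGREGATORS = 3  # hypothetical number of TEE-protected aggregators
SEED = 42            # stands in for a permutation secret shared by the parties

def flatten(update):
    """Disassemble a per-layer model update into one flat parameter vector."""
    return np.concatenate([w.ravel() for w in update])

def shuffle_and_partition(flat, seed, k):
    """Permute parameters with a secret permutation, then split into k fragments.

    Each fragment is sent to a different aggregator, which thus sees only a
    shuffled subset of parameters and cannot infer the model architecture.
    """
    rng = np.random.default_rng(seed)
    perm = rng.permutation(flat.size)
    return np.array_split(flat[perm], k), perm

def aggregate_fragment(fragments):
    """What one aggregator computes: element-wise FedAvg over its fragment.

    Element-wise averaging commutes with permutation and partitioning -- the
    'unique computational property' that makes the decomposition exact.
    """
    return np.mean(fragments, axis=0)

def restitch(parts, perm):
    """Re-assemble the fused fragments and invert the secret permutation."""
    fused = np.concatenate(parts)
    out = np.empty_like(fused)
    out[perm] = fused
    return out

# Toy demo: two parties, a two-layer model.
updates = [[np.random.randn(4, 4), np.random.randn(4)] for _ in range(2)]
flats = [flatten(u) for u in updates]

# Each party shuffles/partitions its update identically (same secret seed).
partitioned = [shuffle_and_partition(f, SEED, NUM_AGGREGATORS) for f in flats]
perm = partitioned[0][1]

# Aggregator i averages fragment i from every party.
fused_parts = [
    aggregate_fragment([parts[i] for parts, _ in partitioned])
    for i in range(NUM_AGGREGATORS)
]

fused = restitch(fused_parts, perm)
assert np.allclose(fused, np.mean(flats, axis=0))  # matches plain FedAvg
```

In this toy version, no single aggregator ever holds the full parameter vector or the permutation, which is the separation-of-powers intuition; a faithful implementation would additionally run each aggregator inside a TEE and derive the permutation from attested, party-held secrets.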
DOI: 10.48550/arxiv.2105.09400