Provable Mutual Benefits from Federated Learning in Privacy-Sensitive Domains
Format: Journal Article
Language: English
Published: 11-03-2024
Summary: Cross-silo federated learning (FL) allows data owners to train accurate machine learning models by benefiting from each other's private datasets. Unfortunately, the model accuracy benefits of collaboration are often undermined by privacy defenses. Therefore, to incentivize client participation in privacy-sensitive domains, an FL protocol should strike a delicate balance between privacy guarantees and end-model accuracy. In this paper, we study the question of when and how a server could design an FL protocol provably beneficial for all participants. First, we provide necessary and sufficient conditions for the existence of mutually beneficial protocols in the context of mean estimation and convex stochastic optimization. We also derive protocols that maximize the total clients' utility, given symmetric privacy preferences. Finally, we design protocols maximizing end-model accuracy and demonstrate their benefits in synthetic experiments.
DOI: 10.48550/arxiv.2403.06672
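To make the privacy-accuracy tension described in the summary concrete, below is a minimal sketch of federated mean estimation in which each client perturbs its local mean with Gaussian noise before sharing it with the server. The noising scheme, client setup, and noise scales are illustrative assumptions, not the protocols derived in the paper; larger noise stands in for a stronger privacy preference and degrades the aggregate estimate.

```python
# Illustrative sketch only (not the paper's protocol): each client releases a
# Gaussian-noised local mean; the server averages the noisy reports.
import numpy as np

rng = np.random.default_rng(0)

def noisy_local_mean(data, noise_std):
    # Client-side: compute the local mean and add Gaussian noise whose scale
    # reflects the client's (assumed) privacy preference.
    return float(data.mean() + rng.normal(0.0, noise_std))

def federated_mean(client_data, noise_stds):
    # Server-side: aggregate the clients' noisy reports with a simple average.
    reports = [noisy_local_mean(d, s) for d, s in zip(client_data, noise_stds)]
    return float(np.mean(reports))

# Hypothetical setup: 5 clients sampling from the same distribution (true mean 1.0).
clients = [rng.normal(1.0, 1.0, size=200) for _ in range(5)]

for sigma in (0.0, 0.1, 1.0):  # more noise = stronger privacy, lower accuracy
    estimate = federated_mean(clients, [sigma] * len(clients))
    print(f"noise_std={sigma}: federated estimate = {estimate:.3f}")
```

Running the sketch shows the aggregate estimate drifting from the true mean as the per-client noise grows, which is the trade-off a mutually beneficial protocol must balance against each client's accuracy gain from collaboration.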