Privacy-preserving Federated Brain Tumour Segmentation
Main Authors:
Format: Journal Article
Language: English
Published: 02-10-2019
Summary: Due to medical data privacy regulations, it is often infeasible to collect and share patient data in a centralised data lake. This poses challenges for training machine learning algorithms, such as deep convolutional networks, which often require large numbers of diverse training examples. Federated learning sidesteps this difficulty by bringing code to the patient data owners and only sharing intermediate model training updates among them. Although a high-accuracy model could be achieved by appropriately aggregating these model updates, the shared model could indirectly leak the local training examples. In this paper, we investigate the feasibility of applying differential-privacy techniques to protect the patient data in a federated learning setup. We implement and evaluate practical federated learning systems for brain tumour segmentation on the BraTS dataset. The experimental results show a trade-off between model performance and the cost of privacy protection.
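The setup the summary describes — clients share only model updates, and each update is privatized before aggregation — can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: the function names, the clipping threshold, and the noise scale are all hypothetical, and the "updates" are stand-in arrays rather than real network gradients.

```python
import numpy as np

def privatize_update(update, clip_norm=1.0, noise_std=0.1, rng=None):
    """Clip an update's L2 norm to clip_norm, then add Gaussian noise.

    This is the standard Gaussian-mechanism recipe used in
    differentially private training (parameters here are illustrative).
    """
    rng = rng or np.random.default_rng(0)
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip_norm / (norm + 1e-12))
    return clipped + rng.normal(0.0, noise_std, size=update.shape)

def federated_round(local_updates, clip_norm=1.0, noise_std=0.1):
    """Server-side step: average the privatized updates from all clients.

    Only the noisy, clipped updates cross institutional boundaries;
    the raw patient data never leaves each site.
    """
    rng = np.random.default_rng(42)
    noisy = [privatize_update(u, clip_norm, noise_std, rng)
             for u in local_updates]
    return np.mean(noisy, axis=0)

# Three simulated institutions, each contributing a gradient-like update.
updates = [np.ones(4), 2 * np.ones(4), 3 * np.ones(4)]
aggregated = federated_round(updates, clip_norm=1.0, noise_std=0.01)
print(aggregated.shape)  # (4,)
```

Raising `noise_std` strengthens the privacy guarantee but perturbs the aggregated model more, which is the performance-versus-privacy trade-off the summary refers to.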
DOI: 10.48550/arxiv.1910.00962