Federated Unlearning for On-Device Recommendation
Format: Journal Article
Language: English
Published: 19-10-2022
Summary: Growing data privacy concerns in recommendation systems have drawn increasing attention to federated recommendation (FedRecs). Existing FedRecs mainly focus on how to effectively and securely learn personal interests and preferences from users' on-device interaction data. However, none of them considers how to efficiently erase a user's contribution to the federated training process. We argue that such a dual setting is necessary. First, from the privacy protection perspective, "the right to be forgotten" requires that users have the right to withdraw their data contributions; without this reversibility, FedRecs risk violating data protection regulations. Second, enabling a FedRec to forget specific users improves its robustness and resistance to attacks by malicious clients. To support user unlearning in FedRecs, we propose an efficient unlearning method, FRU (Federated Recommendation Unlearning), inspired by the log-based rollback mechanism of transactions in database management systems. FRU removes a user's contribution by rolling back and calibrating the historical parameter updates, and then uses these calibrated updates to speed up the reconstruction of the federated recommender. However, storing all historical parameter updates on resource-constrained personal devices is challenging, and can even be infeasible. In light of this challenge, we propose a small-sized negative sampling method to reduce the number of item embedding updates, and an importance-based update selection mechanism to store only the important model updates. To evaluate the effectiveness of FRU, we propose an attack method that disturbs FedRecs via a group of compromised users, and we use FRU to recover the recommenders by eliminating those users' influence. Finally, we conduct experiments on two real-world recommendation datasets with two widely used FedRecs to show the efficiency and effectiveness of our proposed approaches.
DOI: 10.48550/arxiv.2210.10958
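The log-based rollback idea summarized in the abstract can be sketched roughly as follows. This is an illustrative reconstruction, not the paper's implementation: the names `update_log` and `federated_unlearn` are invented here, and the calibration step is simplified to re-averaging the remaining clients' updates, whereas FRU calibrates the stored historical updates against the new training trajectory.

```python
# Hypothetical sketch of rollback-based federated unlearning.
# All names here are illustrative assumptions, not the paper's API.
import numpy as np

def federated_unlearn(global_init, update_log, forget_uid, lr=1.0):
    """Rebuild the global model from logged per-round, per-user updates,
    skipping the forgotten user's contributions.

    update_log: list of dicts, one per round, mapping user id -> update vector.
    Calibration is simplified to re-averaging the kept updates each round.
    """
    model = np.array(global_init, dtype=float)
    for round_updates in update_log:
        kept = [delta for uid, delta in round_updates.items()
                if uid != forget_uid]
        if not kept:
            continue  # every update this round came from the forgotten user
        model += lr * np.mean(kept, axis=0)
    return model
```

Because the reconstruction replays stored updates rather than re-running on-device training, it can be much faster than retraining the recommender from scratch, which is the efficiency argument the abstract makes.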
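The importance-based update selection mechanism could, under one plausible reading, keep only the item-embedding rows whose updates are largest, so that the on-device log stays small. The function names and the L2-norm importance criterion below are assumptions for illustration; the paper's actual selection rule may differ.

```python
# Hypothetical sketch of importance-based update selection for the
# on-device log. Names and the L2-norm criterion are assumptions.
import numpy as np

def select_important_updates(delta, top_k):
    """Return (row_indices, rows) for the top_k rows of the item-embedding
    update `delta` with the largest L2 norms; other rows are not logged."""
    norms = np.linalg.norm(delta, axis=1)
    idx = np.argsort(norms)[-top_k:]
    return idx, delta[idx]

def restore_sparse(shape, idx, rows):
    """Reconstruct a dense update from the stored important rows,
    treating unstored rows as zero."""
    full = np.zeros(shape)
    full[idx] = rows
    return full
```

Storing only `top_k` rows plus their indices trades a small approximation error in the replayed updates for a much smaller per-round log on resource-constrained devices.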