Truthful Incentive Mechanism Design via Internalizing Externalities and LP Relaxation for Vertical Federated Learning

Bibliographic Details
Published in: IEEE Transactions on Computational Social Systems, Vol. 10, No. 6, pp. 1-15
Main Authors: Lu, Jianfeng; Pan, Bangqi; Seid, Abegaz Mohammed; Li, Bing; Hu, Gangqiang; Wan, Shaohua
Format: Journal Article
Language: English
Published: Piscataway: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 01-12-2023
Description
Summary: Although vertical federated learning (VFL) has become a new paradigm of distributed machine learning for emerging multiparty joint modeling applications, how to effectively incentivize self-interested clients to contribute actively and reliably to collaborative learning in VFL remains a critical issue. Existing efforts are inadequate because in VFL the training sample size must be unified across clients before model training, which requires selfish clients to declare their private information, such as model training costs and benefits, unconditionally and honestly; such an assumption is unrealistic. In this article, we develop the first Truthful incEntive mechAnism for VFL, \(\mathbb{TEA}\), to handle both information self-disclosure and social utility maximization. Specifically, we design a transfer payment rule via internalizing externalities, which bundles each client's utility with the social utility, making truthful reporting a Nash equilibrium. Theoretically, we prove that \(\mathbb{TEA}\) achieves truthfulness and social utility maximization, as well as budget balance (BB) or individual rationality (IR). On this basis, we further design a sample size decision rule via linear programming (LP) relaxation to meet the requirements of different scenarios. Finally, extensive experiments on synthetic and real-world datasets validate the theoretical properties of \(\mathbb{TEA}\) and demonstrate its superiority over the state of the art.
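The "internalizing externalities" idea in the abstract can be illustrated with a minimal Groves-style sketch: each client reports a utility function of the common sample size, the mechanism picks the size maximizing reported social utility, and each client's transfer equals the sum of the other clients' reported utilities at that size, so each client's total payoff coincides with the social utility and truthful reporting is a best response. All names and the toy valuation functions below are assumptions for exposition, not the paper's actual construction.

```python
# Hypothetical Groves-style transfer payment that internalizes externalities.
# Each client i reports a utility function v_i(s) of the common sample size s.

def social_utility(reports, s):
    """Sum of reported client utilities at sample size s."""
    return sum(v(s) for v in reports)

def choose_sample_size(reports, sizes):
    """Pick the sample size maximizing reported social utility."""
    return max(sizes, key=lambda s: social_utility(reports, s))

def transfers(reports, s_star):
    """Client i receives sum_{j != i} v_j(s_star); its total payoff
    v_i(s_star) + transfer_i then equals the social utility, so truthful
    reporting maximizes the client's own payoff."""
    total = social_utility(reports, s_star)
    return [total - v(s_star) for v in reports]

# Toy example: three clients, concave benefit minus linear cost in s.
reports = [
    lambda s: 10 * s ** 0.5 - 1.0 * s,
    lambda s: 8 * s ** 0.5 - 0.5 * s,
    lambda s: 6 * s ** 0.5 - 0.8 * s,
]
s_star = choose_sample_size(reports, range(1, 101))
payments = transfers(reports, s_star)
```

Under this rule every client's payoff equals the social utility, which is why misreporting can only lower (never raise) a client's own payoff; the paper's additional machinery (BB or IR, and the LP-relaxation sample size rule) is not reproduced here.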
ISSN: 2329-924X
EISSN: 2373-7476
DOI: 10.1109/TCSS.2022.3227270