Demystifying and Mitigating Bias for Node Representation Learning


Bibliographic Details
Published in: IEEE Transactions on Neural Networks and Learning Systems, Vol. 35, no. 9, pp. 12899-12912
Main Authors: Kose, O. Deniz, Shen, Yanning
Format: Journal Article
Language:English
Published: United States, IEEE, 01-09-2024
Description
Summary: Node representation learning has attracted increasing attention due to its efficacy for various applications on graphs. However, fairness remains largely under-explored in this field, even though the use of graph structure in learning has been shown to amplify bias. To this end, this work theoretically explains the sources of bias in node representations obtained via graph neural networks (GNNs). It reveals that both nodal features and graph structure lead to bias in the obtained representations. Building upon this analysis, fairness-aware data augmentation frameworks are developed to reduce the intrinsic bias. The theoretical analysis and proposed schemes can be readily employed to understand and mitigate bias in various GNN-based learning mechanisms. Extensive experiments on node classification and link prediction over multiple real networks show that the proposed augmentation strategies improve fairness while providing utility comparable to state-of-the-art methods.
ISSN: 2162-237X
eISSN: 2162-2388
DOI: 10.1109/TNNLS.2023.3265370