
Biasing Federated Learning with A New Adversarial Graph Attention Network
Ref: CISTER-TR-241101       Publication Date: 2024


Abstract:
Fairness in Federated Learning (FL) is imperative not only for the ethical use of the technology but also for ensuring that models deliver accurate, equitable, and beneficial outcomes across varied user demographics and devices. This paper proposes a new adversarial architecture, referred to as the Adversarial Graph Attention Network (AGAT), which deliberately mounts fairness attacks aimed at biasing the learning process across FL. AGAT is designed to synthesize malicious, biasing model updates that maximize the minimum Kullback-Leibler (KL) divergence between a user's model update and the global model. Because only a limited set of labeled input-output biasing data samples is available, a surrogate model is created to approximate the behavior of the complex malicious model update. Moreover, a graph autoencoder (GAE) is designed within the AGAT architecture and trained with sub-gradient descent to manipulatively reconstruct the correlations among model updates, maximizing the reconstruction loss while keeping the malicious, biasing model updates undetectable. The proposed AGAT attack is implemented in PyTorch; experiments show that AGAT increases the minimum KL divergence of benign model updates by 60.9% and bypasses the detection of existing defense models. The source code of the AGAT attack is released on GitHub.

Authors:
Kai Li, Jingjing Zheng, Wei Ni, Hailong Huang, Pietro Lio, Falko Dressler, Ozgur B. Akan


Published in IEEE Transactions on Mobile Computing (TMC), IEEE. Editor: Shuguang Cui.



Record Date: 11 Nov 2024