Detecting Poisoning Attacks on Federated Learning Using Gradient-Weighted Class Activation Mapping
Ref: CISTER-TR-240302       Publication Date: 13-17 May 2024

Abstract:
This paper proposes a new defense mechanism, GCAMA, against model poisoning attacks on federated learning (FL). GCAMA integrates Gradient-weighted Class Activation Mapping (GradCAM) with an autoencoder to offer a more powerful detection capability than existing Euclidean distance-based approaches. Specifically, GCAMA generates a heat map for each uploaded local model update, transforming the update into a lower-dimensional visual representation that accentuates hidden features and increases the success rate of identifying anomalous heat maps and, hence, malicious local models. We evaluate the ResNet-18 and MobileNetV3-Large deep learning models on the CIFAR-10 and GTSRB datasets, respectively, under a non-independent and identically distributed (non-IID) setting. The results demonstrate that GCAMA achieves higher test accuracy for the FL global model than state-of-the-art methods. Our code is available at: https://github.com/jjzgeeks/GradCAM-AE
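
The abstract sketches the detection pipeline at a high level. The PyTorch snippet below is a minimal illustrative sketch of that idea, not the authors' released implementation (see the GitHub repository above): the probe batch, the choice of target layer, the autoencoder size, and the mean-plus-two-standard-deviations flagging threshold are all assumptions made for this example.

```python
# Illustrative sketch of the GradCAM-plus-autoencoder detection idea.
# Not the authors' released code (see the GitHub repository above); the
# probe batch, target layer, AE size, and threshold are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.models import resnet18

def grad_cam_heatmap(model, layer, x, target_class):
    """Flattened Grad-CAM heat maps for a probe batch x w.r.t. target_class."""
    acts, grads = {}, {}
    h1 = layer.register_forward_hook(lambda m, i, o: acts.update(a=o))
    h2 = layer.register_full_backward_hook(lambda m, gi, go: grads.update(g=go[0]))
    logits = model(x)
    model.zero_grad()
    logits[:, target_class].sum().backward()
    h1.remove(); h2.remove()
    w = grads["g"].mean(dim=(2, 3), keepdim=True)      # per-channel weights
    cam = F.relu((w * acts["a"]).sum(dim=1))           # weighted activation map
    cam = cam / (cam.amax(dim=(1, 2), keepdim=True) + 1e-8)
    return cam.flatten(1).detach()                     # one vector per image

class AE(nn.Module):
    """Small autoencoder over flattened heat maps; poorly reconstructed
    heat maps mark the corresponding local updates as suspicious."""
    def __init__(self, d, h=16):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(d, h), nn.ReLU())
        self.dec = nn.Linear(h, d)
    def forward(self, z):
        return self.dec(self.enc(z))

if __name__ == "__main__":
    probe = torch.randn(8, 3, 64, 64)                  # stand-in public probe batch
    local_models = [resnet18(num_classes=10).eval() for _ in range(5)]
    # One averaged heat map per uploaded local model.
    maps = torch.stack([grad_cam_heatmap(m, m.layer4, probe, target_class=0).mean(0)
                        for m in local_models])
    ae = AE(maps.shape[1])
    opt = torch.optim.Adam(ae.parameters(), lr=1e-3)
    for _ in range(300):                               # fit the AE to the heat maps
        opt.zero_grad()
        loss = F.mse_loss(ae(maps), maps)
        loss.backward()
        opt.step()
    err = F.mse_loss(ae(maps), maps, reduction="none").mean(dim=1)
    flagged = err > err.mean() + 2 * err.std()         # illustrative anomaly cutoff
    print("reconstruction errors:", err.tolist())
    print("flagged as suspicious:", flagged.tolist())
```

In a real FL round, the server would run this check on each client's uploaded model before aggregation and exclude the flagged updates; the autoencoder compresses the heat maps so that anomalies stand out as reconstruction outliers rather than relying on raw Euclidean distances between model weights.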

Authors: Jingjing Zheng, Kai Li, Xin Yuan, Wei Ni, Eduardo Tovar


The Web Conference 2024 (WWW 2024).
Singapore, Singapore.



Record Date: 5 March 2024