
Exploring Visual Explanations for Defending Federated Learning against Poisoning Attacks
Ref: CISTER-TR-240902       Publication Date: 2024


Abstract:
This paper proposes a new visual explanation-based defense mechanism against model poisoning attacks on federated learning (FL), namely FedCAMAE, which integrates Layer Class Activation Mapping (LayerCAM) and an autoencoder to offer stronger detection capability than existing Euclidean distance-based or machine learning-based approaches. Specifically, FedCAMAE uses LayerCAM to generate a fine-grained heat map for each uploaded local model update, transforming the update into a lower-dimensional visual representation. To accentuate the hidden features of the heat maps, an autoencoder is embedded into FedCAMAE to refine the heat maps and enhance their distinguishability, thereby increasing the success rate of identifying anomalous heat maps and malicious local models. We test the ResNet-50 and RegNetY-800MF deep learning models on the SVHN and CIFAR-100 datasets, respectively, under a Non-Independent and Identically Distributed (Non-IID) setting. The results demonstrate that FedCAMAE achieves higher test accuracy of the FL global model than state-of-the-art methods. Our code is available at: https://github.com/jjzgeeks/LayerCAM-AE
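For readers who want a concrete picture of the pipeline the abstract describes, below is a minimal Python sketch of how a LayerCAM-plus-autoencoder defense could be wired together on the server side. It assumes PyTorch and the third-party pytorch-grad-cam package for LayerCAM; the probe batch, the autoencoder architecture, and the mean-plus-k-standard-deviations decision rule are illustrative assumptions, not the paper's released implementation (see the GitHub repository above for that).

import torch
import torch.nn as nn
from pytorch_grad_cam import LayerCAM  # third-party package, assumed installed


def heatmap_for_update(model, probe_batch, target_layer):
    """Render one uploaded local model as a low-dimensional heat map by
    running LayerCAM on a fixed server-side probe batch (an assumption;
    the paper may audit updates with different inputs)."""
    cam = LayerCAM(model=model, target_layers=[target_layer])
    maps = cam(input_tensor=probe_batch)  # numpy array, shape (batch, H, W)
    # Flatten each map and average over the probe batch -> one vector per model.
    return torch.from_numpy(maps).flatten(start_dim=1).mean(dim=0)


class HeatmapAE(nn.Module):
    """Small fully connected autoencoder that refines heat maps to make
    anomalies easier to separate; layer sizes are placeholders."""
    def __init__(self, dim, latent=32):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(dim, 128), nn.ReLU(),
                                     nn.Linear(128, latent))
        self.decoder = nn.Sequential(nn.Linear(latent, 128), nn.ReLU(),
                                     nn.Linear(128, dim))

    def forward(self, x):
        return self.decoder(self.encoder(x))


def flag_malicious(heatmaps, ae, k=2.0):
    """Score each client's heat map by autoencoder reconstruction error and
    flag outliers beyond mean + k*std (a simple stand-in for whatever
    decision rule the paper actually uses)."""
    with torch.no_grad():
        errors = ((ae(heatmaps) - heatmaps) ** 2).mean(dim=1)
    threshold = errors.mean() + k * errors.std()
    return errors > threshold


# Server side: one heat map per client update, stacked for scoring.
# `m.layer4` as the target layer assumes a torchvision-style ResNet.
# maps = torch.stack([heatmap_for_update(m, probe, m.layer4) for m in client_models])
# suspicious = flag_malicious(maps, trained_ae)

In a full system the autoencoder would first be trained to reconstruct heat maps from benign rounds; that training loop, and the choice of target layer and probe data, are left out of this sketch.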

Authors:
Jingjing Zheng, Kai Li, Xin Yuan, Wei Ni, Eduardo Tovar


Poster presented at the 30th Annual International Conference on Mobile Computing and Networking (MobiCom 2024), Washington, D.C., U.S.A.



Record Date: 9 Sep 2024