MulDoor: A Multi-target Backdoor Attack Against Federated Learning System
Document Type
Conference Proceeding
Publication Date
2024
Abstract
In recent years, with the development of wireless communication networks, federated learning (FL) has been widely deployed in distributed scenarios as a privacy-preserving machine learning paradigm. Due to its inherent features, FL is vulnerable to backdoor attacks, in which an adversary manipulates the global model's output by compromising the models of one or more participants. Existing backdoor attacks are constrained to outputting a single specified target label during the inference phase, limiting the adversary's flexibility to alter the model's output when different target labels are required. In this paper, we study the multi-target attack scenario within the federated learning context, where the adversary aims to manipulate the global model to output various specified labels by inserting different types of triggers. To effectively insert multiple backdoors simultaneously without reducing the attack's effectiveness, we propose MulDoor, a novel multi-target backdoor attack scheme. MulDoor incorporates the concept of supervised contrastive learning to learn the discrepancies among different types of triggers and mitigate interference between them. The experimental results demonstrate that MulDoor achieves better attack effectiveness compared to existing backdoor attacks in a multi-target backdoor attack setting.
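The abstract's key mechanism is the use of supervised contrastive learning to keep the embeddings of samples carrying different trigger types well separated, so that multiple backdoors do not interfere with one another. A minimal NumPy sketch of the standard supervised contrastive (SupCon) loss follows; the trigger-type labels, embedding dimensions, and loss formulation here are illustrative assumptions for exposition, not MulDoor's actual implementation:

```python
import numpy as np

def supcon_loss(features, labels, temperature=0.07):
    """Supervised contrastive loss: embeddings sharing a label (here, a
    trigger type) are pulled together; all other embeddings are pushed apart.
    This is the generic SupCon formulation, not MulDoor's exact loss."""
    # Normalize embeddings to unit length so dot products are cosine similarities.
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    sim = f @ f.T / temperature                      # pairwise similarity logits
    n = len(labels)
    logits_mask = ~np.eye(n, dtype=bool)             # exclude self-similarity
    # Positives: pairs with the same trigger-type label (excluding self).
    pos_mask = (labels[:, None] == labels[None, :]) & logits_mask
    # Log-softmax of each anchor's similarities over all other samples.
    exp_sim = np.exp(sim) * logits_mask
    log_prob = sim - np.log(exp_sim.sum(axis=1, keepdims=True))
    # Average log-probability of positives per anchor; negate and mean.
    pos_counts = np.maximum(pos_mask.sum(axis=1), 1)
    loss = -(log_prob * pos_mask).sum(axis=1) / pos_counts
    return loss.mean()

# Toy example: two hypothetical trigger types with well-separated embeddings.
feats = np.array([[1.0, 0.0], [1.0, 0.0], [0.0, 1.0], [0.0, 1.0]])
trigger_types = np.array([0, 0, 1, 1])
loss_separated = supcon_loss(feats, trigger_types)
# Mislabeled grouping (positives point across clusters) yields a higher loss.
loss_mixed = supcon_loss(feats, np.array([0, 1, 0, 1]))
```

Minimizing such a loss during the adversary's local training would encourage each trigger type to occupy its own region of the embedding space, which is one plausible way to realize the "mitigate interference between triggers" goal stated in the abstract.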
Recommended Citation
Li, Xuan; Wu, Longfei; Guan, Zhitao; Du, Xiaojiang James; Aitsaadi, Nadjib; and Guizani, Mohsen Mokhtar, "MulDoor: A Multi-target Backdoor Attack Against Federated Learning System" (2024). College of Health, Science, and Technology. 1152.
https://digitalcommons.uncfsu.edu/college_health_science_technology/1152