Backdoor attacks have become a significant threat to the pre-training and deployment of deep neural networks (DNNs). While many methods have been proposed for detecting and mitigating backdoor attacks, most rely on identifying and removing the "shortcut" created by the backdoor, which links a specific source class to a target class. However, these methods can easily be bypassed by designing multiple backdoor triggers that create shortcuts everywhere, thus making detection more difficult.
In this study, we introduce Multi-Trigger Backdoor Attacks (MTBAs), where multiple adversaries use different types of triggers to poison the same dataset. We propose and investigate three types of multi-trigger attacks: *parallel*, *sequential*, and *hybrid*. Our findings show that: 1) multiple triggers can coexist, overwrite, or cross-activate each other, and 2) MTBAs break the common shortcut assumption underlying most existing backdoor detection and removal methods, rendering them ineffective.
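To make the parallel setting concrete, here is a minimal sketch of how several triggers might poison disjoint subsets of one dataset, each trigger paired with its own target label. This is an illustrative toy example, not the repository's implementation: the helper names (`make_patch_trigger`, `parallel_poison`) and the solid-patch trigger are assumptions for demonstration only.

```python
# Hypothetical sketch of parallel multi-trigger poisoning (not the authors' code).
# Each trigger poisons a disjoint share of the dataset, relabeling its samples
# to that trigger's own target class.
import numpy as np

def make_patch_trigger(value, size=3):
    """Return a function that stamps a solid square patch in the corner."""
    def apply(img):
        img = img.copy()
        img[-size:, -size:] = value  # bottom-right corner patch
        return img
    return apply

def parallel_poison(images, labels, triggers, targets, rate=0.1, seed=0):
    """Poison `rate` of the dataset, split evenly across the triggers."""
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    n_poison = int(len(images) * rate)
    idx = rng.choice(len(images), n_poison, replace=False)
    # Each trigger gets an (almost) equal share of the poisoned indices.
    for share, trig, tgt in zip(np.array_split(idx, len(triggers)),
                                triggers, targets):
        for i in share:
            images[i] = trig(images[i])
            labels[i] = tgt
    return images, labels

# Toy data: 100 grayscale 8x8 "images", all labeled class 0.
X = np.zeros((100, 8, 8), dtype=np.float32)
y = np.zeros(100, dtype=np.int64)
triggers = [make_patch_trigger(v) for v in (0.5, 1.0)]
Xp, yp = parallel_poison(X, y, triggers, targets=[1, 2], rate=0.1)
print(sorted(set(yp.tolist())))  # clean label 0 plus the two target labels
```

Because every trigger-target pair installs its own shortcut, a defense that searches for a single source-to-target shortcut has no unique pattern to recover, which is the intuition behind the breakdown described above.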
The figure above demonstrates the effectiveness of multi-trigger attacks at various poisoning rates.
To launch a parallel attack on ResNet-18 with 10 different triggers, run the following command:
```shell
python backdoor_mtba.py
```
Our work is currently under review; this open-source project is provided by the authors so that the main results can be reproduced.