Official and unofficial open-source code and datasets for backdoor attacks and defenses
TABOR https://github.com/UsmannK/TABOR
Neural Cleanse https://github.com/trx14/TrojanNet/tree/master/code/Detection/neural_cleanese
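Neural Cleanse reverse-engineers, for each output class, the smallest input perturbation (a mask plus a pattern) that flips arbitrary clean inputs to that class, then flags classes whose recovered trigger is an outlier. A minimal sketch of the per-class optimization, assuming a PyTorch classifier `model`, a loader of clean images, and 32x32 RGB inputs (the trigger shape, `lambda_l1`, and optimizer settings are illustrative, not the paper's exact values):

```python
import torch
import torch.nn.functional as F

def reverse_engineer_trigger(model, loader, target, steps=1000,
                             lambda_l1=0.01, lr=0.1, device="cpu"):
    """Recover a (mask, pattern) pair that flips clean inputs to `target`."""
    model.eval()
    for p in model.parameters():          # only the trigger is optimized
        p.requires_grad_(False)
    # Unconstrained parameters; sigmoid keeps mask and pattern in [0, 1].
    mask_p = torch.zeros(1, 1, 32, 32, device=device, requires_grad=True)
    patt_p = torch.zeros(1, 3, 32, 32, device=device, requires_grad=True)
    opt = torch.optim.Adam([mask_p, patt_p], lr=lr)
    data = iter(loader)
    for _ in range(steps):
        try:
            x, _ = next(data)
        except StopIteration:
            data = iter(loader)
            x, _ = next(data)
        x = x.to(device)
        mask, pattern = torch.sigmoid(mask_p), torch.sigmoid(patt_p)
        x_adv = (1 - mask) * x + mask * pattern       # stamp candidate trigger
        y = torch.full((x.size(0),), target, device=device, dtype=torch.long)
        loss = F.cross_entropy(model(x_adv), y) + lambda_l1 * mask.abs().sum()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return torch.sigmoid(mask_p).detach(), torch.sigmoid(patt_p).detach()
```

Running this once per class and flagging classes whose recovered mask has an anomalously small L1 norm (the paper uses a median-absolute-deviation test) is what separates infected labels from clean ones.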
ABS https://github.com/naiyeleo/ABS
NEO https://github.com/sakshiudeshi/Neo
On the Effectiveness of Mitigating Data Poisoning Attacks with Gradient Shaping https://github.com/Sanghyun-Hong/Gradient-Shaping
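This work studies gradient shaping, i.e. DP-SGD-style per-example gradient clipping and noising, as a mitigation for data poisoning. A minimal sketch of one such training step in PyTorch (learning rate, clip norm, and noise multiplier are illustrative; `xs`/`ys` are one batch of inputs and integer labels):

```python
import torch
import torch.nn.functional as F

def gradient_shaping_step(model, xs, ys, lr=0.1, clip=1.0, sigma=0.5):
    """Clip each example's gradient, sum, add noise, then apply the update."""
    summed = [torch.zeros_like(p) for p in model.parameters()]
    for x, y in zip(xs, ys):                          # per-example gradients
        model.zero_grad()
        loss = F.cross_entropy(model(x.unsqueeze(0)), y.unsqueeze(0))
        loss.backward()
        norm = torch.sqrt(sum(p.grad.pow(2).sum() for p in model.parameters()))
        scale = min(1.0, clip / (norm.item() + 1e-12))
        for s, p in zip(summed, model.parameters()):
            s.add_(p.grad, alpha=scale)               # accumulate clipped grad
    with torch.no_grad():
        for s, p in zip(summed, model.parameters()):
            noisy = s + sigma * clip * torch.randn_like(s)
            p.add_(noisy, alpha=-lr / len(xs))        # averaged noisy update
```

Clipping bounds the influence any single (possibly poisoned) example has on the update, which is the property the paper evaluates against poisoning attacks.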
Demon in the Variant: Statistical Analysis of DNNs for Robust Backdoor Contamination Detection https://github.com/TDteach/Demon-in-the-Variant
DL-TND https://github.com/wangren09/TrojanNetDetector
NNoculation https://github.com/akshajkumarv/NNoculation
DeepSweep https://github.com/YiZeng623/DeepSweep
Neural Cleanse (re-implementation in the Input-Aware Backdoor Attack repo) https://github.com/VinAIResearch/input-aware-backdoor-attack-release
Fine-pruning (re-implementation in the Input-Aware Backdoor Attack repo) https://github.com/VinAIResearch/input-aware-backdoor-attack-release
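Fine-pruning zeroes out the neurons that stay dormant on clean inputs, where backdoor behavior tends to hide, and then fine-tunes the network on clean data. A minimal sketch, assuming a PyTorch model whose last convolutional layer is exposed as `model.conv` (the layer choice and prune ratio are illustrative):

```python
import torch

@torch.no_grad()
def prune_dormant_channels(model, clean_loader, ratio=0.2, device="cpu"):
    """Zero out conv channels with the lowest mean activation on clean data."""
    acts = []
    hook = model.conv.register_forward_hook(
        lambda m, i, o: acts.append(o.relu().mean(dim=(0, 2, 3))))
    for x, _ in clean_loader:
        model(x.to(device))
    hook.remove()
    mean_act = torch.stack(acts).mean(0)              # per-channel activity
    k = int(ratio * mean_act.numel())
    dormant = mean_act.argsort()[:k]                  # least-active channels
    model.conv.weight[dormant] = 0                    # prune by zeroing
    if model.conv.bias is not None:
        model.conv.bias[dormant] = 0
    return dormant   # fine-tune on clean data afterwards to restore accuracy
```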
Februus https://github.com/AdelaideAuto-IDLab/Februus
Adversarial Unlearning of Backdoors via Implicit Hypergradient https://github.com/YiZeng623/I-BAU_Adversarial_Unlearning_of-Backdoors_via_implicit_Hypergradient
Bridging Mode Connectivity in Loss Landscapes and Adversarial Robustness https://github.com/IBM/model-sanitization
Backdoor Scanning for Deep Neural Networks through K-Arm Optimization https://github.com/PurduePAML/K-ARM_Backdoor_Optimization
Defending Neural Backdoors via Generative Distribution Modeling https://github.com/superrrpotato/Defending-Neural-Backdoors-via-Generative-Distribution-Modeling
ULP https://github.com/UMBCvision/Universal-Litmus-Patterns
STRIP (re-implementation in the Input-Aware Backdoor Attack repo) https://github.com/VinAIResearch/input-aware-backdoor-attack-release
STRIP https://github.com/garrisongys/STRIP
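STRIP detects trojaned inputs at test time: it blends the suspect input with random clean images and measures the entropy of the resulting predictions. Because a trigger dominates the blend, predictions stay confidently on the target class and entropy stays abnormally low. A minimal sketch, assuming a PyTorch `model`, a single image tensor `x`, and a batch of held-out clean images (`n_overlay`, `alpha`, and the threshold calibration are illustrative):

```python
import torch

@torch.no_grad()
def strip_entropy(model, x, clean_images, n_overlay=32, alpha=0.5):
    """Mean prediction entropy of `x` superimposed with random clean images."""
    idx = torch.randint(len(clean_images), (n_overlay,))
    blended = alpha * x.unsqueeze(0) + (1 - alpha) * clean_images[idx]
    probs = model(blended).softmax(dim=1)
    ent = -(probs * probs.clamp_min(1e-12).log()).sum(dim=1)
    return ent.mean().item()   # low entropy => likely trojaned input

# Calibrate on clean validation inputs, then reject low-entropy test inputs:
# scores = [strip_entropy(model, x, clean_images) for x in clean_val]
# threshold = a low percentile of `scores`, e.g. the 1st
```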
Spectral Signatures (re-implementation in the AdvDoor repo) https://github.com/AdvDoor/AdvDoor
Spectral Signatures (official, MadryLab) https://github.com/MadryLab/backdoor_data_poisoning
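The spectral-signatures defense listed above looks for poisoned examples in the covariance spectrum of learned representations: for each class, center the feature vectors, take the top singular direction, and score each example by its squared projection; poisoned points concentrate in the tail. A minimal NumPy sketch, assuming `feats` holds penultimate-layer features for one class and `eps` is the assumed poisoning fraction:

```python
import numpy as np

def spectral_signature_scores(feats):
    """Outlier score per example: squared projection on top singular vector."""
    centered = feats - feats.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return (centered @ vt[0]) ** 2

def remove_suspects(feats, eps=0.05):
    """Drop the 1.5*eps fraction with the highest scores, as in the paper."""
    scores = spectral_signature_scores(feats)
    k = int(1.5 * eps * len(feats))
    keep = np.argsort(scores)[: len(feats) - k]
    return keep   # indices of examples to keep for retraining
```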
Activation Clustering (re-implementation in the AdvDoor repo) https://github.com/AdvDoor/AdvDoor
Activation Clustering (ART implementation) https://github.com/Trusted-AI/adversarial-robustness-toolbox/blob/main/art/defences/detector/poison/activation_defence.py
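Activation clustering splits each class's last-hidden-layer activations into two clusters after dimensionality reduction and treats the anomalous cluster as poisoned. The ART implementation linked above wraps this behind `ActivationDefence`; a usage sketch, assuming a trained Keras model and argument names as in the ART docs (verify against your installed version):

```python
from art.estimators.classification import KerasClassifier
from art.defences.detector.poison import ActivationDefence

# `model`, `x_train`, `y_train` are assumed: a trained Keras model and the
# suspect training data. Other ART classifier wrappers work the same way.
classifier = KerasClassifier(model=model)
defence = ActivationDefence(classifier, x_train, y_train)

# Cluster activations per class: PCA to 10 dims, k-means with 2 clusters.
report, is_clean = defence.detect_poison(nb_clusters=2, nb_dims=10, reduce="PCA")
# is_clean[i] == 1 means example i was judged clean; retrain on that subset.
```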
SPECTRE https://github.com/SewoongLab/spectre-defense
ART (Adversarial Robustness Toolbox, includes several poisoning detectors) https://github.com/Trusted-AI/adversarial-robustness-toolbox