This repository contains the code for the paper "Diffence: Fencing Membership Privacy With Diffusion Models," accepted by NDSS 2025.
Diffence is a robust plug-and-play defense mechanism designed to enhance the membership privacy of both undefended models and models trained with state-of-the-art defenses, without compromising model utility.
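The plug-and-play idea can be sketched as follows: each input is purified before the (unchanged) classifier sees it. This is an illustrative sketch only — `diffusion_purify` below is a hypothetical stand-in (Gaussian noise plus rescaling), not the repository's actual diffusion sampling procedure.

```python
import random

random.seed(0)

def diffusion_purify(x, t=0.1):
    """Hypothetical stand-in for Diffence's purification step: perturb the
    input with noise, then rescale. In the real defense, a pretrained
    diffusion model performs the noising and denoising."""
    noisy = [v + t * random.gauss(0, 1) for v in x]
    scale = (1 + t * t) ** 0.5
    return [v / scale for v in noisy]

def defended_predict(model, x):
    """Plug-and-play wrapper: purify first, then call the unchanged model."""
    return model(diffusion_purify(x))

# Toy classifier: sign of the mean feature value.
model = lambda x: int(sum(x) / len(x) > 0)
print(defended_predict(model, [1.0] * 100))
```

Because the model itself is never retrained, the same wrapper can sit in front of an undefended model or one already trained with SELENA, AdvReg, HAMP, or RelaxLoss.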
- Clone the repository:

  ```shell
  git clone https://github.com/bujuef/Diffence.git
  cd Diffence
  ```
- Create a conda environment and install the dependencies:

  ```shell
  conda env create -f environment.yaml
  conda activate diffence-env
  ```

  If you do not have conda installed, follow the instructions in the official conda documentation.
- Navigate to the folder of the dataset to be tested, e.g., CIFAR-10:

  ```shell
  cd cifar10
  ```
- Download and partition the dataset:

  ```shell
  python data_partition.py
  ```
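Membership-inference evaluation requires a disjoint member/non-member split: the model is trained on one half, and the attacker must tell the two halves apart. The sketch below illustrates that idea only; `data_partition.py` defines the repository's actual split.

```python
import random

def partition_indices(n_total, n_members, seed=0):
    """Illustrative member/non-member split for MIA evaluation.
    Members are the indices the model trains on; non-members are held out.
    (Hypothetical helper; not the repo's actual partitioning code.)"""
    rng = random.Random(seed)
    idx = list(range(n_total))
    rng.shuffle(idx)
    return idx[:n_members], idx[n_members:]

# CIFAR-10-sized example: 50,000 samples split evenly.
members, non_members = partition_indices(50000, 25000)
print(len(members), len(non_members))  # 25000 25000
```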
- Obtain the diffusion model used for Diffence:

  We provide our pretrained diffusion model checkpoints here. Copy the `diff_models` folder to the corresponding `diff_defense` folder, e.g., `cifar10/diff_defense/diff_models`.

  (Optional) Train the diffusion model yourself using this repository.
- Obtain the undefended model and the models trained with existing defenses:

  Our pretrained models are available here. Copy them to the `final-all-models` folder, e.g., `cifar10/final-all-models/resnet/selena.pth.tar`.

  (Optional) You can retrain specific defended models using the commands listed in `all-train-all.sh`.
- Test model accuracy and membership privacy:

  ```shell
  cd evaluate_MIAs  # Navigate to the test script folder
  bash evaluate_mia.sh --defense [defense name]  # defense name in {undefended, selena, advreg, hamp, relaxloss}
  ```
After completion, the results will be saved in `Diffence/[dataset_name]/evaluate_MIAs/results`. For example, `selena` and `selena_w_diffence` correspond to the results of using the SELENA defense alone and of deploying Diffence on top of it, respectively.
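A common way to summarize membership privacy is the balanced accuracy of the best confidence-threshold attack: the attacker guesses "member" whenever the model's confidence exceeds a threshold, and 0.5 means they do no better than chance. The sketch below is illustrative only; `evaluate_mia.sh` runs the paper's stronger attacks.

```python
def mia_attack_accuracy(member_conf, nonmember_conf):
    """Balanced accuracy of the best confidence-threshold attack.
    Sweeps every observed confidence as a threshold and returns the
    attacker's best achievable guess rate (0.5 = perfect privacy).
    (Hypothetical helper; not the repository's evaluation code.)"""
    scores = member_conf + nonmember_conf
    labels = [1] * len(member_conf) + [0] * len(nonmember_conf)
    best = 0.5
    for t in set(scores):
        preds = [1 if s >= t else 0 for s in scores]
        acc = sum(p == y for p, y in zip(preds, labels)) / len(labels)
        best = max(best, acc, 1 - acc)  # attacker may invert the rule
    return best

# Members tend to receive higher confidence than non-members.
print(mia_attack_accuracy([0.99, 0.95, 0.90], [0.60, 0.55, 0.97]))
```

An effective defense such as Diffence narrows the confidence gap between members and non-members, pushing this number toward 0.5.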
The implementation of Diffence builds upon code from the following repositories:
- https://github.com/DependableSystemsLab/MIA_defense_HAMP
- https://github.com/JinyiW/GuidedDiffusionPur
We greatly appreciate the contributions from these works.