Implementation of "Adversarial Scratches: Deployable Attacks to CNN Classifiers"
To run an experiment, call:
/code/experiments.py <dataset_name> <num_samples> <batch_size> <query_limit> <target_model_name> <attack_name> <attack_params> <optimizer> <optimizer_params> <output_path> <bool_use_wandb>
For example, to run the standard attack on ImageNet:
/code/experiments.py imagenet 100 100 10000 torch bezier 133-1-3 cut-ngo 0 test false
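
The eleven positional arguments are supplied in the order shown in the usage line. For orientation only, here is a minimal, hypothetical sketch of how experiments.py might parse this command line; the argument names mirror the placeholders above, while the types and help strings are assumptions inferred from the example invocation, not the repository's actual code.

import argparse

def parse_args():
    # Hypothetical parser; argument names follow the placeholders in the
    # usage line above, types and help texts are assumptions.
    parser = argparse.ArgumentParser(description="Run an adversarial-scratch experiment")
    parser.add_argument("dataset_name", help="e.g. imagenet")
    parser.add_argument("num_samples", type=int, help="number of images to attack")
    parser.add_argument("batch_size", type=int)
    parser.add_argument("query_limit", type=int, help="maximum queries to the target model")
    parser.add_argument("target_model_name", help="e.g. torch")
    parser.add_argument("attack_name", help="e.g. bezier")
    parser.add_argument("attack_params", help="dash-separated parameters, e.g. 133-1-3")
    parser.add_argument("optimizer", help="e.g. cut-ngo")
    parser.add_argument("optimizer_params")
    parser.add_argument("output_path")
    parser.add_argument("bool_use_wandb", choices=["true", "false"],
                        help="whether to log the run to Weights & Biases")
    return parser.parse_args()

if __name__ == "__main__":
    print(parse_args())
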
For licensing reasons, we do not include the full TSRD dataset in this repository. However, we provide our segmentation masks along with instructions on how to merge them with the dataset, which you can download from http://www.nlpr.ia.ac.cn/pal/trafficdata/recognition.html
In your file structure, you should have:
root_folder
|_ datasets
   |_ tsrd (place here the downloaded files)
   |_ tsrd_mask
      |_ labels.txt (provided)
      |_ additionalmask (provided)
      |_ test (put here the tsrd images as per the file in the folder)
      |_ test_byclass (put here the tsrd images as per the file in the folder)
         |_ 000
         |_ 001
         |_ ...
         |_ 057
      |_ testmask (provided)
      |_ train (put here the tsrd images as per the file in the folder)
         |_ 000
         |_ 001
         |_ ...
         |_ 057
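
The test, test_byclass, and train folders are populated from the TSRD download according to the file lists provided in each folder. If it helps to automate the per-class split, the following is a hypothetical helper (not part of the repository) that copies images into the 000-057 subfolders, under the assumption that each TSRD filename begins with its three-digit class index; adjust the paths and the prefix logic if your copy of the dataset is organized differently.

import shutil
from pathlib import Path

def sort_by_class(src_dir: str, dst_dir: str) -> None:
    # Copy every image in src_dir into dst_dir/<class_id>/, where class_id is
    # assumed to be the first three digits of the filename (e.g. 014_...).
    src, dst = Path(src_dir), Path(dst_dir)
    for image in src.iterdir():
        if image.suffix.lower() not in {".png", ".jpg", ".jpeg"}:
            continue
        class_id = image.name[:3]
        if not class_id.isdigit():
            continue  # skip files that do not follow the assumed naming scheme
        target = dst / class_id
        target.mkdir(parents=True, exist_ok=True)
        shutil.copy2(image, target / image.name)

if __name__ == "__main__":
    # Example call; the paths assume the layout above and may need adjusting.
    sort_by_class("datasets/tsrd", "datasets/tsrd_mask/test_byclass")
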