defense methods #12
Hi @tanlingp, are you asking how to evaluate anti-defense performance? Defense methods fall into multiple categories, such as adversarially trained models, defensive purification, etc. For adversarially trained models, you simply feed the generated adversarial examples into these models for assessment. For defenses like purification, on the other hand, you send your attack samples through the purification network first and then pass the outputs on to the downstream victim model. In the "Robustness on defensive approaches" section, we have compiled a list of source repositories for the defenses we tested. I recommend consulting those sources for more detailed information. Hope this helps.
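For illustration, here is a minimal sketch of both evaluation pipelines described above, assuming you already have adversarial examples saved and loadable via a standard PyTorch `DataLoader`. All names here (`adv_loader`, `purifier`, etc.) are hypothetical placeholders, not part of this repository's API; check each defense's own repository for its actual loading and preprocessing code.

```python
import torch

@torch.no_grad()
def eval_robust_accuracy(model, adv_loader, device="cuda", purifier=None):
    """Report a defended model's accuracy on pre-generated adversarial examples.

    model:      the defense model (e.g. an adversarially trained classifier)
    adv_loader: DataLoader yielding (adversarial_image, true_label) pairs
    purifier:   optional purification network applied before the victim model
    """
    model.eval()
    correct, total = 0, 0
    for adv_x, y in adv_loader:
        adv_x, y = adv_x.to(device), y.to(device)
        # For purification-style defenses, denoise/reconstruct the
        # adversarial inputs first, then classify with the victim model.
        if purifier is not None:
            adv_x = purifier(adv_x)
        preds = model(adv_x).argmax(dim=1)
        correct += (preds == y).sum().item()
        total += y.size(0)
    # Lower robust accuracy indicates a stronger attack against this defense.
    return correct / total
```

For an adversarially trained model you would call `eval_robust_accuracy(model, adv_loader)`; for a purification defense, `eval_robust_accuracy(victim_model, adv_loader, purifier=purifier)`.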
For the "Robustness on defensive approaches" section, should I run the original code? I'm not quite sure how to run it.
To run the code for the defenses listed in the "Robustness on defensive approaches" section, I would recommend checking the respective source repositories. The README files in these repositories usually provide detailed instructions on how to run their code.
I wanted to ask how to evaluate the generated samples against defense methods; perhaps my earlier question wasn't clear. Looking forward to your reply, thanks a million.