Try an adaptive attack? Try a larger perturbation budget? #1
Hi! Thank you for your comment. In our paper, we are quite explicit that we do not believe this problem has a technical solution, and thus that a "good defense" will never exist. Quoting from our paper:
Similarly, adaptive protections against our attacks will only provide a false sense of security until someone comes up with a new method that circumvents those. As we said above, since the artists act first, they are necessarily at a disadvantage.
In any case, the burden of proof should be on the protections that claim to protect artists, not on the robustness analysis we performed.
We stick to that $\epsilon$-ball to match previous studies on these tools. I am not sure what you mean by "removing the perturbation for authorized users"; my intuition is that these methods also remove larger perturbations. Glaze is a black-box protection, so we do not know what perturbation size it uses. Our methods worked against its strongest protection, which, judging by the perturbation strength, is likely to be larger than …
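For context, the $\epsilon$-ball discussed above is a per-pixel $L_\infty$ bound on the perturbation. A minimal sketch of what that constraint means, assuming images are float arrays in $[0, 1]$ (`project_linf` is an illustrative helper, not code from the paper):

```python
import numpy as np

def project_linf(x_adv: np.ndarray, x_orig: np.ndarray, eps: float = 8 / 255) -> np.ndarray:
    """Project a perturbed image back into the L-infinity ball of radius
    eps around the original image, then clip to the valid pixel range."""
    delta = np.clip(x_adv - x_orig, -eps, eps)  # each pixel may change by at most eps
    return np.clip(x_orig + delta, 0.0, 1.0)    # keep pixel values in [0, 1]
```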
I see. Your main idea is that the old version of a protected artwork is the most vulnerable part of the whole protection, since the old perturbation can be saved and later broken by new circumvention methods. I agree this is true for real facial images, as one's face cannot change over time. However, the adversary imitates artworks in order to make a profit. How can you guarantee that a new circumvention method will be ready before the protected artworks are outdated? For example, suppose an artist creates a popular cartoon character (AA) and uses new perturbations to protect AA from being imitated. By the time the adversary manages to bypass the perturbation, AA may be outdated, and imitating AA earns nothing. I completely agree that current protective perturbations can be bypassed in the future. However, if the effective lifetime of the protective perturbation is longer than the lifecycle of the protected artwork, then the perturbation is useful. To conclude, the protective perturbation aims to raise the cost of malicious imitation, not to prevent it until the end of the world.
From my perspective, this paper only discusses attacks that do not consider the proposed defenses. Are these purification methods themselves robust? Do they guarantee usable outputs when they are themselves under attack? How about designing adaptive attacks against IMPRESS++, DiffPure, and Noisy Upscaling?
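To illustrate the kind of adaptive attack being asked about: one common approach is to optimize the protective perturbation through the purifier itself, using BPDA (backward-pass differentiable approximation) when the purifier is not differentiable. A hedged sketch in PyTorch; `purify` and `protection_loss` are hypothetical stand-ins, not the actual APIs of IMPRESS++, DiffPure, or Noisy Upscaling:

```python
import torch

def adaptive_protect(x, purify, protection_loss, eps=8 / 255, alpha=2 / 255, steps=40):
    """Craft a protective perturbation that survives purification, via PGD
    with BPDA: purify() runs on the forward pass, but gradients flow as if
    it were the identity function (a straight-through estimator)."""
    x_adv = x.clone()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        purified = purify(x_adv.detach())            # non-differentiable purifier
        bpda = x_adv + (purified - x_adv).detach()   # forward: purified; backward: identity
        loss = protection_loss(bpda)                 # protection objective after purification
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()           # ascend: keep protection strong
            x_adv = x + (x_adv - x).clamp(-eps, eps)      # stay inside the eps-ball
            x_adv = x_adv.clamp(0.0, 1.0)                 # stay in valid pixel range
    return x_adv.detach()
```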
Also, this paper only tested $\epsilon = 8/255$. How about a larger perturbation budget? I know this will lower the image quality. However, since the artist has a copy of the perturbation, they can remove it for authorized users and refuse unauthorized ones, which can prevent unwanted imitation and help track leaks. Can these purification methods remove larger perturbations?
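One way to read the "removal for authorized users" proposal: the artist keeps the clean image and the perturbation private, publishes only the protected version, and hands the perturbation to authorized parties. A hypothetical sketch (not from the paper or any existing protection tool):

```python
import numpy as np

def protect(x_clean: np.ndarray, delta: np.ndarray) -> np.ndarray:
    """Publish the protected image; the artist keeps x_clean and delta private."""
    return np.clip(x_clean + delta, 0.0, 1.0)

def restore_for_authorized(x_protected: np.ndarray, delta: np.ndarray) -> np.ndarray:
    """An authorized user holding delta can approximately undo the protection.
    Clipping in protect() makes the recovery inexact for saturated pixels."""
    return np.clip(x_protected - delta, 0.0, 1.0)
```

In practice the artist could simply share the clean original with authorized users; a per-user delta would mainly serve the leak-tracking purpose the commenter mentions.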