2021-07-21-blum21a.md

---
title: Robust learning under clean-label attack
abstract: We study the problem of robust learning under clean-label data-poisoning
  attacks, where the attacker injects (an arbitrary set of) \emph{correctly-labeled}
  examples to the training set to fool the algorithm into making mistakes on
  \emph{specific} test instances at test time. The learning goal is to minimize the
  attackable rate (the probability mass of attackable test instances), which is more
  difficult than optimal PAC learning. As we show, any robust algorithm with a
  diminishing attackable rate can achieve the optimal dependence on $\epsilon$ in
  its PAC sample complexity, i.e., $O(1/\epsilon)$. On the other hand, the attackable
  rate might be large even for some optimal PAC learners, e.g., SVM for linear
  classifiers. Furthermore, we show that the class of linear hypotheses is not
  robustly learnable when the data distribution has zero margin, and is robustly
  learnable in the case of positive margin but requires sample complexity exponential
  in the dimension. For a general hypothesis class with bounded VC dimension, if the
  attacker is limited to adding at most $t=O(1/\epsilon)$ poison examples, the
  optimal robust learning sample complexity grows linearly with $t$.
layout: inproceedings
series: Proceedings of Machine Learning Research
publisher: PMLR
issn: 2640-3498
id: blum21a
month: 0
tex_title: Robust learning under clean-label attack
firstpage: 591
lastpage: 634
page: 591-634
order: 591
cycles: false
bibtex_author: Blum, Avrim and Hanneke, Steve and Qian, Jian and Shao, Han
author:
- given: Avrim
  family: Blum
- given: Steve
  family: Hanneke
- given: Jian
  family: Qian
- given: Han
  family: Shao
date: 2021-07-21
address:
container-title: Proceedings of Thirty Fourth Conference on Learning Theory
volume: '134'
genre: inproceedings
issued:
  date-parts:
  - 2021
  - 7
  - 21
pdf:
extras: []
---