| field | value |
| --- | --- |
| title | Robust learning under clean-label attack |
| abstract | We study the problem of robust learning under clean-label data-poisoning attacks, where the attacker injects (an arbitrary set of) \emph{correctly-labeled} examples into the training set to fool the algorithm into making mistakes on \emph{specific} test instances at test time. The learning goal is to minimize the attackable rate (the probability mass of attackable test instances), which is more difficult than optimal PAC learning. As we show, any robust algorithm with diminishing attackable rate can achieve the optimal dependence on $\epsilon$ in its PAC sample complexity, i.e., $O(1/\epsilon)$. |
| layout | inproceedings |
| series | Proceedings of Machine Learning Research |
| publisher | PMLR |
| issn | 2640-3498 |
| id | blum21a |
| month | 0 |
| tex_title | Robust learning under clean-label attack |
| firstpage | 591 |
| lastpage | 634 |
| page | 591-634 |
| order | 591 |
| cycles | false |
| bibtex_author | Blum, Avrim and Hanneke, Steve and Qian, Jian and Shao, Han |
| author | |
| date | 2021-07-21 |
| address | |
| container-title | Proceedings of Thirty Fourth Conference on Learning Theory |
| volume | 134 |
| genre | inproceedings |
| issued | |
| extras | |
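The abstract defines the attackable rate only informally. A minimal LaTeX sketch of that quantity, assuming the standard clean-label poisoning setup (target concept $f^\star$, data distribution $\mathcal{D}$, training sample $S$, learner $\mathcal{A}$); these symbols are illustrative and not necessarily the paper's exact notation:

```latex
% Hedged sketch: the "attackable rate" as described in the abstract.
% All notation (f^\star, \mathcal{D}, S, \mathcal{A}, P) is assumed for illustration.
\[
  \mathrm{atk}(f^\star, S, \mathcal{A})
    = \Pr_{x \sim \mathcal{D}}\!\left[
        \exists\, P \subseteq \{(x', f^\star(x')) : x' \in \mathcal{X}\} :\;
        \mathcal{A}(S \cup P)(x) \neq f^\star(x)
      \right]
\]
% A test point x is attackable if some set P of correctly labeled
% ("clean-label") poison examples makes the trained classifier err on x;
% the attackable rate is the D-probability mass of such points.
```

Under this reading, a robust learner is one for which $\mathbb{E}_{S \sim \mathcal{D}^n}\!\left[\mathrm{atk}(f^\star, S, \mathcal{A})\right] \to 0$ as $n \to \infty$, and the abstract's claim is that any such learner also attains the optimal $O(1/\epsilon)$ dependence in its PAC sample complexity.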