2024-12-23-kulkarni24a.md

---
title: 'Hidden in Plain Sight: Undetectable Adversarial Bias Attacks on Vulnerable Patient Populations'
abstract: The proliferation of artificial intelligence (AI) in radiology has shed light on the risk of deep learning (DL) models exacerbating clinical biases towards vulnerable patient populations. While prior literature has focused on quantifying the biases exhibited by trained DL models, demographically targeted adversarial bias attacks on DL models and their implications in the clinical environment remain an underexplored area of research in medical imaging. In this work, we demonstrate that demographically targeted label poisoning attacks can introduce undetectable underdiagnosis bias in DL models. Our results across multiple performance metrics and demographic groups such as sex, age, and their intersectional subgroups show that adversarial bias attacks are highly selective for bias in the targeted group, degrading that group's model performance without impacting overall model performance. Furthermore, our results indicate that adversarial bias attacks result in biased DL models that propagate prediction bias even when evaluated with external datasets.
layout: inproceedings
series: Proceedings of Machine Learning Research
publisher: PMLR
issn: 2640-3498
id: kulkarni24a
month: 0
tex_title: 'Hidden in Plain Sight: Undetectable Adversarial Bias Attacks on Vulnerable Patient Populations'
firstpage: 793
lastpage: 821
page: 793-821
order: 793
cycles: false
bibtex_author: Kulkarni, Pranav and Chan, Andrew and Navarathna, Nithya and Chan, Skylar and Yi, Paul and Parekh, Vishwa Sanjay
author:
- given: Pranav
  family: Kulkarni
- given: Andrew
  family: Chan
- given: Nithya
  family: Navarathna
- given: Skylar
  family: Chan
- given: Paul
  family: Yi
- given: Vishwa Sanjay
  family: Parekh
date: 2024-12-23
address:
container-title: Proceedings of The 7th International Conference on Medical Imaging with Deep Learning
volume: '250'
genre: inproceedings
issued:
  date-parts:
  - 2024
  - 12
  - 23
pdf:
extras:
---