2024-12-23-afzal24a.md

---
title: A Comprehensive Benchmark of Supervised and Self-supervised Pre-training on Multi-view Chest X-ray Classification
abstract: Chest X-ray analysis in medical imaging has largely focused on single-view methods. However, recent advancements have led to the development of multi-view approaches that harness the potential of multiple views for the same patient. Although these methods have shown improvements, it is especially difficult to collect large multi-view labeled datasets owing to the prohibitive annotation costs and acquisition times. Hence, it is crucial to address the multi-view setting in the low data regime. Pre-training is a critical component to ensure efficient performance in this low data regime, as evidenced by its improvements in natural and medical imaging. However, in the multi-view setup, such pre-training strategies have received relatively little attention and ImageNet initialization remains largely the norm. We bridge this research gap by conducting an extensive benchmarking study illustrating the efficacy of 10 strong supervised and self-supervised models pre-trained on both natural and medical images for multi-view chest X-ray classification. We further examine the performance in the low data regime by training these methods on 1%, 10%, and 100% fractions of the training set. Moreover, our best models yield significant improvements compared to existing state-of-the-art multi-view approaches, outperforming them by as much as 9.9%, 8.8%, and 1.6% on the 1%, 10%, and 100% data fractions, respectively. We hope this benchmark will spur the development of stronger multi-view medical imaging models, similar to the role of such benchmarks in other computer vision and medical imaging domains. As open science, we make our code publicly available to aid in the development of stronger multi-view models.
layout: inproceedings
series: Proceedings of Machine Learning Research
publisher: PMLR
issn: 2640-3498
id: afzal24a
month: 0
tex_title: A Comprehensive Benchmark of Supervised and Self-supervised Pre-training on Multi-view Chest X-ray Classification
firstpage: 1
lastpage: 16
page: 1-16
order: 1
cycles: false
bibtex_author: Afzal, Muhammad Muneeb and Khan, Muhammad Osama and Fang, Yi
author:
- given: Muhammad Muneeb
  family: Afzal
- given: Muhammad Osama
  family: Khan
- given: Yi
  family: Fang
date: 2024-12-23
address:
container-title: Proceedings of The 7th International Conference on Medical Imaging with Deep Learning
volume: '250'
genre: inproceedings
issued:
  date-parts:
  - 2024
  - 12
  - 23
pdf:
extras:
---