---
title: 'VariViT: A Vision Transformer for Variable Image Sizes'
abstract: 'Vision Transformers (ViTs) have emerged as the state-of-the-art architecture in representation learning, leveraging self-attention mechanisms to excel in various tasks. ViTs split images into fixed-size patches, constraining them to a predefined size and necessitating pre-processing steps like resizing, padding, or cropping. This poses challenges in medical imaging, particularly with irregularly shaped structures like tumors. A fixed bounding box crop size produces input images with highly variable foreground-to-background ratios. Resizing medical images can degrade information and introduce artifacts, impacting diagnosis. Hence, tailoring variable-sized crops to regions of interest can enhance feature representation capabilities. Moreover, large images are computationally expensive, and smaller sizes risk information loss, presenting a computation-accuracy tradeoff. We propose VariViT, an improved ViT model crafted to handle variable image sizes while maintaining a consistent patch size. VariViT employs a novel positional embedding resizing scheme for a variable number of patches. We also implement a new batching strategy within VariViT to reduce computational complexity, resulting in faster training and inference times. In our evaluations on two 3D brain MRI datasets, VariViT surpasses vanilla ViTs and ResNet in glioma genotype prediction and brain tumor classification. It achieves F1-scores of 75.5% and 76.3%, respectively, learning more discriminative features. Our proposed batching strategy reduces computation time by up to 30% compared to conventional architectures. These findings underscore the efficacy of VariViT in image representation learning.'
layout: inproceedings
series: Proceedings of Machine Learning Research
publisher: PMLR
issn: 2640-3498
id: varma24a
month: 0
tex_title: 'VariViT: A Vision Transformer for Variable Image Sizes'
firstpage: 1571
lastpage: 1583
page: 1571-1583
order: 1571
cycles: false
bibtex_author: Varma, Aswathi and Shit, Suprosanna and Prabhakar, Chinmay and Scholz, Daniel and Li, Hongwei Bran and Menze, Bjoern and Rueckert, Daniel and Wiestler, Benedikt
author:
- given: Aswathi
  family: Varma
- given: Suprosanna
  family: Shit
- given: Chinmay
  family: Prabhakar
- given: Daniel
  family: Scholz
- given: Hongwei Bran
  family: Li
- given: Bjoern
  family: Menze
- given: Daniel
  family: Rueckert
- given: Benedikt
  family: Wiestler
date: 2024-12-23
address:
container-title: Proceedings of The 7th International Conference on Medical Imaging with Deep Learning
volume: 250
genre: inproceedings
issued:
  date-parts:
  - 2024
  - 12
  - 23
pdf:
extras: []
---
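
The abstract mentions a positional embedding resizing scheme for a variable number of patches. The paper's exact scheme is not described in this metadata file, so the snippet below is only a minimal illustrative sketch of the general idea commonly used with ViTs: interpolating learned positional embeddings from the training patch grid to a new grid. The function name `resize_pos_embed`, the 3D grid sizes, and the use of PyTorch trilinear interpolation are assumptions for illustration, not VariViT's implementation.

```python
import torch
import torch.nn.functional as F

def resize_pos_embed(pos_embed: torch.Tensor,
                     old_grid: tuple[int, int, int],
                     new_grid: tuple[int, int, int]) -> torch.Tensor:
    """Interpolate learned 3D positional embeddings to a new patch-grid size.

    Hypothetical helper (not from the paper):
    pos_embed has shape (1, D*H*W, C), learned for old_grid = (D, H, W);
    the result has shape (1, D'*H'*W', C) for new_grid = (D', H', W').
    """
    _, n_tokens, channels = pos_embed.shape
    assert n_tokens == old_grid[0] * old_grid[1] * old_grid[2]
    # Reshape the token sequence back onto its 3D grid: (1, C, D, H, W).
    grid = pos_embed.reshape(1, *old_grid, channels).permute(0, 4, 1, 2, 3)
    # Trilinear interpolation to the new grid size.
    grid = F.interpolate(grid, size=new_grid, mode="trilinear", align_corners=False)
    # Flatten back to a token sequence: (1, D'*H'*W', C).
    return grid.permute(0, 2, 3, 4, 1).reshape(1, -1, channels)

# Example: embeddings trained for an 8x8x8 patch grid, reused for a 6x10x8 crop.
pos = torch.randn(1, 8 * 8 * 8, 384)
resized = resize_pos_embed(pos, (8, 8, 8), (6, 10, 8))
print(resized.shape)  # torch.Size([1, 480, 384])
```

The batching strategy referenced in the abstract (grouping inputs so that variable-sized crops can be processed efficiently) is likewise only summarized there; consult the paper in PMLR volume 250 (pages 1571-1583) for the actual method and its reported 30% computation-time reduction.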