Field | Value
---|---
title | Multimodal Image Registration Guided by Few Segmentations from One Modality
abstract | Registration of multimodal images is challenging, especially when dealing with different anatomical structures and samples without segmentations. The main difficulty arises from the use of registration loss functions that are inadequate in the absence of corresponding regions. In this work, we present the first registration and segmentation approach tailored to this challenge. In particular, we consider the practically relevant scenario in which only a limited number of segmentations are available for one modality and none for the other. First, we augment our few segmented samples using unsupervised deep registration within one modality, thereby providing many anatomically plausible samples to train a segmentation network. The resulting segmentation network then allows us to train a segmentation network on the target modality, for which no segmentations are available, using an unsupervised domain adaptation architecture. Finally, we train a deep registration network to register multimodal image pairs based purely on the predictions of these segmentation networks. Our work demonstrates that using a small number of segmentations from one modality enables training a segmentation network on a target modality without the need for additional manual segmentations on that modality. Additionally, we show that registration based on these segmentations provides smooth and accurate deformation fields on anatomically different image pairs, unlike previous methods. We evaluate our approach on 2D medical image segmentation and registration between knee DXA and X-ray images. Our experiments show that our approach outperforms existing methods. Code is available at https://github.com/uncbiag/SegGuidedMMReg.
layout | inproceedings
series | Proceedings of Machine Learning Research
publisher | PMLR
issn | 2640-3498
id | demir24a
month | 0
tex_title | Multimodal Image Registration Guided by Few Segmentations from One Modality
firstpage | 367
lastpage | 390
page | 367-390
order | 367
cycles | false
bibtex_author | Demir, Basar and Niethammer, Marc
author |
date | 2024-12-23
address |
container-title | Proceedings of The 7th International Conference on Medical Imaging with Deep Learning
volume | 250
genre | inproceedings
issued |
extras |