--- You are on the main branch ---
This repository has two branches: main and ISBI. The main branch contains our work published in Medical Image Analysis; the ISBI branch contains our work presented at ISBI 2022 (finalist for the best paper award).
- Superficial white matter parcellation on real data: to use our pre-trained model to parcellate your own data, please use SupWMA_TwoStage.sh in the main branch. It runs the two-stage SupWMA model trained with contrastive learning. Please follow the instructions here.
- Train your own model: you can start with the ISBI branch, which provides code for one-stage training with and without contrastive learning. If you are also interested in two-stage training, the training code is in the main branch.
This repository releases the source code, pre-trained model, and testing sample for the work "Superficial White Matter Analysis: An Efficient Point-cloud-based Deep Learning Framework with Supervised Contrastive Learning for Consistent Tractography Parcellation across Populations and dMRI Acquisitions", which was accepted by Medical Image Analysis.
The contents of this repository are released under a Slicer license.
conda create --name SupWMA python=3.6.10
conda activate SupWMA
conda install pytorch==1.7.0 torchvision==0.8.0 torchaudio==0.7.0 cudatoolkit=10.1 -c pytorch
pip install git+https://github.com/SlicerDMRI/whitematteranalysis.git
pip install h5py
pip install scikit-learn
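If the installation succeeded, a quick sanity check like the one below should run without errors (a minimal sketch, not part of the repository; the expected torch version and CUDA availability depend on the install commands above and on your GPU driver):

# Sanity check for the SupWMA environment (illustrative).
import torch
import h5py
import whitematteranalysis as wma

print(torch.__version__)           # expected: 1.7.0
print(torch.cuda.is_available())   # True if cudatoolkit 10.1 matches your driver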
Train with our dataset
- Download TrainData_TwoStage.tar.gz (https://github.com/SlicerDMRI/SupWMA-TrainingData/releases) to ./, and tar -xzvf TrainData_TwoStage.tar.gz
- Run sh train_supwma_s1.sh && sh train_supwma_s2.sh
Your input streamline features should have shape (number_streamlines, number_points_per_streamline, 3), and the labels should have shape (number_streamlines,). You can save/load features and labels using .h5 files, as in the sketch below.
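For example, arrays with these shapes can be written and read with h5py (a minimal sketch; the dataset key names "features" and "labels" and the example sizes are illustrative, so match whatever keys the training scripts expect):

# Illustrative save/load of streamline features and labels as .h5 files.
import h5py
import numpy as np

number_streamlines, number_points_per_streamline = 10000, 15  # example sizes
features = np.random.rand(number_streamlines, number_points_per_streamline, 3).astype(np.float32)
labels = np.random.randint(0, 199, size=(number_streamlines,)).astype(np.int64)

with h5py.File('train.h5', 'w') as f:
    f.create_dataset('features', data=features)  # (number_streamlines, points, 3)
    f.create_dataset('labels', data=labels)      # (number_streamlines,)

with h5py.File('train.h5', 'r') as f:
    features, labels = f['features'][:], f['labels'][:]
print(features.shape, labels.shape)  # (10000, 15, 3) (10000,)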
You can start training on your custom dataset using the ISBI branch, which provides the code for one-stage training with and without contrastive learning. The training code in the main branch is for two-stage training with contrastive learning.
We calculated accuracy, precision, recall, and F1 score on 198 SWM clusters and one "non-SWM" cluster (199 classes in total). The "non-SWM" class consists of SWM outlier clusters and other streamlines (DWM).
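These metrics can be computed, for instance, with scikit-learn (a sketch that assumes macro averaging over the 199 classes; adjust the averaging to match how the reported numbers were aggregated):

# Illustrative evaluation over 199 classes (198 SWM clusters + non-SWM).
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

def evaluate(y_true, y_pred):
    # y_true, y_pred: integer labels in [0, 198], shape (number_streamlines,)
    acc = accuracy_score(y_true, y_pred)
    precision, recall, f1, _ = precision_recall_fscore_support(
        y_true, y_pred, labels=list(range(199)), average='macro', zero_division=0)
    return acc, precision, recall, f1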
We highly recommend using the pre-trained model here (two-stage with contrastive learning) to parcellate your own data.
- Install 3D Slicer (https://www.slicer.org) and SlicerDMRI (http://dmri.slicer.org).
- Download TrainedModels_TwoStage.tar.gz (https://github.com/SlicerDMRI/SupWMA/releases) to ./, and tar -xzvf TrainedModels_TwoStage.tar.gz
- Download TestData.tar.gz (https://github.com/SlicerDMRI/SupWMA/releases) to ./, and tar -xzvf TestData.tar.gz
- Run sh SupWMA_TwoStage.sh
Vtp files of the 198 superficial white matter clusters and one non-SWM cluster are in ./SupWMA-TwoStage_parcellation_results/[subject_id]/[subject_id]_prediction_clusters_outlier_removed. You can visualize them using 3D Slicer.
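You can also inspect the output clusters programmatically with whitematteranalysis (a sketch; "subject01" below is a placeholder subject ID):

# List result clusters and their streamline counts (illustrative).
import glob
import whitematteranalysis as wma

result_dir = './SupWMA-TwoStage_parcellation_results/subject01/subject01_prediction_clusters_outlier_removed'
for vtp_path in sorted(glob.glob(result_dir + '/*.vtp')):
    pd = wma.io.read_polydata(vtp_path)  # returns a vtkPolyData
    print(vtp_path, pd.GetNumberOfLines(), 'streamlines')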
Please cite the following papers when using the code and/or the training data:
Tengfei Xue, Fan Zhang, Chaoyi Zhang, Yuqian Chen, Yang Song, Alexandra J. Golby, Nikos Makris, Yogesh Rathi, Weidong Cai, and
Lauren J. O’Donnell. 2023. “Superficial White Matter Analysis: An Efficient Point-Cloud-Based Deep Learning Framework with
Supervised Contrastive Learning for Consistent Tractography Parcellation across Populations and dMRI Acquisitions.”
Medical Image Analysis 85: 102759.
Tengfei Xue, Fan Zhang, Chaoyi Zhang, Yuqian Chen, Yang Song, Nikos Makris, Yogesh Rathi, Weidong Cai, and Lauren J. O’Donnell.
SupWMA: Consistent and Efficient Tractography Parcellation of Superficial White Matter with Deep Learning.
In 2022 IEEE 19th International Symposium on Biomedical Imaging (ISBI), IEEE, 2022.
Fan Zhang, Ye Wu, Isaiah Norton, Yogesh Rathi, Nikos Makris, and Lauren J. O'Donnell.
An anatomically curated fiber clustering white matter atlas for consistent white matter tract parcellation across the lifespan.
NeuroImage 179: 429-447, 2018.
For projects using Slicer and SlicerDMRI please also include the following text (or similar) and citations:
- How to cite the Slicer platform
- An example of how to cite SlicerDMRI (modify the first part of the sentence according to your use case):
"We performed diffusion MRI tractography and/or analysis and/or visualization in 3D Slicer (www.slicer.org) via the SlicerDMRI project (dmri.slicer.org) (Norton et al. 2017)."
Fan Zhang, Thomas Noh, Parikshit Juvekar, Sarah F Frisken, Laura Rigolo, Isaiah Norton, Tina Kapur, Sonia Pujol, William Wells III, Alex Yarmarkovich, Gordon Kindlmann, Demian Wassermann, Raul San Jose Estepar, Yogesh Rathi, Ron Kikinis, Hans J Johnson, Carl-Fredrik Westin, Steve Pieper, Alexandra J Golby, Lauren J O’Donnell. SlicerDMRI: Diffusion MRI and Tractography Research Software for Brain Cancer Surgery Planning and Visualization. JCO Clinical Cancer Informatics 4, e299-309, 2020.
Isaiah Norton, Walid Ibn Essayed, Fan Zhang, Sonia Pujol, Alex Yarmarkovich, Alexandra J. Golby, Gordon Kindlmann, Demian Wassermann, Raul San Jose Estepar, Yogesh Rathi, Steve Pieper, Ron Kikinis, Hans J. Johnson, Carl-Fredrik Westin and Lauren J. O'Donnell. SlicerDMRI: Open Source Diffusion MRI Software for Brain Cancer Research. Cancer Research 77(21), e101-e103, 2017.