# Training your own weights

## Download training datasets

1. Download and extract the TotalSegmentator dataset to the `./training_data/total_segmentator` folder.

2. Download and extract the FLARE21 training dataset to the `./training_data/flare21` folder. You should end up with a `./training_data/flare21/TrainingImg` folder and a `./training_data/flare21/TrainingMask` folder (a quick layout check follows this list).
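
Once both datasets are extracted, you can sanity-check the layout with something like the following (the case folder names inside the TotalSegmentator dataset are examples, not an exhaustive description):

```bash
# Expected layout after extraction
ls training_data/total_segmentator        # one folder per case (e.g. s0011, s0012, ...)
ls training_data/flare21/TrainingImg      # FLARE21 training CT volumes
ls training_data/flare21/TrainingMask     # FLARE21 training segmentation masks
```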

## Set up the nnUNet environment

1. Create a folder `./nnunet_data/nnUNet_raw_data_base`
2. Create a folder `./nnunet_data/nnUNet_preprocessed`
3. Create a folder `./nnunet_data/nnUNet_trained_models` (all three folders can be created with the snippet below)
4. Run `create_nnunet_dataset.ipynb` to set up the nnUNet dataset
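
The three folders can be created in one command:

```bash
mkdir -p ./nnunet_data/nnUNet_raw_data_base \
         ./nnunet_data/nnUNet_preprocessed \
         ./nnunet_data/nnUNet_trained_models
```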

## Train the model

1. Set up environment variables:

   ```bash
   export nnUNet_raw_data_base="./nnunet_data/nnUNet_raw_data_base"
   export nnUNet_preprocessed="./nnunet_data/nnUNet_preprocessed"
   export RESULTS_FOLDER="./nnunet_data/nnUNet_trained_models"
   ```

2. Run planning and preprocessing:

   ```bash
   nnUNet_plan_and_preprocess -t 773 --verify_dataset_integrity
   ```

3. Train folds 0-4. You can use the `train.sh` script: pass the fold as the first argument and the GPU to train on as the second argument (see the sketch after this list).
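
If you prefer to call nnU-Net directly instead of going through `train.sh`, the underlying command looks roughly like this (a sketch that assumes task ID 773 and the `3d_fullres` configuration; check `train.sh` for the exact invocation used in this repo):

```bash
# Hypothetical equivalent of: ./train.sh <fold> <gpu>
FOLD=$1   # fold to train, 0-4
GPU=$2    # index of the GPU to train on
# --npz keeps the softmax outputs, which is optional but useful for ensembling folds later
CUDA_VISIBLE_DEVICES=$GPU nnUNet_train 3d_fullres nnUNetTrainerV2 773 $FOLD --npz
```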

## Fixing the TotalSegmentator dataset

If you get errors like `ITK ERROR: ITK only supports orthonormal direction cosines. No orthonormal definition found!`, there are two ways to resolve it.

The first option is to install `SimpleITK==2.0.2`; however, that version of SimpleITK is not available if you are using Python newer than 3.7. The second option is to fix the dataset using the steps below.

1. Build the Docker image:

   ```bash
   docker build -t fix_total_segmentator:latest ./TotalSegFix
   ```

2. Run the Docker image:

   ```bash
   docker run --rm -v $(pwd)/training_data/total_segmentator:/data -v $(pwd)/training_data/total_segmentator_fix:/out_data -u $(id -u ${USER}):$(id -g ${USER}) fix_total_segmentator:latest
   ```

3. Copy the fixed dataset back to the `training_data` folder, or use the new `training_data/total_segmentator_fix` folder instead of `training_data/total_segmentator` in the training instructions above (a sketch of the replace step follows this list).
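
If you choose to replace the original dataset with the fixed copy, that step can be as simple as the following (a sketch; skip it if you prefer to keep both folders and use the `_fix` path directly):

```bash
# Replace the original dataset with the fixed copy produced by the container
rm -rf training_data/total_segmentator
mv training_data/total_segmentator_fix training_data/total_segmentator
```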

## Using your model weights

You can use `nnUNet_predict` with your existing nnUNet environment. To use DICOM as input or output instead of NIfTI, refer to the `container/run.py` script for an example.
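
A minimal NIfTI-to-NIfTI inference sketch (the input and output folder names are placeholders; the task ID and `3d_fullres` configuration are assumed from the training steps above):

```bash
# Point nnU-Net at the trained weights and run inference on a folder of NIfTI files
export RESULTS_FOLDER="./nnunet_data/nnUNet_trained_models"
nnUNet_predict -i ./input_nifti -o ./predictions -t 773 -m 3d_fullres
```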