OpenKBP Grand Challenge

The open-kbp repository provides the code that was originally intended to help participants get started with developing dose prediction models for the OpenKBP Challenge, which is summarized in our paper. The repository has since been repurposed to provide our community with an open framework for developing dose prediction methods. Note that researchers who are interested in developing their own plan optimization methods should refer to the open-kbp-opt repository.

Advice: The repository can be used on either a local machine or in the cloud (for free) using Google Colab. Google Colab is a great way to compete in OpenKBP without putting a burden on your existing hardware. The service provides high-quality CPUs and GPUs for free; however, sessions are limited to 12 consecutive hours [Frequently asked questions].

Citation

Please use our paper as the citation for this dataset or code repository:

A. Babier, B. Zhang, R. Mahmood, K.L. Moore, T.G. Purdie, A.L. McNiven, T.C.Y. Chan, "OpenKBP: The open-access knowledge-based planning grand challenge and dataset," Medical Physics, Vol. 48, pp. 5549-5561, 2021.

Data

The details of the provided data are available in our paper OpenKBP: The open-access knowledge-based planning grand challenge and dataset. In short, we provide data for 340 patients who were treated for head-and-neck cancer with intensity modulated radiation therapy. The data is split into training (n=200), validation (n=40), and testing (n=100) sets. Every patient in these datasets has a dose distribution, CT images, structure masks, a feasible dose mask (i.e., mask of where dose can be non-zero), and voxel dimensions.
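
The provided DataLoader class (described below) handles all file reading, but for reference, the minimal sketch below reads a single patient file. It assumes each CSV stores a sparse representation of a 128×128×128 voxel grid (flat voxel indices plus a value column), so treat the column handling as illustrative rather than as the repository's exact format.

    import numpy as np
    import pandas as pd

    def load_sparse_volume(csv_path, shape=(128, 128, 128)):
        """Read one sparse patient CSV into a dense 3D array (assumed format).

        Assumes flat voxel indices sit in the first (index) column and values
        in a 'data' column; structure masks are assumed to list indices only.
        """
        sparse = pd.read_csv(csv_path, index_col=0)
        volume = np.zeros(int(np.prod(shape)))
        values = sparse['data'].values if 'data' in sparse.columns else 1.0
        volume[sparse.index.values] = values
        return volume.reshape(shape)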

What this code does

This code will train a small neural network to predict dose. Five Python files are required to run the main_notebook.ipynb and main.py files. Below, we summarize the functionality of each file, and a rough usage sketch follows the list; more details are provided in the files themselves.

  • data_loader.py: Contains the DataLoader class, which loads the data from the dataset in a standard format. Several data formats (e.g., dose-volume histogram) are available to cater to different modeling techniques.
  • dose_evaluation_class.py: Contains the EvaluateDose class, which is used to evaluate the competition metrics.
  • general_functions.py: Contains several utility functions that serve a variety of purposes.
  • network_architectures.py: Contains the DefineDoseFromCT class, which builds the architecture for a basic U-Net model. This class is inherited by the PredictionModel class. Please note that we intentionally included a network architecture that is not state-of-the-art. It is only included to serve as a placeholder for your more sophisticated models.
  • network_functions.py: Contains the PredictionModel class, which applies a series of methods on a model constructed in DefineDoseFromCT.
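
The sketch below shows roughly how these classes fit together. The class names come from the files above, but the constructor arguments and method names (e.g., train_model, predict_dose) are hypothetical placeholders; see main.py and main_notebook.ipynb for the exact calls.

    # Illustrative only: class names match the files above, but the method
    # names and arguments are hypothetical placeholders, not the exact API.
    from data_loader import DataLoader
    from network_functions import PredictionModel
    from dose_evaluation_class import EvaluateDose

    train_loader = DataLoader('open-kbp/provided-data/train-pats')   # training patients
    model = PredictionModel(train_loader, model_name='baseline')     # wraps the U-Net built in DefineDoseFromCT
    model.train_model(epochs=2)                                      # short run to confirm the pipeline works

    valid_loader = DataLoader('open-kbp/provided-data/valid-pats')   # validation patients
    model.predict_dose(valid_loader)                                 # writes results/baseline/validation-predictions
    print(EvaluateDose(valid_loader).make_metrics())                 # competition dose and DVH metrics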

Prerequisites

The following are required to run the given notebook; however, you may use any hardware or software you'd like.

For running on Google Colab

  • Standard Google account

For running on a local machine

  • Linux
  • Python 3.10.9
  • NVIDIA GPU with CUDA and cuDNN (recommended)

Created folder structure

This repository will create a file structure that branches from a directory called open-kbp. The file structure keeps information about the predictions from a model (called baseline in this example) and the model itself in the results directory. All the data from the OpenKBP competition (with the original train/validation/test splits) is available under the directory called provided-data. The code also makes a directory called submissions to house the zip files that can be submitted to the leaderboards on CodaLab; a sketch of building such a zip appears after the tree. Use this folder tree as a reference (it will more or less build itself).

open-kbp
├── provided-data
│   ├── train-pats
│   │   ├── pt_*
│   │       ├── *.csv
│   ├── valid-pats
│   │   ├── pt_*
│   │       ├── *.csv
│   └── test-pats
│       ├── pt_*
│           ├── *.csv
├── results
│   ├── baseline
│   │   ├── models
│   │   │   ├── epoch_*.h5
│   │   ├── validation-predictions
│   │   │   ├── pt_*.csv
│   │   └── test-predictions
│   │       ├── pt_*.csv
│   ├── **Structure repeats when new model is made**
└── submissions
    ├── baseline.zip
    ├── **Structure repeats when new model is made**   
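
The provided code builds these submission archives itself; the sketch below simply mirrors the layout above in case you want to rebuild a zip by hand. The helper name and the exact contents expected by CodaLab are assumptions, so treat it as a reference rather than the canonical packaging step.

    import shutil
    from pathlib import Path

    # Hypothetical helper: bundle a model's test predictions into
    # submissions/<model>.zip, matching the tree shown above.
    def make_submission(model_name='baseline', root=Path('open-kbp')):
        predictions = root / 'results' / model_name / 'test-predictions'
        archive = root / 'submissions' / model_name
        archive.parent.mkdir(parents=True, exist_ok=True)
        shutil.make_archive(str(archive), 'zip', predictions)

    make_submission()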

Getting started

Below, we provide instructions for setting up this repository in Google Colab and on a local machine.

Getting started in Colab

This should be the simplest way to compete in OpenKBP because the software required for dose prediction is installed in the cloud. It also means you can be competitive in OpenKBP without expensive hardware.

  1. Head to Colab
  2. Select 'GitHub' → paste the link to main_notebook.ipynb → ENTER → click the file name
  3. In the Google Colab toolbar select: Runtime → Change runtime type. In the popup that opens, ensure the runtime type is Python 3 and the hardware accelerator is GPU.

You're all set for executing the code.

Getting started on a local machine

  1. Make a virtual environment and activate it

    virtualenv -p python3 open-kbp-venv
    source open-kbp-venv/bin/activate
    
  2. Clone this repository, navigate to its directory, and install the requirements. Note that to run TensorFlow 2.1 with a GPU, you may need to build TensorFlow 2.1 from source. The official instructions to build from source are here, but I found the third-party guide here more useful. A quick GPU check is included after the commands below.

    git clone https://github.com/ababier/open-kbp
    cd open-kbp
    pip3 install -r requirements.txt
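
After the requirements are installed, you can optionally confirm that TensorFlow detects your GPU; this quick check is not part of the repository:

    import tensorflow as tf

    # An empty list here means TensorFlow will fall back to the CPU.
    print(tf.config.list_physical_devices('GPU'))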
    

Running the code

Running the code on either platform should be straightforward. Any errors are likely the result of data being in an unexpected directory. If the code is running correctly, the training progress of the neural network should print to an output cell (Colab) or the command line (local machine).

Running the code in Colab

In the Google Colab toolbar select: Runtime → Run all; you can also use the key binding <Ctrl+F9>.

OR

Run each cell individually by clicking the play button in each cell; you can also use the key binding <Shift+Enter> to run a highlighted cell.

Running the code on a local machine

Run the main file in your newly created virtual environment:

    python3 main.py

Alternatively, you may run the notebook locally in Jupyter Notebook or JupyterLab, but only after commenting out the commands related to Google Drive and changing the paths for where the provided data is stored and where the results are saved.

Competition results

The OpenKBP Challenge attracted 195 participants from 28 countries. The competition started on February 21, 2020 and concluded on June 1, 2020. A total of 1750 submissions were made to the validation phase by the 44 teams (consisting of 73 people) who made at least one submission. In the testing phase, 28 teams (consisting of 54 people) made submissions. The top teams in this competition are highlighted below. Note that the dose for patients in the validation and testing data was only published on June 23, 2020, after the Challenge concluded.

First place

Dose and DVH Stream: Shuolin Liu, Jingjing Zhang, Teng Li, Hui Yan, Jianfei Liu, LSL AnHui University, Anhui University, China. [GitHub Repository] [Paper]

Runners-up

Dose Stream: Mary P. Gronberg, Skylar S. Gay, Tucker J. Netherton, Dong Joo Rhee, Laurence E. Court, Carlos E. Cardenas, SuperPod, MD Anderson Cancer Center, United States. [Paper]

DVH Stream: Lukas Zimmermann, Erik Faustmann, Christian Ramsl, Dietmar Georg, Gerd Heilemann, PTV - Prediction Team Vienna, Medical University of Vienna, Austria. [Paper]

Final testing phase leaderboard

This leaderboard contains the final results of this challenge, which is the first controlled and blinded test of KBP method implementations from several institutions.

Researchers can still sign up and submit to a live leaderboard on CodaLab. However, since the results are no longer blinded, there is no way to ensure the validation and test sets were used as intended (i.e., without peeking at the test data).

Sample research that uses the OpenKBP dataset

Competition organizers

OpenKBP was organized by Aaron Babier, Binghao Zhang, Rafid Mahmood, and Timothy Chan (University of Toronto, Canada); Andrea McNiven and Thomas Purdie (Princess Margaret Cancer Center, Canada); Kevin Moore (UC San Diego, USA). This challenge was supported by The American Association of Physicists in Medicine.