Radiology-and-AI

Repository for the McMaster AI Society Radiology-and-AI projects team, focused on training and applying AI models for segmenting and characterizing brain tumours from 3D neuroimaging data. This repository holds the Tumour Segmentation training, evaluation, and visualization tools we have created. It also holds the documentation and tutorials we have written to assist in using the tools and to further our understanding of radiology and AI research.

Installation

To use our tools, clone this repository and the MedicalZooPytorch repository (https://github.com/black0017/MedicalZooPytorch) into the same folder. Install the required packages by navigating into each folder and running pip install -r requirements.txt. Next, navigate into the Radiology-and-AI directory. By following the examples in the Examples folder and the usage.ipynb notebook in the Notebooks folder, you can use our training, evaluation, and visualization tools. When running our code, be sure to add the Radiology_and_AI and MedicalZooPytorch folders to your path using the sys.path.append('../Radiology_and_AI') and sys.path.append('../../MedicalZooPytorch') commands (the exact paths depend on the directory you are running the code from and where you cloned the repositories).
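For example, a minimal path-setup snippet (assuming both repositories were cloned side by side as described above and that you are running from a subdirectory of Radiology-and-AI; adjust the relative paths to your own layout):

```python
import sys

# Make the Radiology_and_AI and MedicalZooPytorch packages importable.
# The relative paths below assume both repositories sit next to each other
# and that this script runs from inside the Radiology-and-AI repository;
# adjust them to match your own directory layout.
sys.path.append('../Radiology_and_AI')
sys.path.append('../../MedicalZooPytorch')
```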

Features

Training and Evaluation Tools

Using our Training Tool and Evaluation Tool, you can easily train and evaluate a Tumour Segmentation model and apply many types of augmentations and normalizations. For usage information, refer to the usage.ipynb tutorial in the Notebooks folder or to the eval_example.py and training_example.py files in the Examples folder.
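As an illustration of the kind of intensity normalization such a preprocessing pipeline typically applies, here is a generic sketch using NumPy; it is not the repository's own implementation, and the function and variable names are ours:

```python
import numpy as np

def zscore_normalize(volume: np.ndarray) -> np.ndarray:
    """Z-score normalize a 3D MRI volume over its non-zero (brain) voxels.

    A common BraTS-style preprocessing step; illustrative only, not the
    code used in this repository.
    """
    mask = volume > 0                      # ignore the zero background
    mean = volume[mask].mean()
    std = volume[mask].std()
    normalized = np.zeros_like(volume, dtype=np.float32)
    normalized[mask] = (volume[mask] - mean) / (std + 1e-8)
    return normalized

# Example: normalize a random stand-in for a 240x240x155 BraTS volume.
fake_volume = np.random.rand(240, 240, 155).astype(np.float32)
print(zscore_normalize(fake_volume).std())
```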

Visualization Tool

Using our Visualization Tool, it is simple to generate informative graphics and output the segmentations your model produces. The Visualization Tool can generate 2D slices and 3D GIFs of your true and predicted segmentations, as well as output NIfTI files of the transformed input data and segmentations. Below are some examples of graphics generated by our Visualization Tool using models we trained with the Training Tool.

[Example segmentation visualizations for case BraTS20_Training_043]
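If you only need a quick 2D look at a segmentation exported as NIfTI, a generic slice viewer can be put together with nibabel and matplotlib. This is a minimal sketch, not the repository's Visualization Tool itself, and the file names are placeholders:

```python
import matplotlib.pyplot as plt
import nibabel as nib
import numpy as np

# Placeholder paths: swap in the NIfTI files exported by the visualization tool.
scan = nib.load("transformed_input.nii.gz").get_fdata()
seg = nib.load("predicted_segmentation.nii.gz").get_fdata()

slice_idx = scan.shape[2] // 2  # middle axial slice
plt.imshow(scan[:, :, slice_idx].T, cmap="gray", origin="lower")
# Overlay the segmentation, hiding background (label 0) voxels.
overlay = np.ma.masked_where(seg[:, :, slice_idx] == 0, seg[:, :, slice_idx])
plt.imshow(overlay.T, cmap="autumn", alpha=0.5, origin="lower")
plt.axis("off")
plt.show()
```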

Research Motivation

The below is information relating to the original motivating reason for taking on this project.

Proposed Models

Summary of ML models

The intended tasks include:

  • Automatic segmentation of brain metastases/tumors and edema (swelling)
  • Prediction of tumor type (multiclass classification)
  • Prediction of immunotherapy response (binary classification)
  • Prediction of tumor pseudoprogression vs true tumor progression

The models intended for these tasks are based on deep convolutional neural network architectures, including deep residual networks, and transfer learning will be used. It is expected that this large dataset will result in superior performance compared to previous similar work.
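As an illustration of what transfer learning with a residual network can look like in PyTorch, here is a generic sketch under our own assumptions (single-channel 2D slices and four tumour-type classes); it is not the project's actual model code:

```python
import torch.nn as nn
import torchvision

# Start from an ImageNet-pretrained residual network.
model = torchvision.models.resnet18(weights=torchvision.models.ResNet18_Weights.DEFAULT)

# Freeze the pretrained backbone so only the new layers are trained at first.
for param in model.parameters():
    param.requires_grad = False

# Assumption: single-channel MRI slices instead of 3-channel RGB input.
model.conv1 = nn.Conv2d(1, 64, kernel_size=7, stride=2, padding=3, bias=False)

# Assumption: four tumour-type classes for the multiclass prediction task.
model.fc = nn.Linear(model.fc.in_features, 4)
```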

Project Background

Between 9% and 50% of patients with malignant tumours develop brain metastases, and 30% of these patients do so before the primary cancer is known. For these patients it is essential to identify the primary tumour quickly and accurately for staging and treatment planning. Current methods for doing so are lacking, and previous work has demonstrated that Artificial Intelligence models show strong ability in differentiating between tumour types. Artificial Intelligence can take into account features of tumours that are not perceptible to the human eye and that current diagnostic techniques ignore, demonstrating the technology's potential to give radiologists and clinicians the ability to more accurately predict the primary tumour in patients with brain metastases.

Recent studies have also shown that Artificial Intelligence has the potential not only to determine primary tumour types but also to predict tumour response to radiotherapy. Identifying the patients most likely to benefit from costly and difficult treatments such as immunotherapy would also be of great benefit to patients and radiologists alike.

Additionally, patients diagnosed with glioblastomas often face challenges in treatment because changes in tumour imaging can present an ambiguous picture of cancer progression versus improvement. A tool that can differentiate patients undergoing true tumour progression from those showing so-called "pseudoprogression" could be highly beneficial in the treatment of glioblastoma.

Research and Design Methods

The research team consists of investigators with backgrounds in machine learning and prior experience with similar projects. Additionally, the principal investigator (Dr. Fateme Salehi) has experience with radiomics and has recently presented her work at the American Society of Neuroradiology as well as the Radiological Society of North America (RSNA). Leveraging her radiomics expertise and working with a team of experts, she plans to complete the project and publish the results in the AJNR (American Journal of Neuroradiology). The data will be presented at upcoming related conferences as well. Once the machine learning models are developed, they will be applied to patients in the future as further images are acquired, to facilitate the evaluation of medical images. These models have the potential to impact patient treatment decisions early on, with regard to response to treatment or lack thereof. The machine learning models will also be applied to patients presenting with new brain tumors of unknown primary origin, with the aim of determining the primary site of malignancy, thereby reducing the need for brain biopsy and/or further imaging and radiation exposure. We will submit an application for the prospective arm of the study before starting any future work; the current arm of the study will be purely retrospective.
