This repository contains the code for the paper
B. Glocker, C. Jones, M. Bernhardt, S. Winzeck
Algorithmic encoding of protected characteristics in chest X-ray disease detection models
eBioMedicine. Volume 89, 104467, March 2023.
The CheXpert imaging dataset together with the patient demographic information used in this work can be downloaded from https://stanfordmlgroup.github.io/competitions/chexpert/.
The MIMIC-CXR imaging dataset can be downloaded from https://physionet.org/content/mimic-cxr-jpg/2.0.0/ with the corresponding demographic information available from https://physionet.org/content/mimiciv/1.0/.
For running the code, we recommend setting up a dedicated Python environment, either with conda or with virtualenv.

To set up the environment using conda, create and activate a Python 3 conda environment:

conda create -n chexploration python=3
conda activate chexploration

Then install PyTorch using conda (for CUDA Toolkit 11.3):

conda install pytorch torchvision cudatoolkit=11.3 -c pytorch

Alternatively, to set up the environment using virtualenv, create and activate a Python 3 virtual environment:

virtualenv -p python3 <path_to_envs>/chexploration
source <path_to_envs>/chexploration/bin/activate

Then install PyTorch using pip:

pip install torch torchvision

In either case, install the additional Python packages:

pip install matplotlib jupyter pandas seaborn pytorch-lightning scikit-learn scikit-image tensorboard tqdm openpyxl tabulate statsmodels
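After installation, a quick way to confirm that PyTorch can see the GPU is a check along these lines (a minimal sketch; nothing in the repository depends on it):

```python
import torch
import torchvision

# Report library versions and CUDA availability; model training
# in this repository expects a CUDA-capable GPU.
print("torch:", torch.__version__)
print("torchvision:", torchvision.__version__)
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("GPU:", torch.cuda.get_device_name(0))
```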
The code has been tested on Windows 10 and Ubuntu 18.04/20.04 operating systems. The data analysis does not require any specific hardware and can be run on a standard laptop computer. Training and testing the disease detection models requires a high-end GPU workstation. For our experiments, we used an NVIDIA Titan RTX with 24 GB of memory.
In order to replicate the results presented in the paper, please follow these steps (a quick file sanity check is sketched after this list):

- Download the CheXpert dataset and copy the file `train.csv` to the `datafiles/chexpert` folder. Download the CheXpert demographics data and copy the file `CHEXPERT DEMO.xlsx` to the same folder.
- Download the MIMIC-CXR dataset and copy the files `mimic-cxr-2.0.0-metadata.csv` and `mimic-cxr-2.0.0-chexpert.csv` to the `datafiles/mimic` folder. Download the MIMIC-IV demographics data and copy the files `admissions.csv` and `patients.csv` to the same folder.
- Run the notebooks `chexpert.sample.ipynb` and `mimic.sample.ipynb` to generate the study data.
- Run the notebook `chexpert.resample.ipynb` to perform test-set resampling.
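Before running the notebooks, it can be worth verifying that the downloaded files are in place. The following is a minimal sketch using only the file locations listed above:

```python
from pathlib import Path

# Expected locations of the downloaded data files (relative to the repo root).
expected = [
    Path("datafiles/chexpert/train.csv"),
    Path("datafiles/chexpert/CHEXPERT DEMO.xlsx"),
    Path("datafiles/mimic/mimic-cxr-2.0.0-metadata.csv"),
    Path("datafiles/mimic/mimic-cxr-2.0.0-chexpert.csv"),
    Path("datafiles/mimic/admissions.csv"),
    Path("datafiles/mimic/patients.csv"),
]

for f in expected:
    print(f"{'OK     ' if f.exists() else 'MISSING'} {f}")
```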
To replicate the results on CheXpert:

- Adjust the variable `img_data_dir` to point to the CheXpert imaging data (see the sketch after this list) and run the following scripts:
  - Run the script `chexpert.disease.py` to train a disease detection model.
  - Run the script `chexpert.sex.py` to train a sex classification model.
  - Run the script `chexpert.race.py` to train a race classification model.
  - Run the script `chexpert.multitask.py` to train a multitask model.
- Run the notebook `chexpert.predictions.ipynb` to evaluate all the prediction models.
- Run the notebook `chexpert.explorer.ipynb` for the unsupervised exploration of feature representations.
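For illustration, adjusting `img_data_dir` amounts to editing a single assignment in each training script before launching it; the path below is hypothetical and must be adapted to your local copy of the data:

```python
# In chexpert.disease.py, chexpert.sex.py, chexpert.race.py, and
# chexpert.multitask.py: point img_data_dir at the CheXpert image folder.
img_data_dir = "/data/CheXpert-v1.0/"  # hypothetical path; adjust to your setup
```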
Additionally, there are scripts `chexpert.sex.split.py` and `chexpert.race.split.py` to run SPLIT on the disease detection model. The default setting in all scripts is to train a DenseNet-121 using the training data from all patients. The results for models trained on subgroups only can be produced by changing the paths to the data files (e.g., using `chexpert.sample.train.white.csv` and `chexpert.sample.val.white.csv` instead of `chexpert.sample.train.csv` and `chexpert.sample.val.csv`).
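In the same spirit as adjusting `img_data_dir`, switching to subgroup-only training is a matter of swapping the CSV paths inside the scripts. A hedged sketch (the variable names and folder below are assumptions for illustration, not the scripts' actual identifiers):

```python
# Hypothetical variable names; the training scripts may use different ones.
csv_train = "datafiles/chexpert/chexpert.sample.train.white.csv"
csv_val = "datafiles/chexpert/chexpert.sample.val.white.csv"
```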
To replicate the results on MIMIC-CXR, adjust the above scripts accordingly to point to the MIMIC imaging data and its corresponding data files (e.g., `mimic.sample.train.csv`, `mimic.sample.val.csv`, and `mimic.sample.test.csv`).
Note that the Python scripts also contain code for running the experiments with a ResNet-34 backbone, which requires less GPU memory than DenseNet-121.
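For reference, the two backbones as typically instantiated via torchvision (a minimal sketch; the scripts' actual model construction may differ, e.g. in how the classification head is replaced, and `num_classes` is an assumption that depends on the prediction task):

```python
import torch.nn as nn
import torchvision.models as models

num_classes = 14  # hypothetical; depends on the prediction task

# DenseNet-121 (the default backbone in the scripts)
densenet = models.densenet121(pretrained=True)
densenet.classifier = nn.Linear(densenet.classifier.in_features, num_classes)

# ResNet-34 alternative with a smaller GPU memory footprint
resnet = models.resnet34(pretrained=True)
resnet.fc = nn.Linear(resnet.fc.in_features, num_classes)
```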
All trained models, feature embeddings, and output predictions can be found here. These can be used to directly reproduce the results presented in our paper using the notebooks `chexpert.predictions.ipynb` and `chexpert.explorer.ipynb`.
The scripts for disease detection, sex classification, and race classification produce outputs in CSV format that can be processed by the evaluation code in the Jupyter notebooks. The notebooks produce figures and plots in either PNG or PDF format.
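As an illustration of how such CSV outputs are commonly consumed, a minimal evaluation sketch (the file name and column names below are assumptions, not the actual output schema of the scripts):

```python
import pandas as pd
from sklearn.metrics import roc_auc_score

# Hypothetical output file and columns; the actual schema of the
# prediction CSVs is defined by the training scripts.
df = pd.read_csv("predictions.disease.csv")
auc = roc_auc_score(df["target"], df["probability"])
print(f"ROC-AUC: {auc:.3f}")
```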
Training the models for disease detection, sex classification, and race classification will take about three hours each on a high-end GPU workstation. Running the data analysis code provided in the notebooks takes several minutes on a standard laptop computer.
This work is supported through funding from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (Grant Agreement No. 757173, Project MIRA, ERC-2017-STG) and by the UKRI London Medical Imaging & Artificial Intelligence Centre for Value Based Healthcare.
This project is licensed under the Apache License 2.0.