This is a toolbox for X-ray novel view synthesis (NVS) and computed tomography (CT) reconstruction. This repo supports 9 state-of-the-art algorithms including 6 NeRF-based methods, 2 optimization-based methods, and 1 analytical method. We also provide code for fancy visualization and data generation to help your research. If you find this repo useful, please give it a star ⭐ and consider citing our paper. Thank you.
- 2024.06.03 : Code for traditional methods has been released. 🚀
- 2024.06.03 : Code for fancy visualization and data generation has been released. 🚀
- 2024.06.02 : Data, code, models, and training logs have been released. Feel free to use them :)
- 2024.02.26 : Our paper has been accepted by CVPR 2024. Code and pre-trained models will be released to the public before the start date of CVPR 2024 (2024.06.19). Stay tuned! 🎉 🎊
- 2023.11.21 : The benchmark of X3D on the papers-with-code website has been set up. You are welcome to make a comparison. 🚀
- 2023.11.21 : Our paper is on arXiv now. We will develop this repo into a baseline for X-ray novel view synthesis and CT reconstruction. All code, models, data, and training logs will be released. 💫
Supported algorithms:
We recommend using Conda to set up an environment.
# Create environment
conda create -n sax_nerf python=3.9
conda activate sax_nerf
# Install pytorch (hash encoder requires CUDA v11.3)
pip install torch==1.11.0+cu113 torchvision==0.12.0+cu113 torchaudio==0.11.0 --extra-index-url https://download.pytorch.org/whl/cu113
# Install other packages
pip install -r requirements.txt
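Optionally, you can verify that the installed PyTorch build sees CUDA 11.3 and your GPU before moving on (a minimal check, independent of this repo):

```python
# Quick sanity check for the PyTorch + CUDA 11.3 install
import torch

print(torch.__version__)          # expected: 1.11.0+cu113
print(torch.version.cuda)         # expected: 11.3
print(torch.cuda.is_available())  # should be True on a CUDA-capable machine
```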
We suggest installing the TIGRE toolbox (version 2.3) if you plan to run the traditional CT reconstruction methods or synthesize your own CT data. Please note that TIGRE v2.5 may get stuck when the CT volume is large.
# Download TIGRE
wget https://github.com/CERN/TIGRE/archive/refs/tags/v2.3.zip
unzip v2.3.zip
rm v2.3.zip
# Install TIGRE
pip install cython==0.29.25
pip install numpy==1.21.6
cd TIGRE-2.3/Python/
python setup.py develop
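As a quick smoke test of the TIGRE install, the sketch below forward-projects a toy volume with TIGRE's default cone-beam geometry and reconstructs it with FDK. The volume, angle count, and geometry here are arbitrary placeholders, not the settings used by this repo:

```python
# Minimal TIGRE smoke test: forward-project a toy volume, then reconstruct with FDK.
# The geometry is TIGRE's default cone-beam setup, not the geometry used by SAX-NeRF.
import numpy as np
import tigre
import tigre.algorithms as algs

geo = tigre.geometry_default(high_resolution=False)    # small default cone-beam geometry
angles = np.linspace(0, 2 * np.pi, 100, dtype=np.float32)

vol = np.ones(geo.nVoxel, dtype=np.float32)            # toy homogeneous volume
proj = tigre.Ax(vol, geo, angles)                      # cone-beam forward projection
rec = algs.fdk(proj, geo, angles)                      # analytical FDK reconstruction
print(proj.shape, rec.shape)
```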
Download our processed datasets from Google Drive or Baidu Disk. Then put the downloaded datasets into the folder `data/` as follows:
|--data
    |--chest_50.pickle
    |--abdomen_50.pickle
    |--aneurism_50.pickle
    |--backpack_50.pickle
    |--bonsai_50.pickle
    |--box_50.pickle
    |--carp_50.pickle
    |--engine_50.pickle
    |--foot_50.pickle
    |--head_50.pickle
    |--leg_50.pickle
    |--pancreas_50.pickle
    |--pelvis_50.pickle
    |--teapot_50.pickle
    |--jaw_50.pickle
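The exact contents of each `*_50.pickle` file depend on how the data were generated, so the snippet below simply inspects a downloaded scene without assuming a schema; the chest scene is used as an example:

```python
# Inspect one processed scene to see what the pickle actually contains.
# No schema is assumed; this only prints top-level keys and array shapes.
import pickle

with open("data/chest_50.pickle", "rb") as f:
    scene = pickle.load(f)

if isinstance(scene, dict):
    for key, value in scene.items():
        shape = getattr(value, "shape", None)
        print(key, type(value).__name__, shape if shape is not None else "")
else:
    print(type(scene))
```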
You can directly download our pre-trained models from Google Drive or Baidu Disk. Then put the downloaded models into the folder `pretrained/` and run:
# SAX-NeRF
python test.py --method Lineformer --category chest --config config/Lineformer/chest_50.yaml --weights pretrained/chest.tar --output_path output
# FDK
python3 eval_traditional.py --algorithm fdk --category chest --config config/FDK/chest_50.yaml
# SART
python3 eval_traditional.py --algorithm sart --category chest --config config/SART/chest_50.yaml
# ASD_POCS
python3 eval_traditional.py --algorithm asd_pocs --category chest --config config/ASD_POCS/chest_50.yaml
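If you want to evaluate SAX-NeRF on several scenes in one go, a small driver script can loop over the command shown above. This is only a sketch: it assumes the configs and weights follow the `config/Lineformer/<category>_50.yaml` and `pretrained/<category>.tar` naming pattern from the example.

```python
# Hypothetical batch driver for test.py; assumes configs and weights follow the
# <category>_50.yaml / <category>.tar naming used in the example above.
import subprocess

categories = [
    "chest", "abdomen", "aneurism", "backpack", "bonsai", "box", "carp", "engine",
    "foot", "head", "leg", "pancreas", "pelvis", "teapot", "jaw",
]

for cat in categories:
    subprocess.run([
        "python", "test.py",
        "--method", "Lineformer",
        "--category", cat,
        "--config", f"config/Lineformer/{cat}_50.yaml",
        "--weights", f"pretrained/{cat}.tar",
        "--output_path", "output",
    ], check=True)
```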
We provide the training logs for all scenes to help you debug. Please download the training logs from Google Drive or Baidu Disk.
# SAX-NeRF
python train_mlg.py --config config/Lineformer/chest_50.yaml
# NeRF
python train.py --config config/nerf/chest_50.yaml
# Intratomo
python train.py --config config/intratomo/chest_50.yaml
# NAF
python train.py --config config/naf/chest_50.yaml
# TensoRF
python train.py --config config/tensorf/chest_50.yaml
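All training scripts are driven by the YAML files under `config/`. If you want to inspect or tweak a config programmatically before launching a run, something like the following works with PyYAML (a sketch that assumes nothing about the specific keys):

```python
# Load a training config to inspect it before launching train.py.
# No specific keys are assumed; this only prints what the YAML defines.
import yaml

with open("config/Lineformer/chest_50.yaml", "r") as f:
    cfg = yaml.safe_load(f)

for key, value in cfg.items():
    print(f"{key}: {value}")
```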
You can use this repo to run NeAT. Remember to reprocess the data first.
To render a cool demo, we provide visualization code in the folder `3D_vis`:
cd 3D_vis
python 3D_vis_backpack.py
python 3D_vis_backpack_gif.py
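If you render your own frame sequence and want to assemble it into a GIF like the demo, a generic approach with `imageio` is sketched below; the frame directory and filename pattern are placeholders, not paths written by the scripts above.

```python
# Assemble rendered frames into a GIF. The frame directory is a placeholder,
# not a path that 3D_vis_backpack_gif.py necessarily writes to.
import glob
import imageio

frame_paths = sorted(glob.glob("frames/*.png"))   # hypothetical output directory
frames = [imageio.imread(p) for p in frame_paths]
imageio.mimsave("backpack_demo.gif", frames, duration=0.05)  # ~20 fps
```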
We also provide code for data generation in the folder `dataGenerator/`. To give you a quick start, we provide the raw data of one scene (backpack) for debugging. Please download the raw data from Google Drive or Baidu Disk and put it into the folder `dataGenerator/raw_data/`. Then run:
cd dataGenerator
python data_vis_backpack.py
cd ..
python generateData_backpack.py
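Conceptually, data generation renders sparse-view X-ray projections from a raw CT volume with TIGRE and packs them for training. The sketch below illustrates that idea only: the geometry handling, file names, and pickle keys are assumptions for illustration, not the exact format produced by `generateData_backpack.py`.

```python
# Illustrative sparse-view projection generation with TIGRE.
# File names, geometry values, and pickle keys are assumptions for this sketch,
# not the exact format written by generateData_backpack.py.
import pickle
import numpy as np
import tigre

vol = np.load("dataGenerator/raw_data/volume.npy").astype(np.float32)  # hypothetical raw volume

geo = tigre.geometry_default(high_resolution=False)
geo.nVoxel = np.array(vol.shape)                 # match the geometry to the volume
geo.sVoxel = geo.nVoxel * geo.dVoxel             # keep voxel size, rescale physical extent

angles = np.linspace(0, np.pi, 50, dtype=np.float32)   # 50 sparse views
proj = tigre.Ax(vol, geo, angles)                       # one X-ray projection per angle

with open("my_scene_50.pickle", "wb") as f:             # placeholder keys, not the repo's schema
    pickle.dump({"image": vol, "projections": proj, "angles": angles}, f)
```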
If this repo helps you, please consider citing our work:
@inproceedings{sax_nerf,
  title={Structure-Aware Sparse-View X-ray 3D Reconstruction},
  author={Yuanhao Cai and Jiahao Wang and Alan Yuille and Zongwei Zhou and Angtian Wang},
  booktitle={CVPR},
  year={2024}
}