This repository contains code for the paper "Deep learning multi-shot 3D localization microscopy using hybrid optical-electronic computing" by Hayato Ikoma, Takamasa Kudo, Evan Peng, Michael Broxton and Gordon Wetzstein.
Run

```
python render_psf.py
```

This will create a `result` directory and save a multi-stack TIFF file.
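
To quickly check the output, the stack can be loaded with `tifffile` (a minimal sketch; the file name `result/psf.tif` is an assumption, so use whatever name appears in the `result` directory):

```python
# Load the multi-stack TIFF produced by render_psf.py and inspect it.
# "result/psf.tif" is a guessed file name; use the actual output name.
import tifffile

psf = tifffile.imread("result/psf.tif")
print(psf.shape, psf.dtype)  # e.g. (num_planes, H, W): one 2D PSF slice per depth
```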
Run

```
create_environment.sh
```

to create a conda environment for this repo. You may need to change the `cudatoolkit` version depending on your environment.
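
Once the environment is active, a quick sanity check is to ask PyTorch which `cudatoolkit` it was built against and whether it can see your GPU:

```python
# Verify the PyTorch / cudatoolkit / driver combination after setup.
import torch

print(torch.__version__)           # installed PyTorch version
print(torch.version.cuda)          # cudatoolkit version PyTorch was built against
print(torch.cuda.is_available())   # False often indicates a cudatoolkit/driver mismatch
```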
The trained model and the captured dataset are available here. Extract the downloaded zip file into the `data` directory. The zip file contains the trained phase mask and model for both fixed cells and live cells, as well as the captured PSF and raw data.
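
Based on the paths used by the inference command below, the extracted contents should look roughly like this; the fixed-cell file names come from this README, while the rest of the layout is an assumption:

```
data/
├── captured_data/
│   └── fixed_cell.tif    # captured raw data (live-cell files assumed alongside)
├── trained_model/
│   └── fixed_cell.ckpt   # trained model checkpoint
└── ...                   # trained phase masks and captured PSFs
```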
Our training framework is based on PyTorch Lightning. Run

```
python localizer_trainer.py --optimize_optics --gpus 1 --default_root_dir="data/logs"
```

Hyperparameters can be changed through command-line flags; see `localizer.py`, `module.microscope.py` and `pytorch_lightning.Trainer` for the available options. Note that some of the available functionalities are not used in the paper. You can set the number of GPUs with the `--gpus` flag, and the TensorBoard log is saved under `--default_root_dir` (view it with `tensorboard --logdir data/logs`).
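
As a rough sketch of how such a Lightning training script wires flags to the `Trainer` (an assumed structure for illustration, not the actual contents of `localizer_trainer.py`):

```python
# Assumed skeleton of a PyTorch Lightning (pre-2.0) training entry point;
# the real argument list lives in localizer_trainer.py and localizer.py.
from argparse import ArgumentParser
import pytorch_lightning as pl

parser = ArgumentParser()
parser.add_argument("--optimize_optics", action="store_true",
                    help="jointly optimize the phase mask with the network")
parser = pl.Trainer.add_argparse_args(parser)  # adds --gpus, --default_root_dir, --max_epochs, ...
args = parser.parse_args()

trainer = pl.Trainer.from_argparse_args(args)
# trainer.fit(model)  # model: the LightningModule defined in localizer.py
```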
Run the following for inference:

```
python infer.py \
    --img_path data/captured_data/fixed_cell.tif \
    --ckpt_path data/trained_model/fixed_cell.ckpt \
    --batch_sz 10 --save_dir result --gpus 1
```
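
If you want to peek at what a checkpoint contains before running inference (for example, the hyperparameters it was trained with), it follows the standard PyTorch Lightning checkpoint format; a minimal sketch:

```python
# Inspect a PyTorch Lightning checkpoint without instantiating the model.
import torch

ckpt = torch.load("data/trained_model/fixed_cell.ckpt", map_location="cpu")
print(list(ckpt.keys()))             # typically includes 'state_dict', 'hyper_parameters', ...
print(ckpt.get("hyper_parameters"))  # hyperparameters logged by save_hyperparameters(), if any
```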
Please direct questions to [email protected].
We thank the developers of the open-source software used in our project, including PyTorch, PyTorch Lightning, NumPy, SciPy, pytorch-debayer, Matplotlib, and Fiji.