Code for the CVPR 2024 paper "WaveMo: Learning Wavefront Modulations to See Through Scattering".
Project Page | PDF | arXiv | YouTube
We tested our code with PyTorch 2.3.1 and CUDA 12.4.1 on an NVIDIA RTX A6000 GPU. Please follow these steps to set up the environment:
conda create -n wavemo python=3.8.11
conda activate wavemo
pip install -r requirements.txt
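After installing, a quick optional sanity check confirms that PyTorch can see the GPU (the version and device name below reflect our test machine; yours may differ):

```python
# Optional sanity check: confirm the installed PyTorch build and CUDA device.
import torch

print(torch.__version__)                  # expect 2.3.1 on our test setup
print(torch.cuda.is_available())          # should print True
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))  # e.g. "NVIDIA RTX A6000"
```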
We use Places365 as our training dataset. Please download it from http://places2.csail.mit.edu/download.html. You can also use other image datasets for training.
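The repository's own data pipeline handles loading; the sketch below only illustrates the kind of image folder that `--training_data_dir` is expected to point at. The image size, grayscale conversion, and file extension are illustrative assumptions, not values taken from the paper or the code:

```python
# Illustrative sketch of a dataset reading images from --training_data_dir.
# The resolution, grayscale conversion, and *.jpg extension are assumptions.
from pathlib import Path
from PIL import Image
from torch.utils.data import Dataset
from torchvision import transforms

class FolderImages(Dataset):
    def __init__(self, root, size=128):
        self.paths = sorted(Path(root).rglob("*.jpg"))
        self.tf = transforms.Compose([
            transforms.Grayscale(),           # single-channel targets, as an assumption
            transforms.Resize((size, size)),
            transforms.ToTensor(),
        ])

    def __len__(self):
        return len(self.paths)

    def __getitem__(self, i):
        return self.tf(Image.open(self.paths[i]).convert("RGB"))
```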
Please run the following command to learn wavefront modulation patterns.
python main.py --training_data_dir DATASET_DIRECTORY --batch_size 32
If you have a Weights & Biases account and want to log your training results to it, please run
python main.py --training_data_dir DATASET_DIRECTORY --batch_size 32 --use_wandb
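If you are new to Weights & Biases, the flag presumably enables logging along these lines; the project name and metric key below are placeholders, not the repository's actual ones:

```python
# Hypothetical sketch of the kind of logging --use_wandb turns on.
# "wavemo" and "loss" are placeholder names, not taken from the code.
import wandb

wandb.init(project="wavemo", config={"batch_size": 32})
for step in range(100):
    loss = 1.0 / (step + 1)  # dummy value standing in for the training loss
    wandb.log({"loss": loss}, step=step)
wandb.finish()
```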
You first need to set up the optical system as described in Section 6 of our paper. Then you can load the learned modulations onto the spatial light modulator to modulate the measurements. To reconstruct the target, we recommend using the unsupervised iterative approach proposed by Feng et al.; the code is available at https://github.com/Intelligent-Sensing/NeuWS.
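As a rough illustration of the loading step, the sketch below converts learned modulation patterns into 8-bit phase images suitable for a typical SLM. The checkpoint file name, tensor layout, and phase convention are all assumptions; adapt them to whatever your training run actually saves:

```python
# Hypothetical sketch: quantize learned phase modulations for an 8-bit SLM.
# File name, tensor shape (num_patterns, H, W), and radian phase are assumptions.
import numpy as np
import torch
from PIL import Image

mods = torch.load("learned_modulations.pt", map_location="cpu")  # assumed file name
mods = mods.detach().numpy()

for i, phase in enumerate(mods):
    phase = np.mod(phase, 2 * np.pi)                             # wrap phase into [0, 2*pi)
    img = np.round(phase / (2 * np.pi) * 255).astype(np.uint8)   # quantize to 8 bits
    Image.fromarray(img).save(f"slm_pattern_{i:02d}.png")        # display these on the SLM
```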
For general questions about this project, please raise an issue here or contact Mingyang Xie at [email protected]. For questions related to setting up the optical system, please contact Haiyun Guo at [email protected].