This project aims to provide a model for predicting depth from aerial images with (potentially noisy) sparse depth. This repository contains code to generate datasets from arbitrary paired satellite image and DSM datasets, to train several depth completion models, and to evaluate those models. In addition, we include non-machine-learning baselines for comparison.
This repository is organized to be as modular as possible, so that you can easily add additional models, change training procedures, or change hyperparameters without radically altering existing code.
The main folders are:
- data: This folder contains the code for generating data and for loading data while training.
- models: This folder contains the baseline, non-ML models as well as the architecture for our neural networks.
- trainers: This folder contains the basic training procedure, base_trainer.py, as well as subclasses for particular models and loss functions.
- eval: This folder contains the evaluation code for each of the neural network models.
- configs: This folder contains .yml files that outline the hyperparameter settings for training code.
These modules are directed by main.py, the entry point for training our neural networks.
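To illustrate how these pieces fit together, here is a minimal sketch of a main.py-style dispatch under the assumption that the config selects a trainer subclass; the names BaseTrainer and TRAINERS are hypothetical stand-ins, not the repository's actual API.

```python
# Illustrative sketch only (not the repository's actual code): the config
# chooses a trainer, and each trainer subclass overrides only what differs
# from the base training procedure in trainers/base_trainer.py.
import argparse

import yaml


class BaseTrainer:
    """Stand-in for the shared training procedure."""

    def __init__(self, cfg):
        self.cfg = cfg

    def fit(self):
        print(f"training {self.cfg.get('model', 'unknown')} "
              f"for {self.cfg.get('epochs', 1)} epochs")


# Hypothetical registry mapping a config field to trainer subclasses.
TRAINERS = {"base": BaseTrainer}


def main():
    parser = argparse.ArgumentParser()
    parser.add_argument("--config", required=True, help="path to a .yml config")
    args = parser.parse_args()

    # Hyperparameters come from the YAML files in configs/.
    with open(args.config) as f:
        cfg = yaml.safe_load(f)

    trainer_cls = TRAINERS[cfg.get("trainer", "base")]
    trainer_cls(cfg).fit()


if __name__ == "__main__":
    main()
```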
This project has been used successfully with data from the ISPRS challenges and the Spacenet Urban 3D challenge.
Run the following command to install all necessary prerequisites.
pip install -r requirements.txt
The general format for training the included models is to run python main.py --config=<config file name>.
Example config files can be found in the configs directory.
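If you prefer to generate a starting config programmatically, the snippet below writes a minimal .yml file; the keys shown are illustrative guesses rather than the exact schema the training code expects, so check the examples in configs/ for the real field names.

```python
# Write a minimal, hypothetical config file; the keys are placeholders and
# should be replaced with the fields used by the example configs in configs/.
import yaml

example_cfg = {
    "trainer": "base",            # which trainer subclass to use (hypothetical key)
    "model": "depth_completion",  # model architecture name (hypothetical key)
    "epochs": 50,
    "batch_size": 8,
    "learning_rate": 1e-4,
    "save_output": True,          # save the first image of every 500th batch
}

with open("example.yml", "w") as f:
    yaml.safe_dump(example_cfg, f)
```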
Each model has its own testing code in the eval directory. These are Python scripts that set the saved model folder and data file as variables in the first few lines; edit those variables before running a script.
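The sketch below shows the general shape such an eval script might take; the variable names and paths are placeholders for illustration, not the ones used in the repository.

```python
# Hypothetical outline of an eval/ script: the model directory and data file
# are module-level variables near the top; the real scripts would then load
# the saved network and compute depth-error metrics.
import os

SAVED_MODEL_DIR = "checkpoints/my_experiment"  # edit before running (placeholder)
DATA_FILE = "data/test_split.npz"              # edit before running (placeholder)


def evaluate(model_dir: str, data_file: str) -> None:
    # Here we only check that the paths exist; the actual scripts run the
    # model over the test data and report metrics.
    assert os.path.isdir(model_dir), f"missing model directory: {model_dir}"
    assert os.path.isfile(data_file), f"missing data file: {data_file}"
    print(f"evaluating {model_dir} on {data_file}")


if __name__ == "__main__":
    evaluate(SAVED_MODEL_DIR, DATA_FILE)
```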
The config files include a save_output option. When it is set to true, the first image of every 500th batch is saved during training, so that training progress can be monitored and problems diagnosed quickly.
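As a rough sketch of that behaviour, assuming a PyTorch-style training loop (the batch layout, output directory, and function names here are assumptions, not the repository's implementation):

```python
# Minimal sketch of the save_output behaviour: write the first prediction of
# every 500th batch to disk so training can be inspected visually.
import os

from torchvision.utils import save_image


def training_loop(model, loader, optimizer, loss_fn, save_output=True):
    for batch_idx, (image, sparse_depth, target) in enumerate(loader):
        pred = model(image, sparse_depth)
        loss = loss_fn(pred, target)

        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

        if save_output and batch_idx % 500 == 0:
            # Save only the first predicted depth map of this batch.
            os.makedirs("outputs", exist_ok=True)
            save_image(pred[0], f"outputs/pred_batch_{batch_idx}.png", normalize=True)
```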