Code for the experiments from the paper FMARS: Annotating Remote Sensing Images for Disaster Management using Foundation Models.
Qualitative results obtained over two example areas, namely USA (top) and Gambia (bottom). From left to right: RGB image, DAFormer, MIC, and FMARS ground truth. Best viewed zoomed in.
Note
The dataset is available as Parquet files to be rasterized, or it can be generated with the igarss-fmars-gen package.
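Since the annotations are distributed as vector data, they must be rasterized into per-pixel masks before training. Below is a minimal sketch of one way this could be done with geopandas and rasterio; the file names and the class_id column are illustrative assumptions, and the igarss-fmars-gen package implements the actual conversion used for the paper.

import geopandas as gpd
import rasterio
from rasterio import features

# Hypothetical paths and column names; adjust to the actual Parquet/image tiles.
gdf = gpd.read_parquet("annotations/example_tile.parquet")

with rasterio.open("maxar-open-data/example_tile.tif") as src:  # matching Maxar tile
    meta = src.meta.copy()
    shapes = ((geom, cls) for geom, cls in zip(gdf.geometry, gdf["class_id"]))
    mask = features.rasterize(
        shapes,
        out_shape=(src.height, src.width),
        transform=src.transform,
        fill=0,              # background / unlabeled
        dtype="uint8",
    )

meta.update(count=1, dtype="uint8")
with rasterio.open("annotations/example_tile_mask.tif", "w", **meta) as dst:
    dst.write(mask, 1)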
First, please install CUDA 11.0.3, available at https://developer.nvidia.com/cuda-11-0-3-download-archive; it is required to build mmcv-full later.
For this project we used Python 3.8; we recommend setting up a new virtual environment (the commands below install Python 3.8.17 via pyenv):
pyenv install 3.8.17
pyenv global 3.8.17
python -m venv .venv
source .venv/bin/activate
In that environment, the requirements can be installed with:
pip install torch==1.9.0+cu111 torchvision==0.10.0+cu111 -f https://download.pytorch.org/whl/torch_stable.html
pip install mmcv-full -f https://download.openmmlab.com/mmcv/dist/cu111/torch1.9.0/index.html
pip install -e .
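A quick way to verify that the environment is set up correctly (PyTorch with CUDA support and a CUDA-enabled mmcv-full build) is the following Python check; it only assumes the packages installed above.

import torch
import mmcv
from mmcv.ops import get_compiling_cuda_version, get_compiler_version

# Expect torch 1.9.0, CUDA available, and an mmcv-full build compiled against CUDA 11.x.
print("torch:", torch.__version__, "| CUDA available:", torch.cuda.is_available())
print("mmcv:", mmcv.__version__)
print("mmcv compiled with CUDA:", get_compiling_cuda_version())
print("compiler:", get_compiler_version())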
Please download the MiT-B5 ImageNet weights provided by SegFormer from their OneDrive and put them in the pretrained/ folder.
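As an optional check, the downloaded weights can be loaded with PyTorch to make sure the file is intact; the file name pretrained/mit_b5.pth is an assumption, use whatever name the SegFormer release provides.

import torch

# File name is an assumption; adjust to the actual downloaded checkpoint.
state_dict = torch.load("pretrained/mit_b5.pth", map_location="cpu")
print(len(state_dict), "tensors, first keys:", list(state_dict)[:3])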
Download the necessary metadata using:
wget -q https://github.com/links-ads/igarss-fmars-gen/releases/download/v0.1.0/metadata.zip
unzip metadata.zip
rm metadata.zip
Put the metadata in the metadata/ folder.
The dataset is available as Parquet files to be rasterized, or it can be generated with the igarss-fmars-gen package. Both the original Maxar Open Data imagery and the annotations generated with FMARS are needed. An example data folder structure is:
data/
├── maxar-open-data/
│   ├── afghanistan-earthquake22/
│   ├── BayofBengal-Cyclone-Mocha-May-23/
│   └── ...
└── annotations/
    ├── train/
    │   ├── afghanistan-earthquake22/
    │   ├── BayofBengal-Cyclone-Mocha-May-23/
    │   └── ...
    ├── val/
    │   ├── afghanistan-earthquake22/
    │   ├── BayofBengal-Cyclone-Mocha-May-23/
    │   └── ...
    └── test/
        ├── afghanistan-earthquake22/
        ├── BayofBengal-Cyclone-Mocha-May-23/
        └── ...
The images and the annotations must be placed in the correct event subfolders. The paths need to be specified in the configuration files under configs/_base_/datasets/, in the img_dirs and ann_dirs fields.
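The exact layout of those configuration files follows the mmsegmentation-style conventions used by the codebase. As an illustrative sketch (only the img_dirs and ann_dirs field names come from the repo; the event names and surrounding structure here are assumptions), the entries could look like:

data_root = "data/"

# One image directory and one annotation directory per event folder (illustrative).
img_dirs = [
    data_root + "maxar-open-data/afghanistan-earthquake22",
    data_root + "maxar-open-data/BayofBengal-Cyclone-Mocha-May-23",
    # ... one entry per event
]
ann_dirs = [
    data_root + "annotations/train/afghanistan-earthquake22",
    data_root + "annotations/train/BayofBengal-Cyclone-Mocha-May-23",
    # ... matching event order
]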
To reproduce the experiments reported in the paper, use the following commands to train and test the three models (SegFormer, DAFormer, MIC):
python -B -O tools/train.py configs/fmars/segformer.py
python -B -O tools/test.py configs/fmars/segformer.py work_dirs/segformer/iter_30000.pth --eval mIoU --show-dir output_images/segformer
python -B -O tools/train.py configs/fmars/daformer_sampler.py
python -B -O tools/test.py configs/fmars/daformer_sampler.py work_dirs/daformer_sampler/iter_30000.pth --eval mIoU --show-dir output_images/daformer_sampler
python -B -O tools/train.py configs/fmars/mic_sampler.py
python -B -O tools/test.py configs/fmars/mic_sampler.py work_dirs/mic_sampler/iter_30000.pth --eval mIoU --show-dir output_images/mic_sampler
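Beyond the full evaluation runs above, a single image can be segmented from Python, assuming the codebase exposes the standard mmsegmentation 0.x inference API (init_segmentor / inference_segmentor); the image path below is a hypothetical example tile.

from mmseg.apis import init_segmentor, inference_segmentor

config = "configs/fmars/segformer.py"
checkpoint = "work_dirs/segformer/iter_30000.pth"

# Build the model from the trained checkpoint and run it on one tile.
model = init_segmentor(config, checkpoint, device="cuda:0")
result = inference_segmentor(model, "data/maxar-open-data/afghanistan-earthquake22/example.tif")  # hypothetical tile
print(result[0].shape)  # per-pixel class indices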
Additionally, the configs/ folder contains alternative experiments, including versions of the models trained without the entropy-based sampling.
FMARS is based on the following open-source projects. We thank their authors for making the source code publicly available.