HyP3 plugin for back-projection processing
Warning
Running the workflows in this repository requires a compiled version of the Stanford Radar Group Processor. For this reason, running this repository's workflows in a standard Python environment is not yet supported. Instead, we recommend running the workflows from the docker container as outlined below.
The HyP3-back-projection plugin provides a set of workflows (currently only accessible via the docker container) that can be used to process SAR data using the Stanford Radar Group Processor. The workflows currently included in this plugin are:
`back_projection`: A workflow for creating geocoded Sentinel-1 SLCs from Level-0 data using the back-projection methodology.
To run a workflow, you'll first need to build the docker container:
docker build -t back-projection:latest .
Then, run the docker container for your chosen workflow.
docker run -it --rm \
-e EARTHDATA_USERNAME=[YOUR_USERNAME_HERE] \
-e EARTHDATA_PASSWORD=[YOUR_PASSWORD_HERE] \
-e ESA_USERNAME=[YOUR_USERNAME_HERE] \
-e ESA_PASSWORD=[YOUR_PASSWORD_HERE] \
back-projection:latest \
++process [WORKFLOW_NAME] \
[WORKFLOW_ARGS]
Here is an example command for the `back_projection` workflow:
docker run -it --rm \
-e EARTHDATA_USERNAME=[YOUR_USERNAME_HERE] \
-e EARTHDATA_PASSWORD=[YOUR_PASSWORD_HERE] \
-e ESA_USERNAME=[YOUR_USERNAME_HERE] \
-e ESA_PASSWORD=[YOUR_PASSWORD_HERE] \
back-projection:latest \
++process back_projection \
S1A_IW_RAW__0SDV_20231229T134339_20231229T134411_051870_064437_4F42-RAW \
S1A_IW_RAW__0SDV_20231229T134404_20231229T134436_051870_064437_5F38-RAW
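As a variant of the command above, Docker will forward an environment variable from your shell when the value after `-e` is omitted, so the credentials never appear in the command itself. A minimal sketch, reusing the document's `[WORKFLOW_ARGS]` placeholder:

```bash
# Set the credentials once in your shell session (placeholder values shown).
export EARTHDATA_USERNAME=[YOUR_USERNAME_HERE]
export EARTHDATA_PASSWORD=[YOUR_PASSWORD_HERE]
export ESA_USERNAME=[YOUR_USERNAME_HERE]
export ESA_PASSWORD=[YOUR_PASSWORD_HERE]

# -e without "=value" forwards each variable from the host environment.
docker run -it --rm \
    -e EARTHDATA_USERNAME -e EARTHDATA_PASSWORD \
    -e ESA_USERNAME -e ESA_PASSWORD \
    back-projection:latest \
    ++process back_projection \
    [WORKFLOW_ARGS]
```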
For all workflows, the user must provide their Earthdata Login credentials and ESA Copernicus Data Space Ecosystem (CDSE) credentials in order to download input data.
If you do not already have an Earthdata account, you can sign up here.
If you do not already have a CDSE account, you can sign up here.
Your credentials can be passed to the workflows via command-line options (`--esa-username` and `--esa-password`), environment variables (`EARTHDATA_USERNAME`, `EARTHDATA_PASSWORD`, `ESA_USERNAME`, and `ESA_PASSWORD`), or via your `.netrc` file.
If you haven't set up a `.netrc` file before, check out this guide to get started.
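For reference, here is a sketch of what such a `.netrc` file might contain. The machine names are an assumption based on the standard Earthdata Login and CDSE hosts rather than something this README specifies, so verify them against the guide linked above:

```bash
# Hypothetical ~/.netrc entries; the machine names below are assumed
# (urs.earthdata.nasa.gov for Earthdata, dataspace.copernicus.eu for CDSE).
cat >> ~/.netrc <<'EOF'
machine urs.earthdata.nasa.gov
    login YOUR_EARTHDATA_USERNAME
    password YOUR_EARTHDATA_PASSWORD
machine dataspace.copernicus.eu
    login YOUR_ESA_USERNAME
    password YOUR_ESA_PASSWORD
EOF

# Restrict permissions: the credentials are stored in plain text.
chmod 600 ~/.netrc
```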
In order for Docker to be able to use the host's GPU, the host must have the NVIDIA Container Toolkit installed and configured. The process is different for different OS's and Linux distros. The setup process for the most common distros, including Ubuntu, can be found here. Make sure to follow the Docker configuration steps after installing the package.
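Once the toolkit is installed and Docker has been configured, you can confirm that containers can see the GPU before building anything. The CUDA base image tag below is only an example of a small public image, not something this plugin requires:

```bash
# If the NVIDIA Container Toolkit is set up correctly, this prints the same
# GPU table that running nvidia-smi directly on the host does.
docker run --rm --gpus all nvidia/cuda:12.4.1-base-ubuntu22.04 nvidia-smi
```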
Caution
The docker container will run on an Amazon Linux 2023 Deep Learning AMI, but it produces all-zero outputs. Work is ongoing to determine what is causing this issue. For now, we recommend using option 2.i instead.
When running on an EC2 instance, the following setup is recommended:
1. Create a G6-family EC2 instance that has at least 32 GB of memory.
2. Launch your instance with one of the following setups (option i is recommended):
    i. Use the latest Amazon Linux 2023 AMI with `scripts/amazon_linux_setup.sh` as the user script on launch.
    ii. Use the latest Ubuntu AMI with `scripts/ubuntu_setup.sh` as the user script on launch.
    iii. Use the Ubuntu Deep Learning Base OSS Nvidia Driver GPU AMI (no install script required).
3. Build the GPU docker container with the correct compute capability version. To determine this value, run `nvidia-smi` on the instance to obtain the GPU type, then cross-reference it with NVIDIA's GPU compute capability list. For a g6.2xlarge instance, this would be:

docker build --build-arg="GPU_ARCH=89" -t back-projection:gpu-89 -f Dockerfile.gpu .

The compute capability version will always be the same for a given instance type, so you will only need to look it up once per instance type. The default value for this argument is 89, which is the correct value for g6.2xlarge instances.
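Once the GPU image is built, it is invoked the same way as the CPU image shown earlier, with the addition of Docker's `--gpus` flag so the container can access the device. A sketch assuming the `back-projection:gpu-89` tag from the build step above:

```bash
# Run the GPU-enabled image; --gpus all exposes the instance's GPU to the container.
docker run -it --rm --gpus all \
    -e EARTHDATA_USERNAME=[YOUR_USERNAME_HERE] \
    -e EARTHDATA_PASSWORD=[YOUR_PASSWORD_HERE] \
    -e ESA_USERNAME=[YOUR_USERNAME_HERE] \
    -e ESA_PASSWORD=[YOUR_PASSWORD_HERE] \
    back-projection:gpu-89 \
    ++process back_projection \
    [WORKFLOW_ARGS]
```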