As of 2024, the HCE model is written in Keras with a Theano backend. In addition, custom backend code was written in the Keras for Science repository and in a fork of the keras-contrib repository by Dr. Michael Oliver. The code was originally written in Python 2. To preserve the code as written, an NVIDIA Docker container was created so that it can be run on modern GPU systems.
PLEASE NOTE: This container has been tested up to the Pascal NVIDIA GPU architecture. Newer architectures may not be supported and can cause the test script to hang indefinitely. See this link for reference regarding which GPUs use which architecture. In our testing, it does not work with the Ampere architecture or newer.
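One way to check where your card falls in that range is to look at its compute capability (Pascal is 6.x; Ampere and newer are 8.0+). The helper below is a sketch, not part of this repo; it assumes you obtain the capability string yourself, e.g. from `nvidia-smi --query-gpu=compute_cap --format=csv,noheader` (the `compute_cap` query field requires a reasonably recent driver):

```shell
# check_arch: classify a GPU compute capability against the range this
# container has been tested with. Pascal (6.x) and older are tested;
# Volta/Turing (7.x) are untested; Ampere and newer (8.0+) are known
# not to work (the test script hangs).
check_arch() {
  local cap="$1"            # e.g. "6.1", as reported by nvidia-smi
  local major="${cap%%.*}"  # keep only the major version
  if [ "$major" -le 6 ]; then
    echo "supported"
  elif [ "$major" -eq 7 ]; then
    echo "untested"
  else
    echo "unsupported"
  fi
}
```

For example, `check_arch 6.1` (a GTX 1080) prints `supported`, while `check_arch 8.6` (an RTX 3090) prints `unsupported`.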
- Clone this repo.
- Install the NVIDIA Container Toolkit.
- Download the nvidia/cuda base image `cuda9.2-cudnn7-devel-ubuntu16.04.tar.gz`.
- Unpack the nvidia/cuda base image so that you can use it as a base for your `Dockerfile`:

  ```
  docker load < cuda9.2-cudnn7-devel-ubuntu16.04.tar.gz
  ```

- Modify the `Dockerfile` to point to your setup (necessary files, mounts, etc.).
- Build the Docker container (see the Docker build documentation):

  ```
  docker build --pull=false --rm -f "Dockerfile" -t hcev4:latest "."
  ```

- Enter the Docker container:

  ```
  docker run -it --gpus all --user hce_user hcev4:latest bash
  ```

- Make sure that you have access to GPUs via the `nvidia-smi` command. It should output information about the GPUs available on the machine.
- Run the model using the `hce_gpu` conda environment.
- Test that the model can be successfully constructed using the `code/test_model_functional.py` script via `python test_model_functional.py`.
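The Docker-related steps above can be sketched as a single script. This is only an illustration of the sequence from this README, not a script shipped with the repo; by default it just prints each command (set `RUN=1` to actually execute, which requires Docker, the base-image tarball, and a compatible GPU). Note that `docker load -i FILE` is equivalent to `docker load < FILE`:

```shell
#!/usr/bin/env bash
# Sketch of the container setup sequence from this README.
# Dry run by default: each command is printed; set RUN=1 to execute.
set -euo pipefail

run() {
  echo "+ $*"                       # show the command
  [ "${RUN:-0}" = "1" ] && "$@"     # execute only when RUN=1
  return 0
}

# Load the CUDA 9.2 / cuDNN 7 base image referenced by the Dockerfile.
run docker load -i cuda9.2-cudnn7-devel-ubuntu16.04.tar.gz

# Build the container image (after editing the Dockerfile for your setup).
run docker build --pull=false --rm -f Dockerfile -t hcev4:latest .

# Enter the container with GPU access as hce_user.
run docker run -it --gpus all --user hce_user hcev4:latest bash
```

Inside the container, continue with the remaining steps: verify GPU access with `nvidia-smi`, activate the `hce_gpu` conda environment, and run `python test_model_functional.py`.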