# Hierarchical Convolutional Energy Model (HCE) of V4

As of 2024, the HCE model is written in Keras with a Theano backend. Dr. Michael Oliver additionally wrote custom backends in the Keras for Science repository and in a fork of the keras-contrib repository. The code was originally written in Python 2. To preserve the code as written, an NVIDIA Docker container was created so that it can be run on modern GPU systems.

## Getting Started (Keras, Python 2 version)

PLEASE NOTE: This container has been tested up to the NVIDIA Pascal GPU architecture. Newer architectures may not be supported; in our testing, the container does not work on the Ampere architecture or later, and the test script hangs indefinitely. See this link for reference regarding which GPUs have which architecture.
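As a quick compatibility check before building, recent NVIDIA drivers can report each GPU's compute capability directly (Pascal is 6.x; Ampere and later are 8.x and above). Note that the `compute_cap` query field requires a reasonably recent driver:

```shell
# Prints one line per GPU, e.g. "GeForce GTX 1080, 6.1" (Pascal).
# A value of 8.0 or higher indicates Ampere or newer, which this
# container is not expected to support.
nvidia-smi --query-gpu=name,compute_cap --format=csv,noheader
```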

- Clone this repo.
- Install the NVIDIA Container Toolkit.
- Download the `nvidia/cuda` base image `cuda9.2-cudnn7-devel-ubuntu16.04.tar.gz`.
- Unpack the `nvidia/cuda` base image so that you can use it as a base from your Dockerfile:

  ```shell
  docker load < cuda9.2-cudnn7-devel-ubuntu16.04.tar.gz
  ```

- Modify the Dockerfile to point to your setup (necessary files, mounts, etc.).
- Build the Docker container (see the Docker build documentation):

  ```shell
  docker build --pull=false --rm -f "Dockerfile" -t hcev4:latest "."
  ```

- Enter the Docker container:

  ```shell
  docker run -it --gpus all --user hce_user hcev4:latest bash
  ```

  - Make sure that you have access to GPUs via the command `nvidia-smi`. This command should output information about the GPUs available on the machine.
  - Run the model using the `hce_gpu` conda environment.
  - Test that the model can be successfully constructed using the `code/test_model_functional.py` script via `python test_model_functional.py`.
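The in-container checks above can be sketched as a single sequence. This is a minimal sketch assuming the defaults described in the steps above (the `hce_gpu` environment name and the `code/` script location come from this README); on older conda installations inside the container, `source activate` may be needed instead of `conda activate`:

```shell
# Run inside the container started in the previous step.

# 1. Confirm the container can see the GPUs; this should list each device.
nvidia-smi

# 2. Activate the conda environment used to run the model.
source activate hce_gpu

# 3. Confirm the model can be constructed end to end.
cd code
python test_model_functional.py
```

If `nvidia-smi` prints no devices, recheck that the container was started with `--gpus all` and that the NVIDIA Container Toolkit is installed on the host.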