These Docker images give you an isolated working environment for working on or running the OpenPose project.
More info in this blog post.
Go to Build Image
For machines that use NVIDIA graphics cards, we need the nvidia-docker-plugin.
IMPORTANT: This repo supports nvidia-docker version 2.x!
Assuming the NVIDIA drivers and Docker® Engine are properly installed (see installation)
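To quickly sanity-check both prerequisites on the host before continuing (exact output depends on your driver and Docker versions):
# Verify the NVIDIA driver is loaded and working on the host
nvidia-smi
# Verify the Docker Engine is installed and the daemon is reachable
docker --version
docker info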
# If you have nvidia-docker 1.0 installed: we need to remove it and all existing GPU containers
docker volume ls -q -f driver=nvidia-docker | xargs -r -I{} -n1 docker ps -q -a -f volume={} | xargs -r docker rm -f
sudo apt-get purge -y nvidia-docker
# Add the package repositories
curl -s -L https://nvidia.github.io/nvidia-docker/gpgkey | \
sudo apt-key add -
distribution=$(. /etc/os-release;echo $ID$VERSION_ID)
curl -s -L https://nvidia.github.io/nvidia-docker/$distribution/nvidia-docker.list | \
sudo tee /etc/apt/sources.list.d/nvidia-docker.list
sudo apt-get update
# Install nvidia-docker2 and reload the Docker daemon configuration
sudo apt-get install -y nvidia-docker2
sudo pkill -SIGHUP dockerd
# Test nvidia-smi with the latest official CUDA image
docker run --runtime=nvidia --rm nvidia/cuda:9.0-base nvidia-smi
If you are not using the official docker-ce package on CentOS/RHEL, use the next section.
# If you have nvidia-docker 1.0 installed: we need to remove it and all existing GPU containers
docker volume ls -q -f driver=nvidia-docker | xargs -r -I{} -n1 docker ps -q -a -f volume={} | xargs -r docker rm -f
sudo yum remove nvidia-docker
# Add the package repositories
distribution=$(. /etc/os-release;echo $ID$VERSION_ID)
curl -s -L https://nvidia.github.io/nvidia-docker/$distribution/nvidia-docker.repo | \
sudo tee /etc/yum.repos.d/nvidia-docker.repo
# Install nvidia-docker2 and reload the Docker daemon configuration
sudo yum install -y nvidia-docker2
sudo pkill -SIGHUP dockerd
# Test nvidia-smi with the latest official CUDA image
docker run --runtime=nvidia --rm nvidia/cuda:9.0-base nvidia-smi
If yum reports a conflict on /etc/docker/daemon.json with the docker package, you need to use the next section instead.
For docker-ce on ppc64le, look at the FAQ.
# Install nvidia-docker and nvidia-docker-plugin
# If you have nvidia-docker 1.0 installed: we need to remove it and all existing GPU containers
docker volume ls -q -f driver=nvidia-docker | xargs -r -I{} -n1 docker ps -q -a -f volume={} | xargs -r docker rm -f
sudo rm /usr/bin/nvidia-docker /usr/bin/nvidia-docker-plugin
# Install nvidia-docker2 from AUR and reload the Docker daemon configuration
yaourt -S aur/nvidia-docker
sudo pkill -SIGHUP dockerd
# Test nvidia-smi with the latest official CUDA image
docker run --runtime=nvidia --rm nvidia/cuda:9.0-base nvidia-smi
If the nvidia-smi test was successful, you may proceed. Otherwise, please visit the official NVIDIA support.
You can either browse to the directory of the version you want and manually issue a docker build command, or just use the makefile:
# Prints Help
make
# E.g. Build image with CUDA 10 and cuDNN 7 support
make openpose_cuda10_cudnn7
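If you prefer not to use the makefile, a manual build would look roughly like the sketch below. The directory name is an assumption here; check the repository layout for the exact folder of the version you want:
# Manual alternative to the makefile (the cuda10-cudnn7 directory name is assumed)
cd cuda10-cudnn7
docker build -t turlucode/openpose:cuda10-cudnn7 .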
Note: The build process takes a while.
Once the container has been built, you can issue the following command to run it:
docker run --rm -it --runtime=nvidia --privileged --net=host --ipc=host \
-v /tmp/.X11-unix:/tmp/.X11-unix -e DISPLAY=$DISPLAY \
-v $HOME/.Xauthority:/root/.Xauthority -e XAUTHORITY=/root/.Xauthority \
-v <PATH_TO_YOUR_OPENPOSE_PROJECT>:/root/openpose \
turlucode/openpose:cuda10-cudnn7
A Terminator window will pop up, and you know the rest! :)
Important Remark: This will launch the container as root. This might have unwanted effects! If you want to run it as the current user, see the next section.
You can also run the container as the current Linux user by passing the DOCKER_USER_* variables like this:
docker run --rm -it --runtime=nvidia --privileged --net=host --ipc=host \
-v /tmp/.X11-unix:/tmp/.X11-unix -e DISPLAY=$DISPLAY \
-v $HOME/.Xauthority:/home/$(id -un)/.Xauthority -e XAUTHORITY=/home/$(id -un)/.Xauthority \
-e DOCKER_USER_NAME=$(id -un) \
-e DOCKER_USER_ID=$(id -u) \
-e DOCKER_USER_GROUP_NAME=$(id -gn) \
-e DOCKER_USER_GROUP_ID=$(id -g) \
-v <PATH_TO_YOUR_OPENPOSE_PROJECT>:/root/openpose \
turlucode/openpose:cuda10-cudnn7
Important Remark:
- Please note that you need to pass the Xauthority to the correct user's home directory.
- You may need to run xhost si:localuser:$USER or, in the worst case, xhost local:root if you get errors like Error: cannot open display.
To mount your local openpose project, you can just use the following docker feature:
# for root user
-v $HOME/<some_path>/openpose:/root/openpose
# for local user
-v $HOME/<some_path>/openpose:/home/$(id -un)/openpose
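If you do not have a local copy of OpenPose yet, you can clone it first and mount that path; the destination below just reuses the placeholder from the examples above:
# Clone OpenPose (including its submodules) to a local path of your choice
git clone --recursive https://github.com/CMU-Perceptual-Computing-Lab/openpose.git $HOME/<some_path>/openpose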
If you use PyCharm locally, you can also expose it to your docker image and use it inside the container.
# Mount PyCharm program folder
-v $HOME/<some_path>/pycharm-community:/home/$(id -un)/pycharm-community
# Mount PyCharm config folder
-v $HOME/<some_path>/pycharm_config:/home/$(id -un)/.PyCharmCE2018.2
Important Remark: Depending on the PyCharm version you are using, the mount point .PyCharmCE2018.2 might be different. Verify it first on your local machine.
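Assuming the two mounts above, you could then launch PyCharm from a shell inside the container roughly like this (the bin/pycharm.sh launcher path is the usual layout of the PyCharm tarball and may differ for your version):
# Start the mounted PyCharm from inside the container (paths assume the mounts above)
/home/$(id -un)/pycharm-community/bin/pycharm.sh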
For both root and the custom user, use:
-v $HOME/.ssh:/root/.ssh
For the custom user, the container will make sure to copy the keys to the right location.
If you have a video device node like /dev/video0, e.g. a compatible USB camera, you can pass it to the docker container like this:
--device /dev/video0
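To check which video device nodes exist on the host before passing one through, something like this should do:
# List the video device nodes available on the host
ls -l /dev/video*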
Assuming you either mounted the OpenPose project or cloned it inside the container, a typical build sequence is:
cd <openpose_project_root_folder>
mkdir build && cd build
cmake -DCMAKE_BUILD_TYPE=Release -DBUILD_PYTHON=ON ..
make -j8 && sudo make install
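As a rough sanity check after the build (the paths below are the usual OpenPose defaults and may differ in your setup), you can try the demo binary and the Python bindings from the project root:
# Run the demo on the bundled sample video (requires a working X display in the container)
./build/examples/openpose/openpose.bin --video examples/media/video.avi
# Check that the Python bindings import; /usr/local/python is the usual install location
python3 -c "import sys; sys.path.append('/usr/local/python'); from openpose import pyopenpose"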
For more details, check the official instructions.
The images in this repository are based on the following work:
- nvidia-opengl
- nvidia-cuda - Hierarchy is base->runtime->devel
- Please let us know by filing a new issue.
- You can contribute by opening a pull request.