Merge pull request #10 from ins-amu/develop
Develop
Ziaeemehr authored Feb 24, 2025
2 parents e2ac138 + a4b2ef2 commit 2aa0879
Showing 5 changed files with 64 additions and 16,589 deletions.
Dockerfile: 48 changes (31 additions, 17 deletions)
@@ -1,28 +1,42 @@
# Use the smallest official Python image
FROM python:3.9-slim
FROM nvidia/cuda:11.8.0-cudnn8-runtime-ubuntu20.04

# Set environment variables to avoid interactive prompts
# Set environment to avoid interactive prompts
ENV DEBIAN_FRONTEND=noninteractive

# Set the working directory inside the container
WORKDIR /app

RUN apt-get update && apt-get install -y --no-install-recommends \
swig \
# Install system dependencies
RUN apt-get update && apt-get install -y \
python3.10 \
python3-pip \
build-essential \
git \
gcc \
g++ \
libatlas-base-dev \
libopenblas-dev \
libhdf5-dev \
swig \
tzdata \
&& ln -s /usr/bin/python3 /usr/bin/python \
&& rm -rf /var/lib/apt/lists/*

# Set timezone (e.g., UTC) to avoid configuration prompts
RUN echo "Etc/UTC" > /etc/timezone && \
ln -fs /usr/share/zoneinfo/Etc/UTC /etc/localtime && \
dpkg-reconfigure -f noninteractive tzdata

# Copy only the package files (without installing dependencies)
COPY . .
WORKDIR /app

RUN pip install --upgrade pip

# Install Python dependencies
RUN pip install --no-cache-dir hatchling setuptools wheel && \
pip install .
RUN pip install --no-cache-dir \
hatchling \
setuptools>=45 \
wheel \
swig>=4.0

COPY . .

# Optional: Support GPU version (comment out if not needed)
ARG GPU_SUPPORT=false
RUN if [ "$GPU_SUPPORT" = "true" ]; then pip install cupy; fi
RUN pip install . --no-cache-dir
RUN pip install cupy-cuda11x

# Set the default command (modify as needed)
CMD ["python", "-c", "from vbi.utils import test_imports; test_imports()"]
README.md: 28 changes (27 additions, 1 deletion)
@@ -27,7 +27,33 @@
To use the Docker image, you can pull it from the GitHub Container Registry and run it as follows:

```bash
docker run --rm -it ghcr.io/ins-amu/vbi:main python3 -c 'from vbi.utils import test_imports; test_imports()'
docker build -t vbi-project . # build
docker run --gpus all -it vbi-project # use with gpu

# The output should look something like this:

Dependency Check

Package Version Status
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
vbi v0.1.3 ✅ Available
numpy 1.24.4 ✅ Available
scipy 1.10.1 ✅ Available
matplotlib 3.7.5 ✅ Available
sbi 0.22.0 ✅ Available
torch 2.4.1+cu121 ✅ Available
cupy 12.3.0 ✅ Available

Torch GPU available: True
Torch device count: 1
Torch CUDA version: 12.1
CuPy GPU available: True
CuPy device count: 1
CUDA Version: 11.8
Device Name: NVIDIA RTX A5000
Total Memory: 23.68 GB
Compute Capability: 8.6

```


docs/index.rst: 4 changes (4 additions, 0 deletions)
@@ -12,6 +12,10 @@ Virtual Brain Inference (VBI)
:width: 200px
:align: center


The **Virtual Brain Inference (VBI)** toolkit is an open-source, flexible solution for probabilistic inference on virtual brain models. It integrates computational models with personalized anatomical data to deepen the understanding of brain dynamics and neurological processes. VBI provides **fast simulations** and comprehensive **feature extraction**, and employs **deep neural density estimators** to handle various neuroimaging data types. Its goal is to solve the inverse problem of identifying the control parameters that best explain observed data, thereby making these models applicable in clinical settings. VBI leverages high-performance computing through GPU acceleration and C++ code for efficient processing.


Workflow
========

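The description added to docs/index.rst above refers to deep neural density estimators for the inverse problem of recovering control parameters from observed data. As a rough sketch of that general simulation-based inference pattern, using the `sbi` library listed in the dependency report, the example below trains a neural posterior estimator on simulated (parameter, feature) pairs; the simulator, prior, and observation are toy placeholders and do not reflect VBI's actual models or API.

```python
# Schematic simulation-based inference with a neural density estimator.
# The simulator and prior below are toy placeholders, not VBI's models.
import torch
from sbi.inference import SNPE
from sbi.utils import BoxUniform

def simulator(theta: torch.Tensor) -> torch.Tensor:
    # Placeholder "virtual brain" simulator: maps control parameters
    # to summary features of the simulated signal.
    return theta + 0.1 * torch.randn_like(theta)

prior = BoxUniform(low=torch.zeros(2), high=torch.ones(2))

# Simulate a training set of (parameter, feature) pairs.
theta = prior.sample((1000,))
x = simulator(theta)

# Train the neural posterior estimator and condition on an observation.
inference = SNPE(prior=prior)
density_estimator = inference.append_simulations(theta, x).train()
posterior = inference.build_posterior(density_estimator)

x_obs = torch.tensor([0.5, 0.5])
samples = posterior.sample((1000,), x=x_obs)
print(samples.mean(dim=0))  # posterior mean of the control parameters
```

In VBI, the simulator would be a virtual brain model built from personalized anatomical data, and `x` would be features extracted from the simulated and observed neuroimaging signals.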