
Commit

Merge branch 'securefederatedai:develop' into exp_agg_workflow_private_attrs_fix
refai06 authored Nov 4, 2024
2 parents 4c06768 + 4898103 commit 2d0a1c9
Showing 26 changed files with 232 additions and 255 deletions.
8 changes: 2 additions & 6 deletions .github/workflows/dockerization.yml
@@ -24,16 +24,12 @@ jobs:
python -m pip install --upgrade pip
pip install .
- name: Build base image
run: |
docker build -t openfl -f openfl-docker/Dockerfile.base .
- name: Create workspace image
run: |
fx workspace create --prefix example_workspace --template keras_cnn_mnist
cd example_workspace
fx plan initialize -a localhost
-fx workspace dockerize --base_image openfl
+fx workspace dockerize --save --revision https://github.com/${GITHUB_REPOSITORY}.git@${{ github.event.pull_request.head.sha }}
- name: Create certificate authority for workspace
run: |
@@ -76,7 +72,7 @@ jobs:
- name: Load workspace image
run: |
cd example_workspace
-docker load -i example_workspace_image.tar
+docker load -i example_workspace.tar
- name: Run aggregator and collaborator
run: |
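The dockerization steps above can be reproduced locally. A minimal sketch, assuming `fx` and `docker` are installed and you run from the repository root; the tag, prefix, and template names mirror the workflow, and flag availability may vary by OpenFL version:

```shell
# Local sketch of the CI dockerization flow; tools are probed first so
# the script degrades gracefully on a machine without docker or fx.
set -u

BASE_TAG="openfl"
WORKSPACE="example_workspace"
ARCHIVE="${WORKSPACE}.tar"  # --save now names the archive after the workspace

if command -v docker >/dev/null 2>&1 && command -v fx >/dev/null 2>&1; then
  docker build -t "${BASE_TAG}" -f openfl-docker/Dockerfile.base .
  fx workspace create --prefix "${WORKSPACE}" --template keras_cnn_mnist
  cd "${WORKSPACE}"
  fx plan initialize -a localhost
  fx workspace dockerize --base_image "${BASE_TAG}" --save
  docker load -i "${ARCHIVE}"
else
  echo "skipping: docker and fx are required" >&2
fi
```

Note the archive name change carried by this commit: `docker load` now expects `example_workspace.tar`, not `example_workspace_image.tar`.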
@@ -1,7 +1,7 @@
# This workflow will install Python dependencies, run tests and lint with a single version of Python
# For more information see: https://help.github.com/actions/language-and-framework-guides/using-python-with-github-actions

-name: Security Scans
+name: Hadolint Security Scan

on:
pull_request:
20 changes: 17 additions & 3 deletions .github/workflows/trivy.yml
@@ -33,11 +33,18 @@ jobs:

- name: Install Trivy
run: |
-curl -sfL https://raw.githubusercontent.com/aquasecurity/trivy/main/contrib/install.sh | sudo sh -s -- -b /usr/local/bin v0.55.0
+curl -sfL https://raw.githubusercontent.com/aquasecurity/trivy/main/contrib/install.sh | sudo sh -s -- -b /usr/local/bin
- name: Run Trivy code vulnerability scanner (JSON Output)
run: |
-trivy --quiet fs --format json --output trivy-code-results.json --ignore-unfixed --vuln-type os,library --severity CRITICAL,HIGH,MEDIUM,LOW .
+trivy --quiet fs \
+--format json \
+--output trivy-code-results.json \
+--ignore-unfixed \
+--vuln-type os,library \
+--severity CRITICAL,HIGH,MEDIUM,LOW \
+--db-repository 'ghcr.io/aquasecurity/trivy-db,public.ecr.aws/aquasecurity/trivy-db' \
+.
- name: Upload Code Vulnerability Scan Results
uses: actions/upload-artifact@v3
@@ -64,7 +71,14 @@ jobs:

- name: Run Trivy code vulnerability scanner (SPDX-JSON Output)
run: |
-trivy --quiet fs --format spdx-json --output trivy-code-spdx-results.json --ignore-unfixed --vuln-type os,library --severity CRITICAL,HIGH,MEDIUM,LOW .
+trivy --quiet fs \
+--format spdx-json \
+--output trivy-code-spdx-results.json \
+--ignore-unfixed \
+--vuln-type os,library \
+--severity CRITICAL,HIGH,MEDIUM,LOW \
+--db-repository 'ghcr.io/aquasecurity/trivy-db,public.ecr.aws/aquasecurity/trivy-db' \
+.
- name: Upload Code Vulnerability Scan Results
uses: actions/upload-artifact@v3
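The JSON report produced by the scan step above can be summarized offline. A hedged sketch, assuming `jq` is available and the `Results[].Vulnerabilities[].Severity` layout that `trivy fs --format json` emits:

```shell
# Sketch: count vulnerabilities per severity in a Trivy JSON report.
# Both jq and the report file are probed, so this is safe to run anywhere.
REPORT="trivy-code-results.json"

if command -v jq >/dev/null 2>&1 && [ -f "${REPORT}" ]; then
  # Collect every Severity value, group, and emit {"HIGH": n, ...}
  jq -r '[.Results[]?.Vulnerabilities[]?.Severity]
         | group_by(.) | map({(.[0]): length}) | add' "${REPORT}"
else
  echo "skipping: jq or ${REPORT} not available" >&2
fi
```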
4 changes: 2 additions & 2 deletions .github/workflows/workflow_interface_101_mnist.yml
@@ -34,11 +34,11 @@ jobs:

- name: Run Notebook
run: |
-jupyter nbconvert --execute --to notebook ./openfl-tutorials/experimental/Workflow_Interface_101_MNIST.ipynb
+jupyter nbconvert --execute --to notebook ./openfl-tutorials/experimental/101_MNIST.ipynb
echo "Notebook run completed"
- name: Tar files
-run: tar -cvf notebook.tar ./openfl-tutorials/experimental/Workflow_Interface_101_MNIST.nbconvert.ipynb
+run: tar -cvf notebook.tar ./openfl-tutorials/experimental/101_MNIST.nbconvert.ipynb

- name: Upload Artifacts
uses: actions/upload-artifact@v4
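The renamed tutorial can be exercised locally with the same two commands the job runs. A sketch, assuming `jupyter` and a repository checkout are present (the notebooks dropped their `Workflow_Interface_` prefix in this commit):

```shell
# Sketch: execute the renamed 101_MNIST notebook and archive the result,
# mirroring the CI steps above. Tools/files are probed before use.
NB="openfl-tutorials/experimental/101_MNIST.ipynb"

if command -v jupyter >/dev/null 2>&1 && [ -f "${NB}" ]; then
  jupyter nbconvert --execute --to notebook "./${NB}"
  # nbconvert writes <name>.nbconvert.ipynb alongside the source notebook
  tar -cvf notebook.tar "./${NB%.ipynb}.nbconvert.ipynb"
else
  echo "skipping: jupyter or ${NB} not available" >&2
fi
```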
2 changes: 1 addition & 1 deletion README.md
@@ -44,7 +44,7 @@ Setup long-lived components to run many experiments in series. Recommended for F
Define an experiment and distribute it manually. All participants can verify model code and [FL plan](https://openfl.readthedocs.io/en/latest/about/features_index/taskrunner.html#federated-learning-plan-fl-plan-settings) prior to execution. The federation is terminated when the experiment is finished.

- [Workflow Interface](https://openfl.readthedocs.io/en/latest/about/features_index/workflowinterface.html) ([*experimental*](https://openfl.readthedocs.io/en/latest/developer_guide/experimental_features.html)):
-Create complex experiments that extend beyond traditional horizontal federated learning. See the [experimental tutorials](https://github.com/securefederatedai/openfl/blob/develop/openfl-tutorials/experimental/) to learn how to coordinate [aggregator validation after collaborator model training](https://github.com/securefederatedai/openfl/tree/develop/openfl-tutorials/experimental/Workflow_Interface_102_Aggregator_Validation.ipynb), [perform global differentially private federated learning](https://github.com/psfoley/openfl/tree/experimental-workflow-interface/openfl-tutorials/experimental/Global_DP), measure the amount of private information embedded in a model after collaborator training with [privacy meter](https://github.com/securefederatedai/openfl/blob/develop/openfl-tutorials/experimental/Privacy_Meter/readme.md), or [add a watermark to a federated model](https://github.com/securefederatedai/openfl/blob/develop/openfl-tutorials/experimental/Workflow_Interface_301_MNIST_Watermarking.ipynb).
+Create complex experiments that extend beyond traditional horizontal federated learning. See the [experimental tutorials](https://github.com/securefederatedai/openfl/blob/develop/openfl-tutorials/experimental/) to learn how to coordinate [aggregator validation after collaborator model training](https://github.com/securefederatedai/openfl/tree/develop/openfl-tutorials/experimental/102_Aggregator_Validation.ipynb), [perform global differentially private federated learning](https://github.com/psfoley/openfl/tree/experimental-workflow-interface/openfl-tutorials/experimental/Global_DP), measure the amount of private information embedded in a model after collaborator training with [privacy meter](https://github.com/securefederatedai/openfl/blob/develop/openfl-tutorials/experimental/Privacy_Meter/readme.md), or [add a watermark to a federated model](https://github.com/securefederatedai/openfl/blob/develop/openfl-tutorials/experimental/301_MNIST_Watermarking.ipynb).

The quickest way to test OpenFL is to follow our [tutorials](https://github.com/securefederatedai/openfl/tree/develop/openfl-tutorials). </br>
Read the [blog post](https://towardsdatascience.com/go-federated-with-openfl-8bc145a5ead1) explaining steps to train a model with OpenFL. </br>
8 changes: 4 additions & 4 deletions docs/about/releases.md
@@ -8,10 +8,10 @@
- [**Horovod**](https://github.com/securefederatedai/openfl/tree/develop/openfl-workspace/torch_llm_horovod): Use horovod to efficiently train LLMs across multiple private clusters
- **Neuralchat-7b fine-tuning**: Learn how to fine-tune [neuralchat-7b](https://github.com/securefederatedai/openfl/tree/develop/openfl-tutorials/experimental/LLM/neuralchat) using the Intel® Extension for Transformers and the workflow interface.

-- **Workflow API enhancements**: Introducing an experimental [Workspace Export](https://github.com/securefederatedai/openfl/blob/develop/openfl-tutorials/experimental/Workflow_Interface_1001_Workspace_Creation_from_JupyterNotebook.ipynb) feature that can be used to transform a Workflow API-based FL experiment into the TaskRunner API format for running in a distributed deployment. There is also groundwork laid for a future FederatedRuntime implementation for Workflow API, in addition to the currently supported LocalRuntime.
+- **Workflow API enhancements**: Introducing an experimental [Workspace Export](https://github.com/securefederatedai/openfl/blob/develop/openfl-tutorials/experimental/1001_Workspace_Creation_from_JupyterNotebook.ipynb) feature that can be used to transform a Workflow API-based FL experiment into the TaskRunner API format for running in a distributed deployment. There is also groundwork laid for a future FederatedRuntime implementation for Workflow API, in addition to the currently supported LocalRuntime.
- **Federated Evaluation**: Federated evaluation allows for the assessment of ML models in a federated learning system by validating the model's performance locally on decentralized collaborator nodes, and then aggregating these metrics to gauge overall effectiveness, without compromising data privacy and security. FE is now officially supported by OpenFL, including [example tutorials](https://openfl.readthedocs.io/en/latest/about/features_index/fed_eval.html) on how to use this new feature (via TaskRunner API).

-- **Expanded AI Accelerator Support**: Intel® Data Center GPU Max Series support via the Intel® Extension for PyTorch, including examples for training on datasets such as [MNIST](https://github.com/securefederatedai/openfl/blob/develop/openfl-tutorials/experimental/Workflow_Interface_104_MNIST_XPU.ipynb) (via Workflow API) and [TinyImageNet](https://github.com/securefederatedai/openfl/tree/develop/openfl-tutorials/interactive_api/PyTorch_TinyImageNet_XPU) (via Interactive API)
+- **Expanded AI Accelerator Support**: Intel® Data Center GPU Max Series support via the Intel® Extension for PyTorch, including examples for training on datasets such as [MNIST](https://github.com/securefederatedai/openfl/blob/develop/openfl-tutorials/experimental/104_MNIST_XPU.ipynb) (via Workflow API) and [TinyImageNet](https://github.com/securefederatedai/openfl/tree/develop/openfl-tutorials/interactive_api/PyTorch_TinyImageNet_XPU) (via Interactive API)

- **Improved straggler collaborator handling**: Improvements and bug fixes to aggregator’s fault-tolerance when collaborators stop responding or drop out of a federation. Introducing a cut-off timer-based policy and enabling other policies to be plugged-in. This capability is particularly relevant for large or geo-distributed federations.

@@ -49,10 +49,10 @@ We are excited to announce the release of OpenFL 1.5.1 - our first since moving
### 1.5 Highlights
* **New Workflows Interface (Experimental)** - a new way of composing federated learning experiments inspired by [Metaflow](https://github.com/Netflix/metaflow). Enables the creation of custom aggregator and collaborators tasks. This initial release is intended for simulation on a single node (using the LocalRuntime); distributed execution (FederatedRuntime) to be enabled in a future release.
* **New use cases enabled by the workflow interface**:
-* **[End-of-round validation with aggregator dataset](https://github.com/intel/openfl/blob/develop/openfl-tutorials/experimental/Workflow_Interface_102_Aggregator_Validation.ipynb)**
+* **[End-of-round validation with aggregator dataset](https://github.com/intel/openfl/blob/develop/openfl-tutorials/experimental/102_Aggregator_Validation.ipynb)**
* **[Privacy Meter](https://github.com/intel/openfl/tree/develop/openfl-tutorials/experimental/Privacy_Meter)** - Privacy meter, based on state-of-the-art membership inference attacks, provides a tool to quantitatively audit data privacy in statistical and machine learning algorithms. The objective of a membership inference attack is to determine whether a given data record was in the training dataset of the target model. Measures of success (accuracy, area under the ROC curve, true positive rate at a given false positive rate ...) for particular membership inference attacks against a target model are used to estimate privacy loss for that model (how much information a target model leaks about its training data). Since stronger attacks may be possible, these measures serve as lower bounds of the actual privacy loss. The Privacy Meter workflow example generates privacy loss reports for all party's local model updates as well as the global models throughout all rounds of the FL training.
* **[Vertical Federated Learning Examples](https://github.com/intel/openfl/tree/develop/openfl-tutorials/experimental/Vertical_FL)**
-* **[Federated Model Watermarking](https://github.com/intel/openfl/blob/develop/openfl-tutorials/experimental/Workflow_Interface_301_MNIST_Watermarking.ipynb)** using the [WAFFLE](https://arxiv.org/pdf/2008.07298.pdf) method
+* **[Federated Model Watermarking](https://github.com/intel/openfl/blob/develop/openfl-tutorials/experimental/301_MNIST_Watermarking.ipynb)** using the [WAFFLE](https://arxiv.org/pdf/2008.07298.pdf) method
* **[Differential Privacy](https://github.com/intel/openfl/tree/develop/openfl-tutorials/experimental/Global_DP)** – Global differentially private federated learning using Opacus library to achieve a differentially private result w.r.t the inclusion or exclusion of any collaborator in the training process. At each round, a subset of collaborators are selected using a Poisson distribution over all collaborators, the selected collaborators perform local training with periodic clipping of their model delta (with respect to the current global model) to bound their contribution to the average of local model updates. Gaussian noise is then added to the average of these local models at the aggregator. This example is implemented in two different but statistically equivalent ways – the lower level API utilizes RDPAccountant and DPDataloader Opacus objects to perform privacy accounting and collaborator selection respectively, whereas the higher level API uses PrivacyEngine Opacus object for collaborator selection and internally utilizes RDPAccountant for privacy accounting.
* **[Habana Accelerator Support](https://github.com/intel/openfl/tree/develop/openfl-tutorials/interactive_api/HPU/PyTorch_TinyImageNet)**
* **Official support for Python 3.9 and 3.10**
@@ -17,7 +17,7 @@ This tutorial introduces the API to get up and running with your first horizonta

- Aims for syntactic consistency with the Netflix MetaFlow project. Infrastructure reuse where possible.

-See `full notebook <https://github.com/securefederatedai/openfl/blob/f1657abe88632d542504d6d71ca961de9333913f/openfl-tutorials/experimental/Workflow_Interface_101_MNIST.ipynb>`_.
+See `full notebook <https://github.com/securefederatedai/openfl/blob/f1657abe88632d542504d6d71ca961de9333913f/openfl-tutorials/experimental/101_MNIST.ipynb>`_.

**What is it?**
The workflow interface is a new way of composing federated learning experiments with |productName|.
23 changes: 11 additions & 12 deletions openfl-docker/Dockerfile.base
@@ -2,8 +2,9 @@
# SPDX-License-Identifier: Apache-2.0
# ------------------------------------
# OpenFL Base Image
+# $> docker build . -t openfl -f Dockerfile.base [--build-arg OPENFL_REVISION=GIT_URL@COMMIT_ID]
# ------------------------------------
-FROM ubuntu:22.04 as base
+FROM ubuntu:22.04 AS base

# Configure network proxy, if required, in ~/.docker/config.json
ENV DEBIAN_FRONTEND=noninteractive
@@ -13,28 +14,26 @@ SHELL ["/bin/bash", "-o", "pipefail", "-c"]
RUN --mount=type=cache,id=apt-dev,target=/var/cache/apt \
apt-get update && \
apt-get install -y \
-git \
python3-pip \
python3.10-dev \
ca-certificates \
build-essential \
+git \
--no-install-recommends && \
-apt-get purge -y linux-libc-dev && \
rm -rf /var/lib/apt/lists/*
+RUN apt-get purge -y linux-libc-dev

# Create an unprivileged user.
RUN groupadd -g 1001 default && \
-useradd -m -u 1001 -g default openfl
-USER openfl
+useradd -m -u 1001 -g default user
+USER user
+WORKDIR /home/user
+ENV PATH=/home/user/.local/bin:$PATH

# Install OpenFL.
-WORKDIR /home/openfl
-COPY --chown=openfl:default . .
-ENV PATH=/home/openfl/.local/bin:$PATH
+ARG OPENFL_REVISION=https://github.com/securefederatedai/[email protected]
RUN pip install --no-cache-dir -U pip setuptools wheel && \
-pip install --no-cache-dir -e .

-# Download thirdparty licenses.
-RUN INSTALL_SOURCES=yes /home/openfl/openfl-docker/licenses.sh
+pip install --no-cache-dir git+${OPENFL_REVISION} && \
+INSTALL_SOURCES=yes /home/user/.local/lib/python3.10/site-packages/openfl-docker/licenses.sh

CMD ["/bin/bash"]
8 changes: 3 additions & 5 deletions openfl-docker/Dockerfile.workspace
@@ -8,13 +8,11 @@ FROM ${BASE_IMAGE}

SHELL ["/bin/bash", "-o", "pipefail", "-c"]

-USER openfl
+USER user
ARG WORKSPACE_NAME
COPY ${WORKSPACE_NAME}.zip .

-WORKDIR /home/openfl
RUN fx workspace import --archive ${WORKSPACE_NAME}.zip && \
-pip install -r ${WORKSPACE_NAME}/requirements.txt
+pip install --no-cache-dir -r ${WORKSPACE_NAME}/requirements.txt

-WORKDIR /home/openfl/${WORKSPACE_NAME}
+WORKDIR /home/user/${WORKSPACE_NAME}
CMD ["/bin/bash"]
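A workspace image is built on top of the base image via the `WORKSPACE_NAME` build-arg shown in this Dockerfile. A sketch, assuming a workspace archive `example_workspace.zip` sits in the build context and that `BASE_IMAGE` is declared as a build-arg above the hunk shown here:

```shell
# Sketch: build a workspace image from Dockerfile.workspace. The zip
# archive must be in the build context so COPY ${WORKSPACE_NAME}.zip works.
WORKSPACE_NAME="example_workspace"

if command -v docker >/dev/null 2>&1 && [ -f "${WORKSPACE_NAME}.zip" ]; then
  docker build . -t "${WORKSPACE_NAME}" -f openfl-docker/Dockerfile.workspace \
    --build-arg BASE_IMAGE=openfl \
    --build-arg WORKSPACE_NAME="${WORKSPACE_NAME}"
else
  echo "skipping: docker or ${WORKSPACE_NAME}.zip not available" >&2
fi
```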
@@ -962,7 +962,7 @@
"from openfl.experimental.workspace_export import WorkspaceExport\n",
"\n",
"WorkspaceExport.export(\n",
-" notebook_path='./Workflow_Interface_1001_Workspace_Creation_from_JupyterNotebook.ipynb',\n",
+" notebook_path='./1001_Workspace_Creation_from_JupyterNotebook.ipynb',\n",
" output_workspace=f\"/home/{os.environ['USER']}/generated-workspace\"\n",
")"
]
@@ -7,7 +7,7 @@
"metadata": {},
"source": [
"# Workflow Interface 101: Quickstart\n",
-"[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/intel/openfl/blob/develop/openfl-tutorials/experimental/Workflow_Interface_101_MNIST.ipynb)"
+"[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/intel/openfl/blob/develop/openfl-tutorials/experimental/101_MNIST.ipynb)"
]
},
{
@@ -16,7 +16,7 @@
"id": "bd059520",
"metadata": {},
"source": [
-"In this tutorial, we build on the ideas from the [first](https://github.com/securefederatedai/openfl/blob/develop/openfl-tutorials/experimental/Workflow_Interface_101_MNIST.ipynb) quick start notebook, and demonstrate how to perform validation on the aggregator after training."
+"In this tutorial, we build on the ideas from the [first](https://github.com/securefederatedai/openfl/blob/develop/openfl-tutorials/experimental/101_MNIST.ipynb) quick start notebook, and demonstrate how to perform validation on the aggregator after training."
]
},
{
@@ -5,7 +5,7 @@
"metadata": {},
"source": [
"# Workflow Interface 104: Working with Keras\n",
-"[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/intel/openfl/blob/develop/openfl-tutorials/experimental/Workflow_Interface_104_Keras_MNIST_with_GPU.ipynb)"
+"[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/intel/openfl/blob/develop/openfl-tutorials/experimental/104_Keras_MNIST_with_GPU.ipynb)"
]
},
{
@@ -6,7 +6,7 @@
"metadata": {},
"source": [
"# Workflow Interface 104: MNIST XPU\n",
-"[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/intel/openfl/blob/develop/openfl-tutorials/experimental/Workflow_Interface_104_MNIST_XPU.ipynb)"
+"[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/intel/openfl/blob/develop/openfl-tutorials/experimental/104_MNIST_XPU.ipynb)"
]
},
{
@@ -7,7 +7,7 @@
"metadata": {},
"source": [
"# Workflow Interface 201: Using Ray to request exclusive GPUs\n",
-"[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/intel/openfl/blob/develop/openfl-tutorials/experimental/Workflow_Interface_201_Exclusive_GPUs_with_Ray.ipynb)"
+"[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/intel/openfl/blob/develop/openfl-tutorials/experimental/201_Exclusive_GPUs_with_Ray.ipynb)"
]
},
{
@@ -8,7 +8,7 @@
"source": [
"# Workflow Interface 301: Watermarking\n",
"\n",
-"[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/intel/openfl/blob/develop/openfl-tutorials/experimental/Workflow_Interface_301_MNIST_Watermarking.ipynb)"
+"[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/intel/openfl/blob/develop/openfl-tutorials/experimental/301_MNIST_Watermarking.ipynb)"
]
},
{
Expand Down
Loading

0 comments on commit 2d0a1c9

Please sign in to comment.