Commit
interface > api within notebooks and for requirements txt file
Signed-off-by: kta-intel <[email protected]>
kta-intel committed May 10, 2024
1 parent e0f6078 commit 30c5cd3
Showing 19 changed files with 2,770 additions and 100 deletions.
10 changes: 5 additions & 5 deletions openfl-tutorials/experimental/Global_DP/README.md
@@ -7,14 +7,14 @@ Global DP implementation uses the [Opacus library](https://opacus.ai/) to perfor

Prerequisites:

`pip install -r ../requirements_workflow_interface.txt`
`pip install -r ../requirements_workflow_api.txt`
`pip install -r requirements_global_dp.txt`

1. `Workflow_Interface_Mnist_Implementation_1.py` uses the lower-level RDPAccountant and DPDataLoader Opacus objects to perform the privacy accounting and collaborator selection, respectively. Local model aggregation and noising are implemented independently of Opacus, and the final accounting is calculated by the RDPAccountant using information about how many rounds of training were performed. To run with this version:
1. `Workflow_API_Mnist_Implementation_1.py` uses the lower-level RDPAccountant and DPDataLoader Opacus objects to perform the privacy accounting and collaborator selection, respectively. Local model aggregation and noising are implemented independently of Opacus, and the final accounting is calculated by the RDPAccountant using information about how many rounds of training were performed. To run with this version:

`python Workflow_Interface_Mnist_Implementation_1.py --config_path test_config.yml`
`python Workflow_API_Mnist_Implementation_1.py --config_path test_config.yml`

2. `Workflow_Interface_Mnist_Implementation_2.py` uses the higher-level PrivacyEngine Opacus object to wrap (via the `make_private` method) a global data loader (serving up collaborator names) and a global optimizer (performing the local model aggregation and noising); RDP accounting is performed internally by PrivacyEngine, which tracks the usage of these wrapped objects over the course of training. To run with this version:
2. `Workflow_API_Mnist_Implementation_2.py` uses the higher-level PrivacyEngine Opacus object to wrap (via the `make_private` method) a global data loader (serving up collaborator names) and a global optimizer (performing the local model aggregation and noising); RDP accounting is performed internally by PrivacyEngine, which tracks the usage of these wrapped objects over the course of training. To run with this version:

`python Workflow_Interface_Mnist_Implementation_2.py --config_path test_config.yml`
`python Workflow_API_Mnist_Implementation_2.py --config_path test_config.yml`
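Both implementations build on standard Opacus primitives. The snippet below is a minimal, self-contained sketch of those primitives (`RDPAccountant`/`DPDataLoader` for the first implementation, `PrivacyEngine.make_private` for the second); it is not the tutorial code itself, and the model, data, and hyper-parameters are placeholders.

```python
# Illustrative sketch of the Opacus primitives behind the two implementations.
# The model, data, and hyper-parameters below are placeholders, not the tutorial's code.
import torch
from torch.utils.data import DataLoader, TensorDataset
from opacus import PrivacyEngine
from opacus.accountants import RDPAccountant
from opacus.data_loader import DPDataLoader

model = torch.nn.Linear(10, 2)                      # stand-in for the global model
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
dataset = TensorDataset(torch.randn(100, 10), torch.randint(0, 2, (100,)))
loader = DataLoader(dataset, batch_size=10)
noise_multiplier, clip_norm, target_delta, rounds = 1.0, 1.0, 1e-5, 5

# Implementation 1 style: Poisson sampling via DPDataLoader plus manual RDP accounting.
dp_loader = DPDataLoader.from_data_loader(loader)   # shown only to illustrate the primitive
accountant = RDPAccountant()
for _ in range(rounds):
    # ... aggregate and noise the local models outside of Opacus ...
    accountant.step(noise_multiplier=noise_multiplier, sample_rate=1 / len(loader))
print("epsilon, manual accounting:", accountant.get_epsilon(delta=target_delta))

# Implementation 2 style: PrivacyEngine wraps the optimizer and data loader via make_private
# and performs RDP accounting internally as the wrapped objects are used.
engine = PrivacyEngine(accountant="rdp")
model, optimizer, train_loader = engine.make_private(
    module=model,
    optimizer=optimizer,
    data_loader=loader,
    noise_multiplier=noise_multiplier,
    max_grad_norm=clip_norm,
)
criterion = torch.nn.CrossEntropyLoss()
for x, y in train_loader:                           # one pass; the engine records each optimizer step
    optimizer.zero_grad()
    criterion(model(x), y).backward()
    optimizer.step()
print("epsilon, PrivacyEngine:", engine.get_epsilon(delta=target_delta))
```

In the tutorial these primitives operate at the global level (selecting collaborators and noising the aggregated model) rather than on per-sample gradients, but the accounting calls are the same.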

@@ -5,9 +5,9 @@
"id": "14821d97",
"metadata": {},
"source": [
"# Workflow Interface\n",
"# Workflow API\n",
"## Fine-tuning neural-chat-7b-v3 using Intel(R) Extension for Transformers and OpenFL\n",
"[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/intel/openfl/blob/develop/openfl-tutorials/experimental/LLM/neuralchat/Workflow_Interface_NeuralChat.ipynb)"
"[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/intel/openfl/blob/develop/openfl-tutorials/experimental/LLM/neuralchat/Workflow_API_NeuralChat.ipynb)"
]
},
{
@@ -33,7 +33,7 @@
"id": "a7989e72",
"metadata": {},
"source": [
"The workflow interface is a new way of composing federated learning expermients with OpenFL. It was borne through conversations with researchers and existing users who had novel use cases that didn't quite fit the standard horizontal federated learning paradigm. "
"The workflow API is a new way of composing federated learning expermients with OpenFL. It was borne through conversations with researchers and existing users who had novel use cases that didn't quite fit the standard horizontal federated learning paradigm. "
]
},
{
@@ -56,7 +56,7 @@
"metadata": {},
"source": [
"### Installing OpenFL\n",
"- Lets now install OpenFL and the necessary dependencies for the workflow interface by running the cell below:"
"- Lets now install OpenFL and the necessary dependencies for the workflow API by running the cell below:"
]
},
{
@@ -67,7 +67,7 @@
"outputs": [],
"source": [
"!pip install git+https://github.com/intel/openfl.git\n",
"!pip install -r ../../requirements_workflow_interface.txt\n",
"!pip install -r ../../requirements_workflow_api.txt\n",
"!pip install numpy --upgrade"
]
},
@@ -432,7 +432,7 @@
"scrolled": true
},
"source": [
"Now we come to the flow definition. The OpenFL Workflow Interface adopts the conventions set by Metaflow, that every workflow begins with `start` and concludes with the `end` task. The aggregator begins with an optionally passed in model and optimizer. The aggregator begins the flow with the `start` task, where the list of collaborators is extracted from the runtime (`self.collaborators = self.runtime.collaborators`) and is then used as the list of participants to run the task listed in `self.next`, `aggregated_model_validation`. The model, optimizer, and anything that is not explicitly excluded from the next function will be passed from the `start` function on the aggregator to the `aggregated_model_validation` task on the collaborator. Where the tasks run is determined by the placement decorator that precedes each task definition (`@aggregator` or `@collaborator`). Once each of the collaborators (defined in the runtime) complete the `aggregated_model_validation` task, they pass their current state onto the `train` task, from `train` to `local_model_validation`, and then finally to `join` at the aggregator. It is in `join` that an average is taken of the model weights, and the next round can begin.\n",
"Now we come to the flow definition. The OpenFL Workflow API adopts the conventions set by Metaflow, that every workflow begins with `start` and concludes with the `end` task. The aggregator begins with an optionally passed in model and optimizer. The aggregator begins the flow with the `start` task, where the list of collaborators is extracted from the runtime (`self.collaborators = self.runtime.collaborators`) and is then used as the list of participants to run the task listed in `self.next`, `aggregated_model_validation`. The model, optimizer, and anything that is not explicitly excluded from the next function will be passed from the `start` function on the aggregator to the `aggregated_model_validation` task on the collaborator. Where the tasks run is determined by the placement decorator that precedes each task definition (`@aggregator` or `@collaborator`). Once each of the collaborators (defined in the runtime) complete the `aggregated_model_validation` task, they pass their current state onto the `train` task, from `train` to `local_model_validation`, and then finally to `join` at the aggregator. It is in `join` that an average is taken of the model weights, and the next round can begin.\n",
"\n",
"![image.png](attachment:image.png)"
]
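The paragraph above describes the flow structure in prose; the following is a condensed, illustrative sketch of such a flow. The method bodies are placeholders, and the import paths follow OpenFL's experimental workflow tutorials of this period, so they may differ between OpenFL versions.

```python
# Condensed sketch of a workflow-API flow (placeholder bodies, not the tutorial's full code).
from openfl.experimental.interface import FLSpec
from openfl.experimental.placement import aggregator, collaborator


class ExampleFederatedFlow(FLSpec):
    def __init__(self, model=None, optimizer=None, rounds=3, **kwargs):
        super().__init__(**kwargs)
        self.model = model
        self.optimizer = optimizer
        self.rounds = rounds

    @aggregator
    def start(self):
        # The runtime provides the collaborator list; it drives the foreach fan-out below.
        self.collaborators = self.runtime.collaborators
        self.current_round = 0
        self.next(self.aggregated_model_validation, foreach='collaborators')

    @collaborator
    def aggregated_model_validation(self):
        # Validate the aggregated model on this collaborator's local data (omitted).
        self.next(self.train)

    @collaborator
    def train(self):
        # Local training with self.model / self.optimizer (omitted).
        self.next(self.local_model_validation)

    @collaborator
    def local_model_validation(self):
        # Validate the locally trained model (omitted).
        self.next(self.join)

    @aggregator
    def join(self, inputs):
        # Average the collaborators' model weights (FedAvg) and decide whether to run another round.
        self.current_round += 1
        if self.current_round < self.rounds:
            self.next(self.aggregated_model_validation, foreach='collaborators')
        else:
            self.next(self.end)

    @aggregator
    def end(self):
        # The final state of self.model is the aggregated result.
        pass
```

The `foreach='collaborators'` argument in `start` and `join` is what fans the flow out to every collaborator listed in the runtime.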

Large diffs are not rendered by default.

1,400 changes: 1,399 additions & 1 deletion openfl-tutorials/experimental/PyG_support/openFL_PyG_on_HIV_model.ipynb

Large diffs are not rendered by default.

@@ -8,7 +8,7 @@
"outputs": [],
"source": [
"!pip install git+https://github.com/intel/openfl.git\n",
"!pip install -r ../requirements_workflow_interface.txt\n",
"!pip install -r ../requirements_workflow_api.txt\n",
"!pip install torch\n",
"!pip install torchvision"
]
Expand Up @@ -5,7 +5,7 @@
"id": "0d7f65ec-d3d8-4c91-99a4-277c160cb33b",
"metadata": {},
"source": [
"# Workflow Interface VFL Two Party: Workspace Creation from Jupyter Notebook"
"# Workflow API VFL Two Party: Workspace Creation from Jupyter Notebook"
]
},
{
@@ -15,7 +15,7 @@
"source": [
"This tutorial demonstrates the methodology to convert a Federated Learning experiment developed in Jupyter Notebook into a Workspace that can be deployed using Task Runner API\n",
"\n",
"OpenFL experimental Workflow Interface enables the user to simulate a Federated Learning experiment using **LocalRuntime**. Once the simulation is ready, the methodology described in this tutorial enables the user to convert this experiment into an OpenFL workspace that can be deployed using the Task Runner API\n",
"OpenFL experimental Workflow API enables the user to simulate a Federated Learning experiment using **LocalRuntime**. Once the simulation is ready, the methodology described in this tutorial enables the user to convert this experiment into an OpenFL workspace that can be deployed using the Task Runner API\n",
"\n",
"##### High Level Overview of Methodology\n",
"1. User annotates the relevant cells of the Jupyter notebook with `#| export` directive\n",
@@ -24,7 +24,7 @@
"4. User can utilize the experimental `fx` commands to deploy and run the federation seamlessly\n",
"\n",
"\n",
"The methodology is described using an existing [OpenFL Two Party VFL Tutorial](https://github.com/securefederatedai/openfl/blob/develop/openfl-tutorials/experimental/Vertical_FL/Workflow_Interface_VFL_Two_Party.ipynb). Let's get started !"
"The methodology is described using an existing [OpenFL Two Party VFL Tutorial](https://github.com/securefederatedai/openfl/blob/develop/openfl-tutorials/experimental/Vertical_FL/Workflow_API_VFL_Two_Party.ipynb). Let's get started !"
]
},
{
@@ -60,7 +60,7 @@
"id": "d65f17c2-a772-4f62-848e-9ba6ad1ab128",
"metadata": {},
"source": [
"We start by installing OpenFL and dependencies of the workflow interface \n",
"We start by installing OpenFL and dependencies of the workflow API \n",
"> These dependencies are required to be exported and become the requirements for the Federated Learning Workspace "
]
},
@@ -74,7 +74,7 @@
"#| export\n",
"\n",
"!pip install git+https://github.com/intel/openfl.git\n",
"!pip install -r ../requirements_workflow_interface.txt\n",
"!pip install -r ../requirements_workflow_api.txt\n",
"!pip install torch\n",
"!pip install torchvision"
]
@@ -356,7 +356,7 @@
"from openfl.experimental.workspace_export import WorkspaceExport\n",
"\n",
"WorkspaceExport.export(\n",
" notebook_path='./Workflow_Interface_VFL_Two_Party_Workspace_Creation_from_JupyterNotebook.ipynb',\n",
" notebook_path='./Workflow_API_VFL_Two_Party_Workspace_Creation_from_JupyterNotebook.ipynb',\n",
" output_workspace=f\"/home/{os.environ['USER']}/generated-workspace\"\n",
")"
]
Expand Up @@ -17,7 +17,7 @@
"outputs": [],
"source": [
"!pip install git+https://github.com/intel/openfl.git\n",
"!pip install -r ../requirements_workflow_interface.txt"
"!pip install -r ../requirements_workflow_api.txt"
]
},
{
Expand Up @@ -5,9 +5,9 @@
"id": "79a3d106",
"metadata": {},
"source": [
"# Workflow Interface 102: \n",
"# Workflow API 102: \n",
"# Vision Transformer for Image Classification using MedMNIST\n",
"[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/intel/openfl/blob/develop/openfl-tutorials/experimental/Vision_Transformer/Workflow_Interface_102_Vision_Transformer.ipynb)"
"[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/intel/openfl/blob/develop/openfl-tutorials/experimental/Vision_Transformer/Workflow_API_102_Vision_Transformer.ipynb)"
]
},
{
@@ -24,7 +24,7 @@
"\n",
"In contrast to tradition convolutional neural networks which focus on capturing local image features within a spatial window using a sliding filter, the self-attention mechanism enables vision transformers to capture global relationships between image patches. \n",
"\n",
"In this tutorial, you will learn how to set up a horizontal federated learning workflow using the OpenFL Experimental Workflow Interface to train a vision transformer to classify images from the MedMNIST dataset. This notebook expands on the use case from the [first](https://github.com/intel/openfl/blob/develop/openfl-tutorials/experimental/Workflow_Interface_101_MNIST.ipynb) quick start notebook. Its objective is to demonstrate how a user can modify the workflow interface for different use cases"
"In this tutorial, you will learn how to set up a horizontal federated learning workflow using the OpenFL Experimental Workflow API to train a vision transformer to classify images from the MedMNIST dataset. This notebook expands on the use case from the [first](https://github.com/intel/openfl/blob/develop/openfl-tutorials/experimental/Workflow_API_101_MNIST.ipynb) quick start notebook. Its objective is to demonstrate how a user can modify the workflow API for different use cases"
]
},
{
@@ -40,7 +40,7 @@
"id": "7085394d",
"metadata": {},
"source": [
"First we start by installing the necessary dependencies for the workflow interface and the vision transformer"
"First we start by installing the necessary dependencies for the workflow API and the vision transformer"
]
},
{
@@ -51,11 +51,11 @@
"outputs": [],
"source": [
"!pip install git+https://github.com/intel/openfl.git\n",
"!pip install -r ../requirements_workflow_interface.txt\n",
"!pip install -r ../requirements_workflow_api.txt\n",
"!pip install -r requirements_vision_transformer.txt\n",
"\n",
"# Uncomment this if running in Google Colab\n",
"#!pip install -r https://raw.githubusercontent.com/intel/openfl/develop/openfl-tutorials/experimental/requirements_workflow_interface.txt\n",
"#!pip install -r https://raw.githubusercontent.com/intel/openfl/develop/openfl-tutorials/experimental/requirements_workflow_api.txt\n",
"#!pip install -r https://raw.githubusercontent.com/intel/openfl/develop/openfl-tutorials/experimental/Vision_Transformer/requirements_vision_transformer.txt\n",
"\n",
"#import os\n",
@@ -271,15 +271,15 @@
"id": "68741136",
"metadata": {},
"source": [
"# Setting up the OpenFL Workflow Interface"
"# Setting up the OpenFL Workflow API"
]
},
{
"cell_type": "markdown",
"id": "2194cccb",
"metadata": {},
"source": [
"We will now set up the experimental OpenFL workflow interface in order to fine-tune our model in a horizontal federated learning framework. We import the `FLSpec`, `LocalRuntime`, and placement decorators.\n",
"We will now set up the experimental OpenFL workflow API in order to fine-tune our model in a horizontal federated learning framework. We import the `FLSpec`, `LocalRuntime`, and placement decorators.\n",
"\n",
"- `FLSpec` – Defines the flow specification. User defined flows are subclasses of this.\n",
"- `Runtime` – Defines where the flow runs, infrastructure for task transitions (how information gets sent). The `LocalRuntime` runs the flow on a single node.\n",
@@ -317,11 +317,11 @@
"id": "5247407f",
"metadata": {},
"source": [
"Now we come to the flow definition. The OpenFL Workflow Interface adopts the conventions set by Metaflow, that every workflow begins with `start` and concludes with the `end` task. The aggregator begins with a base model and optimizer. The aggregator begins the flow with the `start` task, where the list of collaborators is extracted from the runtime (`self.collaborators = self.runtime.collaborators`) and is then used as the list of participants to run the task listed in `self.next`, `aggregated_model_validation`. The model, optimizer, and anything that is not explicitly excluded from the next function will be passed from the `start` function on the aggregator to the `aggregated_model_validation` task on the collaborator. Where the tasks run is determined by the placement decorator that precedes each task definition (`@aggregator` or `@collaborator`). Once each of the collaborators (defined in the runtime) complete the `aggregated_model_validation` task, they pass their current state onto the `train` task, from `train` to `local_model_validation`, and then finally to `join` at the aggregator. It is in `join` that an average is taken of the model weights, and the next round can begin. Throughout the process, we will save out the collaborator models as well as the final aggregated model.\n",
"Now we come to the flow definition. The OpenFL Workflow API adopts the conventions set by Metaflow, that every workflow begins with `start` and concludes with the `end` task. The aggregator begins with a base model and optimizer. The aggregator begins the flow with the `start` task, where the list of collaborators is extracted from the runtime (`self.collaborators = self.runtime.collaborators`) and is then used as the list of participants to run the task listed in `self.next`, `aggregated_model_validation`. The model, optimizer, and anything that is not explicitly excluded from the next function will be passed from the `start` function on the aggregator to the `aggregated_model_validation` task on the collaborator. Where the tasks run is determined by the placement decorator that precedes each task definition (`@aggregator` or `@collaborator`). Once each of the collaborators (defined in the runtime) complete the `aggregated_model_validation` task, they pass their current state onto the `train` task, from `train` to `local_model_validation`, and then finally to `join` at the aggregator. It is in `join` that an average is taken of the model weights, and the next round can begin. Throughout the process, we will save out the collaborator models as well as the final aggregated model.\n",
"\n",
"| <img src=\"images/workflow.png\" width=\"512\"> | \n",
"|:--:| \n",
"| *General OpenFL Workflow Interface architecture* |"
"| *General OpenFL Workflow API architecture* |"
]
},
{
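To complement the `FLSpec` and `LocalRuntime` description above, here is an illustrative sketch of wiring a flow to a `LocalRuntime` for single-node simulation, reusing the `ExampleFederatedFlow` class sketched earlier. The participant names and backend choice are placeholder assumptions, and the import paths follow the experimental tutorials of this period; the notebook's own setup cells define the real experiment.

```python
# Illustrative single-node wiring of a flow to a LocalRuntime (placeholder names, no real data).
from openfl.experimental.interface import Aggregator, Collaborator
from openfl.experimental.runtime import LocalRuntime

agg = Aggregator()

# Two hypothetical collaborators; in the notebook each one is additionally given private
# attributes (its train/test data loaders) that never leave that participant.
collaborators = [Collaborator(name=name) for name in ("Portland", "Seattle")]

local_runtime = LocalRuntime(
    aggregator=agg,
    collaborators=collaborators,
    backend="single_process",  # run every task in the current process
)

# Reuses the ExampleFederatedFlow class from the earlier sketch.
flow = ExampleFederatedFlow(model=None, optimizer=None, rounds=3, checkpoint=True)
flow.runtime = local_runtime
flow.run()
```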