
Commit

Added GPU for aggregator
Signed-off-by: Parth Mandaliya <[email protected]>
ParthM-GitHub committed Sep 28, 2023
1 parent 42da8e7 commit ec4a92c
Showing 1 changed file with 3 additions and 3 deletions.
@@ -291,7 +291,7 @@
"source": [
"In this step we define entities necessary to run the flow and create a function which returns dataset as private attributes of collaborator. As described in [quickstart](https://github.com/securefederatedai/openfl/blob/develop/openfl-tutorials/experimental/Workflow_Interface_101_MNIST.ipynb) we define entities necessary for the flow.\n",
"\n",
"To request GPU(s) with ray-backend, we specify `num_gpus=0.5` as the argument while instantiating Collaborator, this will reserve 0.5 GPU for each of the 2 collaborators and therefore require a dedicated GPU for the exeriment. Tune this based on your use case, for example `num_gpus=0.5` for an experiment with 4 collaborators will require 2 dedicated GPUs. **NOTE:** Collaborator cannot span over multiple GPUs, for example `num_gpus=0.4` with 5 collaborators will require 3 dedicated GPUs. In this case collaborator 1 and 2 use GPU#1, collaborator 3 and 4 use GPU#2, and collaborator 5 uses GPU#3."
"To request GPU(s) with ray-backend, we specify `num_gpus=0.3` as the argument while instantiating Aggregator and Collaborator, this will reserve 0.3 GPU for each of the 2 collaborators and the aggregator and therefore require a dedicated GPU for the experiment. Tune this based on your use case, for example `num_gpus=0.4` for an experiment with 4 collaborators and the aggregator will require 2 dedicated GPUs. **NOTE:** Collaborator cannot span over multiple GPUs, for example `num_gpus=0.4` with 5 collaborators will require 3 dedicated GPUs. In this case collaborator 1 and 2 use GPU#1, collaborator 3 and 4 use GPU#2, and collaborator 5 uses GPU#3."
]
},
{
@@ -302,7 +302,7 @@
"outputs": [],
"source": [
"# Setup Aggregator private attributes via callable function\n",
"aggregator = Aggregator()\n",
"aggregator = Aggregator(num_gpus=0.3)\n",
"\n",
"collaborator_names = ['Portland', 'Seattle']\n",
"\n",
@@ -325,7 +325,7 @@
"for idx, collaborator_name in enumerate(collaborator_names):\n",
" collaborators.append(\n",
" Collaborator(\n",
" name=collaborator_name, num_cpus=0, num_gpus=0.5,\n",
" name=collaborator_name, num_cpus=0, num_gpus=0.3,\n",
" private_attributes_callable=callable_to_initialize_collaborator_private_attributes,\n",
" index=idx, n_collaborators=len(collaborator_names),\n",
" train_dataset=mnist_train, test_dataset=mnist_test, batch_size_train=batch_size_train\n",
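For reference, below is a minimal sketch (not part of this commit) of how the changed `num_gpus` arguments combine into a Ray-backed local runtime. Import paths are assumed from the OpenFL experimental workflow-interface tutorials and may differ between releases; the `private_attributes_callable` wiring shown in the diff is omitted for brevity.

```python
# Minimal sketch, assuming the OpenFL experimental workflow-interface API.
from openfl.experimental.interface import Aggregator, Collaborator
from openfl.experimental.runtime import LocalRuntime

# Reserve 0.3 of a GPU for the aggregator, matching the change above.
aggregator = Aggregator(num_gpus=0.3)

collaborator_names = ['Portland', 'Seattle']

# Each collaborator also reserves 0.3 of a GPU, so the aggregator plus the
# two collaborators fit on one dedicated GPU (3 x 0.3 = 0.9 <= 1.0).
# Private-attribute callables from the notebook are omitted in this sketch.
collaborators = [
    Collaborator(name=name, num_cpus=0, num_gpus=0.3)
    for name in collaborator_names
]

# With backend='ray', Ray enforces these fractional GPU reservations; a
# single participant never spans more than one physical GPU.
local_runtime = LocalRuntime(
    aggregator=aggregator, collaborators=collaborators, backend='ray'
)
```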
