diff --git a/.devcontainer/Dockerfile b/.devcontainer/Dockerfile index 2cf82414df56..016c12af2426 100644 --- a/.devcontainer/Dockerfile +++ b/.devcontainer/Dockerfile @@ -29,8 +29,8 @@ RUN apt install -y curl wget gnupg python3 python-is-python3 python3-pip git \ build-essential tmux vim RUN python -m pip install \ - pip==23.1.2 \ - setuptools==68.0.0 \ + pip==23.3.1 \ + setuptools==68.2.2 \ poetry==1.5.1 USER $USERNAME diff --git a/.github/actions/bootstrap/action.yml b/.github/actions/bootstrap/action.yml index 3865cad1def6..584ae2634d9e 100644 --- a/.github/actions/bootstrap/action.yml +++ b/.github/actions/bootstrap/action.yml @@ -6,10 +6,10 @@ inputs: default: 3.8 pip-version: description: "Version of pip to be installed using pip" - default: 23.1.2 + default: 23.3.1 setuptools-version: description: "Version of setuptools to be installed using pip" - default: 68.0.0 + default: 68.2.2 poetry-version: description: "Version of poetry to be installed using pip" default: 1.5.1 diff --git a/.github/workflows/cpp.yml b/.github/workflows/cpp.yml index 16cd672ef034..35fe9813329e 100644 --- a/.github/workflows/cpp.yml +++ b/.github/workflows/cpp.yml @@ -35,9 +35,14 @@ jobs: sudo apt-get update sudo apt-get install -y clang-format cmake g++ clang-tidy cppcheck - - name: Check Formatting + - name: Check source Formatting run: | - find src/cc/flwr -name '*.cc' -or -name '*.h' | xargs clang-format -i + find src/cc/flwr/src -name '*.cc' | xargs clang-format -i + git diff --exit-code + + - name: Check header Formatting + run: | + find src/cc/flwr/include -name '*.h' -not -path "src/cc/flwr/include/flwr/*" | xargs clang-format -i git diff --exit-code - name: Build diff --git a/README.md b/README.md index efed9b0e477e..002d16066e78 100644 --- a/README.md +++ b/README.md @@ -23,22 +23,21 @@ Flower (`flwr`) is a framework for building federated learning systems. The design of Flower is based on a few guiding principles: -* **Customizable**: Federated learning systems vary wildly from one use case to +- **Customizable**: Federated learning systems vary wildly from one use case to another. Flower allows for a wide range of different configurations depending on the needs of each individual use case. -* **Extendable**: Flower originated from a research project at the University of +- **Extendable**: Flower originated from a research project at the University of Oxford, so it was built with AI research in mind. Many components can be extended and overridden to build new state-of-the-art systems. -* **Framework-agnostic**: Different machine learning frameworks have different +- **Framework-agnostic**: Different machine learning frameworks have different strengths. 
Flower can be used with any machine learning framework, for example, [PyTorch](https://pytorch.org), - [TensorFlow](https://tensorflow.org), [Hugging Face Transformers](https://huggingface.co/), [PyTorch Lightning](https://pytorchlightning.ai/), [MXNet](https://mxnet.apache.org/), [scikit-learn](https://scikit-learn.org/), [JAX](https://jax.readthedocs.io/), [TFLite](https://tensorflow.org/lite/), [fastai](https://www.fast.ai/), [Pandas](https://pandas.pydata.org/ -) for federated analytics, or even raw [NumPy](https://numpy.org/) + [TensorFlow](https://tensorflow.org), [Hugging Face Transformers](https://huggingface.co/), [PyTorch Lightning](https://pytorchlightning.ai/), [MXNet](https://mxnet.apache.org/), [scikit-learn](https://scikit-learn.org/), [JAX](https://jax.readthedocs.io/), [TFLite](https://tensorflow.org/lite/), [fastai](https://www.fast.ai/), [Pandas](https://pandas.pydata.org/) for federated analytics, or even raw [NumPy](https://numpy.org/) for users who enjoy computing gradients by hand. -* **Understandable**: Flower is written with maintainability in mind. The +- **Understandable**: Flower is written with maintainability in mind. The community is encouraged to both read and contribute to the codebase. Meet the Flower community on [flower.dev](https://flower.dev)! @@ -58,11 +57,11 @@ Flower's goal is to make federated learning accessible to everyone. This series 2. **Using Strategies in Federated Learning** [![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/adap/flower/blob/main/doc/source/tutorial-use-a-federated-learning-strategy-pytorch.ipynb) (or open the [Jupyter Notebook](https://github.com/adap/flower/blob/main/doc/source/tutorial-use-a-federated-learning-strategy-pytorch.ipynb)) - + 3. **Building Strategies for Federated Learning** [![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/adap/flower/blob/main/doc/source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb) (or open the [Jupyter Notebook](https://github.com/adap/flower/blob/main/doc/source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb)) - + 4. **Custom Clients for Federated Learning** [![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/adap/flower/blob/main/doc/source/tutorial-series-customize-the-client-pytorch.ipynb) (or open the [Jupyter Notebook](https://github.com/adap/flower/blob/main/doc/source/tutorial-series-customize-the-client-pytorch.ipynb)) @@ -73,39 +72,39 @@ Stay tuned, more tutorials are coming soon. 
Topics include **Privacy and Securit [![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/adap/flower/blob/main/examples/flower-in-30-minutes/tutorial.ipynb) (or open the [Jupyter Notebook](https://github.com/adap/flower/blob/main/examples/flower-in-30-minutes/tutorial.ipynb)) - ## Documentation [Flower Docs](https://flower.dev/docs): -* [Installation](https://flower.dev/docs/framework/how-to-install-flower.html) -* [Quickstart (TensorFlow)](https://flower.dev/docs/framework/tutorial-quickstart-tensorflow.html) -* [Quickstart (PyTorch)](https://flower.dev/docs/framework/tutorial-quickstart-pytorch.html) -* [Quickstart (Hugging Face)](https://flower.dev/docs/framework/tutorial-quickstart-huggingface.html) -* [Quickstart (PyTorch Lightning [code example])](https://flower.dev/docs/framework/tutorial-quickstart-pytorch-lightning.html) -* [Quickstart (MXNet)](https://flower.dev/docs/framework/example-mxnet-walk-through.html) -* [Quickstart (Pandas)](https://flower.dev/docs/framework/tutorial-quickstart-pandas.html) -* [Quickstart (fastai)](https://flower.dev/docs/framework/tutorial-quickstart-fastai.html) -* [Quickstart (JAX)](https://flower.dev/docs/framework/tutorial-quickstart-jax.html) -* [Quickstart (scikit-learn)](https://flower.dev/docs/framework/tutorial-quickstart-scikitlearn.html) -* [Quickstart (Android [TFLite])](https://flower.dev/docs/framework/tutorial-quickstart-android.html) -* [Quickstart (iOS [CoreML])](https://flower.dev/docs/framework/tutorial-quickstart-ios.html) + +- [Installation](https://flower.dev/docs/framework/how-to-install-flower.html) +- [Quickstart (TensorFlow)](https://flower.dev/docs/framework/tutorial-quickstart-tensorflow.html) +- [Quickstart (PyTorch)](https://flower.dev/docs/framework/tutorial-quickstart-pytorch.html) +- [Quickstart (Hugging Face)](https://flower.dev/docs/framework/tutorial-quickstart-huggingface.html) +- [Quickstart (PyTorch Lightning [code example])](https://flower.dev/docs/framework/tutorial-quickstart-pytorch-lightning.html) +- [Quickstart (MXNet)](https://flower.dev/docs/framework/example-mxnet-walk-through.html) +- [Quickstart (Pandas)](https://flower.dev/docs/framework/tutorial-quickstart-pandas.html) +- [Quickstart (fastai)](https://flower.dev/docs/framework/tutorial-quickstart-fastai.html) +- [Quickstart (JAX)](https://flower.dev/docs/framework/tutorial-quickstart-jax.html) +- [Quickstart (scikit-learn)](https://flower.dev/docs/framework/tutorial-quickstart-scikitlearn.html) +- [Quickstart (Android [TFLite])](https://flower.dev/docs/framework/tutorial-quickstart-android.html) +- [Quickstart (iOS [CoreML])](https://flower.dev/docs/framework/tutorial-quickstart-ios.html) ## Flower Baselines Flower Baselines is a collection of community-contributed experiments that reproduce the experiments performed in popular federated learning publications. 
Researchers can build on Flower Baselines to quickly evaluate new ideas: -* [FedAvg](https://arxiv.org/abs/1602.05629): - * [MNIST](https://github.com/adap/flower/tree/main/baselines/flwr_baselines/flwr_baselines/publications/fedavg_mnist) -* [FedProx](https://arxiv.org/abs/1812.06127): - * [MNIST](https://github.com/adap/flower/tree/main/baselines/fedprox/) -* [FedBN: Federated Learning on non-IID Features via Local Batch Normalization](https://arxiv.org/abs/2102.07623): - * [Convergence Rate](https://github.com/adap/flower/tree/main/baselines/flwr_baselines/flwr_baselines/publications/fedbn/convergence_rate) -* [Adaptive Federated Optimization](https://arxiv.org/abs/2003.00295): - * [CIFAR-10/100](https://github.com/adap/flower/tree/main/baselines/flwr_baselines/flwr_baselines/publications/adaptive_federated_optimization) +- [FedAvg](https://arxiv.org/abs/1602.05629): + - [MNIST](https://github.com/adap/flower/tree/main/baselines/flwr_baselines/flwr_baselines/publications/fedavg_mnist) +- [FedProx](https://arxiv.org/abs/1812.06127): + - [MNIST](https://github.com/adap/flower/tree/main/baselines/fedprox/) +- [FedBN: Federated Learning on non-IID Features via Local Batch Normalization](https://arxiv.org/abs/2102.07623): + - [Convergence Rate](https://github.com/adap/flower/tree/main/baselines/flwr_baselines/flwr_baselines/publications/fedbn/convergence_rate) +- [Adaptive Federated Optimization](https://arxiv.org/abs/2003.00295): + - [CIFAR-10/100](https://github.com/adap/flower/tree/main/baselines/flwr_baselines/flwr_baselines/publications/adaptive_federated_optimization) -Check the Flower documentation to learn more: [Using Baselines](https://flower.dev/docs/baselines/using-baselines.html) +Check the Flower documentation to learn more: [Using Baselines](https://flower.dev/docs/baselines/how-to-use-baselines.html) -The Flower community loves contributions! Make your work more visible and enable others to build on it by contributing it as a baseline: [Contributing Baselines](https://flower.dev/docs/baselines/contributing-baselines.html) +The Flower community loves contributions! 
Make your work more visible and enable others to build on it by contributing it as a baseline: [Contributing Baselines](https://flower.dev/docs/baselines/how-to-contribute-baselines.html) ## Flower Usage Examples @@ -113,26 +112,26 @@ Several code examples show different usage scenarios of Flower (in combination w Quickstart examples: -* [Quickstart (TensorFlow)](https://github.com/adap/flower/tree/main/examples/quickstart-tensorflow) -* [Quickstart (PyTorch)](https://github.com/adap/flower/tree/main/examples/quickstart-pytorch) -* [Quickstart (Hugging Face)](https://github.com/adap/flower/tree/main/examples/quickstart-huggingface) -* [Quickstart (PyTorch Lightning)](https://github.com/adap/flower/tree/main/examples/quickstart-pytorch-lightning) -* [Quickstart (fastai)](https://github.com/adap/flower/tree/main/examples/quickstart-fastai) -* [Quickstart (Pandas)](https://github.com/adap/flower/tree/main/examples/quickstart-pandas) -* [Quickstart (MXNet)](https://github.com/adap/flower/tree/main/examples/quickstart-mxnet) -* [Quickstart (JAX)](https://github.com/adap/flower/tree/main/examples/quickstart-jax) -* [Quickstart (scikit-learn)](https://github.com/adap/flower/tree/main/examples/sklearn-logreg-mnist) -* [Quickstart (Android [TFLite])](https://github.com/adap/flower/tree/main/examples/android) -* [Quickstart (iOS [CoreML])](https://github.com/adap/flower/tree/main/examples/ios) +- [Quickstart (TensorFlow)](https://github.com/adap/flower/tree/main/examples/quickstart-tensorflow) +- [Quickstart (PyTorch)](https://github.com/adap/flower/tree/main/examples/quickstart-pytorch) +- [Quickstart (Hugging Face)](https://github.com/adap/flower/tree/main/examples/quickstart-huggingface) +- [Quickstart (PyTorch Lightning)](https://github.com/adap/flower/tree/main/examples/quickstart-pytorch-lightning) +- [Quickstart (fastai)](https://github.com/adap/flower/tree/main/examples/quickstart-fastai) +- [Quickstart (Pandas)](https://github.com/adap/flower/tree/main/examples/quickstart-pandas) +- [Quickstart (MXNet)](https://github.com/adap/flower/tree/main/examples/quickstart-mxnet) +- [Quickstart (JAX)](https://github.com/adap/flower/tree/main/examples/quickstart-jax) +- [Quickstart (scikit-learn)](https://github.com/adap/flower/tree/main/examples/sklearn-logreg-mnist) +- [Quickstart (Android [TFLite])](https://github.com/adap/flower/tree/main/examples/android) +- [Quickstart (iOS [CoreML])](https://github.com/adap/flower/tree/main/examples/ios) Other [examples](https://github.com/adap/flower/tree/main/examples): -* [Raspberry Pi & Nvidia Jetson Tutorial](https://github.com/adap/flower/tree/main/examples/embedded-devices) -* [PyTorch: From Centralized to Federated](https://github.com/adap/flower/tree/main/examples/pytorch-from-centralized-to-federated) -* [MXNet: From Centralized to Federated](https://github.com/adap/flower/tree/main/examples/mxnet-from-centralized-to-federated) -* [Advanced Flower with TensorFlow/Keras](https://github.com/adap/flower/tree/main/examples/advanced-tensorflow) -* [Advanced Flower with PyTorch](https://github.com/adap/flower/tree/main/examples/advanced-pytorch) -* Single-Machine Simulation of Federated Learning Systems ([PyTorch](https://github.com/adap/flower/tree/main/examples/simulation_pytorch)) ([Tensorflow](https://github.com/adap/flower/tree/main/examples/simulation_tensorflow)) +- [Raspberry Pi & Nvidia Jetson Tutorial](https://github.com/adap/flower/tree/main/examples/embedded-devices) +- [PyTorch: From Centralized to 
Federated](https://github.com/adap/flower/tree/main/examples/pytorch-from-centralized-to-federated) +- [MXNet: From Centralized to Federated](https://github.com/adap/flower/tree/main/examples/mxnet-from-centralized-to-federated) +- [Advanced Flower with TensorFlow/Keras](https://github.com/adap/flower/tree/main/examples/advanced-tensorflow) +- [Advanced Flower with PyTorch](https://github.com/adap/flower/tree/main/examples/advanced-pytorch) +- Single-Machine Simulation of Federated Learning Systems ([PyTorch](https://github.com/adap/flower/tree/main/examples/simulation_pytorch)) ([Tensorflow](https://github.com/adap/flower/tree/main/examples/simulation_tensorflow)) ## Community @@ -144,12 +143,12 @@ Flower is built by a wonderful community of researchers and engineers. [Join Sla ## Citation -If you publish work that uses Flower, please cite Flower as follows: +If you publish work that uses Flower, please cite Flower as follows: ```bibtex @article{beutel2020flower, title={Flower: A Friendly Federated Learning Research Framework}, - author={Beutel, Daniel J and Topal, Taner and Mathur, Akhil and Qiu, Xinchi and Fernandez-Marques, Javier and Gao, Yan and Sani, Lorenzo and Kwing, Hei Li and Parcollet, Titouan and Gusmão, Pedro PB de and Lane, Nicholas D}, + author={Beutel, Daniel J and Topal, Taner and Mathur, Akhil and Qiu, Xinchi and Fernandez-Marques, Javier and Gao, Yan and Sani, Lorenzo and Kwing, Hei Li and Parcollet, Titouan and Gusmão, Pedro PB de and Lane, Nicholas D}, journal={arXiv preprint arXiv:2007.14390}, year={2020} } diff --git a/baselines/depthfl/.gitignore b/baselines/depthfl/.gitignore new file mode 100644 index 000000000000..fb7448bbcb01 --- /dev/null +++ b/baselines/depthfl/.gitignore @@ -0,0 +1,4 @@ +dataset/ +outputs/ +prev_grads/ +multirun/ \ No newline at end of file diff --git a/baselines/depthfl/LICENSE b/baselines/depthfl/LICENSE new file mode 100644 index 000000000000..d64569567334 --- /dev/null +++ b/baselines/depthfl/LICENSE @@ -0,0 +1,202 @@ + + Apache License + Version 2.0, January 2004 + http://www.apache.org/licenses/ + + TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION + + 1. Definitions. + + "License" shall mean the terms and conditions for use, reproduction, + and distribution as defined by Sections 1 through 9 of this document. + + "Licensor" shall mean the copyright owner or entity authorized by + the copyright owner that is granting the License. + + "Legal Entity" shall mean the union of the acting entity and all + other entities that control, are controlled by, or are under common + control with that entity. For the purposes of this definition, + "control" means (i) the power, direct or indirect, to cause the + direction or management of such entity, whether by contract or + otherwise, or (ii) ownership of fifty percent (50%) or more of the + outstanding shares, or (iii) beneficial ownership of such entity. + + "You" (or "Your") shall mean an individual or Legal Entity + exercising permissions granted by this License. + + "Source" form shall mean the preferred form for making modifications, + including but not limited to software source code, documentation + source, and configuration files. + + "Object" form shall mean any form resulting from mechanical + transformation or translation of a Source form, including but + not limited to compiled object code, generated documentation, + and conversions to other media types. 
+ + "Work" shall mean the work of authorship, whether in Source or + Object form, made available under the License, as indicated by a + copyright notice that is included in or attached to the work + (an example is provided in the Appendix below). + + "Derivative Works" shall mean any work, whether in Source or Object + form, that is based on (or derived from) the Work and for which the + editorial revisions, annotations, elaborations, or other modifications + represent, as a whole, an original work of authorship. For the purposes + of this License, Derivative Works shall not include works that remain + separable from, or merely link (or bind by name) to the interfaces of, + the Work and Derivative Works thereof. + + "Contribution" shall mean any work of authorship, including + the original version of the Work and any modifications or additions + to that Work or Derivative Works thereof, that is intentionally + submitted to Licensor for inclusion in the Work by the copyright owner + or by an individual or Legal Entity authorized to submit on behalf of + the copyright owner. For the purposes of this definition, "submitted" + means any form of electronic, verbal, or written communication sent + to the Licensor or its representatives, including but not limited to + communication on electronic mailing lists, source code control systems, + and issue tracking systems that are managed by, or on behalf of, the + Licensor for the purpose of discussing and improving the Work, but + excluding communication that is conspicuously marked or otherwise + designated in writing by the copyright owner as "Not a Contribution." + + "Contributor" shall mean Licensor and any individual or Legal Entity + on behalf of whom a Contribution has been received by Licensor and + subsequently incorporated within the Work. + + 2. Grant of Copyright License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + copyright license to reproduce, prepare Derivative Works of, + publicly display, publicly perform, sublicense, and distribute the + Work and such Derivative Works in Source or Object form. + + 3. Grant of Patent License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + (except as stated in this section) patent license to make, have made, + use, offer to sell, sell, import, and otherwise transfer the Work, + where such license applies only to those patent claims licensable + by such Contributor that are necessarily infringed by their + Contribution(s) alone or by combination of their Contribution(s) + with the Work to which such Contribution(s) was submitted. If You + institute patent litigation against any entity (including a + cross-claim or counterclaim in a lawsuit) alleging that the Work + or a Contribution incorporated within the Work constitutes direct + or contributory patent infringement, then any patent licenses + granted to You under this License for that Work shall terminate + as of the date such litigation is filed. + + 4. Redistribution. 
You may reproduce and distribute copies of the + Work or Derivative Works thereof in any medium, with or without + modifications, and in Source or Object form, provided that You + meet the following conditions: + + (a) You must give any other recipients of the Work or + Derivative Works a copy of this License; and + + (b) You must cause any modified files to carry prominent notices + stating that You changed the files; and + + (c) You must retain, in the Source form of any Derivative Works + that You distribute, all copyright, patent, trademark, and + attribution notices from the Source form of the Work, + excluding those notices that do not pertain to any part of + the Derivative Works; and + + (d) If the Work includes a "NOTICE" text file as part of its + distribution, then any Derivative Works that You distribute must + include a readable copy of the attribution notices contained + within such NOTICE file, excluding those notices that do not + pertain to any part of the Derivative Works, in at least one + of the following places: within a NOTICE text file distributed + as part of the Derivative Works; within the Source form or + documentation, if provided along with the Derivative Works; or, + within a display generated by the Derivative Works, if and + wherever such third-party notices normally appear. The contents + of the NOTICE file are for informational purposes only and + do not modify the License. You may add Your own attribution + notices within Derivative Works that You distribute, alongside + or as an addendum to the NOTICE text from the Work, provided + that such additional attribution notices cannot be construed + as modifying the License. + + You may add Your own copyright statement to Your modifications and + may provide additional or different license terms and conditions + for use, reproduction, or distribution of Your modifications, or + for any such Derivative Works as a whole, provided Your use, + reproduction, and distribution of the Work otherwise complies with + the conditions stated in this License. + + 5. Submission of Contributions. Unless You explicitly state otherwise, + any Contribution intentionally submitted for inclusion in the Work + by You to the Licensor shall be under the terms and conditions of + this License, without any additional terms or conditions. + Notwithstanding the above, nothing herein shall supersede or modify + the terms of any separate license agreement you may have executed + with Licensor regarding such Contributions. + + 6. Trademarks. This License does not grant permission to use the trade + names, trademarks, service marks, or product names of the Licensor, + except as required for reasonable and customary use in describing the + origin of the Work and reproducing the content of the NOTICE file. + + 7. Disclaimer of Warranty. Unless required by applicable law or + agreed to in writing, Licensor provides the Work (and each + Contributor provides its Contributions) on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or + implied, including, without limitation, any warranties or conditions + of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A + PARTICULAR PURPOSE. You are solely responsible for determining the + appropriateness of using or redistributing the Work and assume any + risks associated with Your exercise of permissions under this License. + + 8. Limitation of Liability. 
In no event and under no legal theory, + whether in tort (including negligence), contract, or otherwise, + unless required by applicable law (such as deliberate and grossly + negligent acts) or agreed to in writing, shall any Contributor be + liable to You for damages, including any direct, indirect, special, + incidental, or consequential damages of any character arising as a + result of this License or out of the use or inability to use the + Work (including but not limited to damages for loss of goodwill, + work stoppage, computer failure or malfunction, or any and all + other commercial damages or losses), even if such Contributor + has been advised of the possibility of such damages. + + 9. Accepting Warranty or Additional Liability. While redistributing + the Work or Derivative Works thereof, You may choose to offer, + and charge a fee for, acceptance of support, warranty, indemnity, + or other liability obligations and/or rights consistent with this + License. However, in accepting such obligations, You may act only + on Your own behalf and on Your sole responsibility, not on behalf + of any other Contributor, and only if You agree to indemnify, + defend, and hold each Contributor harmless for any liability + incurred by, or claims asserted against, such Contributor by reason + of your accepting any such warranty or additional liability. + + END OF TERMS AND CONDITIONS + + APPENDIX: How to apply the Apache License to your work. + + To apply the Apache License to your work, attach the following + boilerplate notice, with the fields enclosed by brackets "[]" + replaced with your own identifying information. (Don't include + the brackets!) The text should be enclosed in the appropriate + comment syntax for the file format. We also recommend that a + file or class name and description of purpose be included on the + same "printed page" as the copyright notice for easier + identification within third-party archives. + + Copyright [yyyy] [name of copyright owner] + + Licensed under the Apache License, Version 2.0 (the "License"); + you may not use this file except in compliance with the License. + You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + + Unless required by applicable law or agreed to in writing, software + distributed under the License is distributed on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + See the License for the specific language governing permissions and + limitations under the License. diff --git a/baselines/depthfl/README.md b/baselines/depthfl/README.md new file mode 100644 index 000000000000..b8ab7ed18571 --- /dev/null +++ b/baselines/depthfl/README.md @@ -0,0 +1,171 @@ +--- +title: DepthFL:Depthwise Federated Learning for Heterogeneous Clients +url: https://openreview.net/forum?id=pf8RIZTMU58 +labels: [image classification, system heterogeneity, cross-device, knowledge distillation] +dataset: [CIFAR-100] +--- + +# DepthFL: Depthwise Federated Learning for Heterogeneous Clients + +> Note: If you use this baseline in your work, please remember to cite the original authors of the paper as well as the Flower paper. + +**Paper:** [openreview.net/forum?id=pf8RIZTMU58](https://openreview.net/forum?id=pf8RIZTMU58) + +**Authors:** Minjae Kim, Sangyoon Yu, Suhyun Kim, Soo-Mook Moon + +**Abstract:** Federated learning is for training a global model without collecting private local data from clients. 
As they repeatedly need to upload locally-updated weights or gradients instead, clients require both computation and communication resources enough to participate in learning, but in reality their resources are heterogeneous. To enable resource-constrained clients to train smaller local models, width scaling techniques have been used, which reduces the channels of a global model. Unfortunately, width scaling suffers from heterogeneity of local models when averaging them, leading to a lower accuracy than when simply excluding resource-constrained clients from training. This paper proposes a new approach based on depth scaling called DepthFL. DepthFL defines local models of different depths by pruning the deepest layers off the global model, and allocates them to clients depending on their available resources. Since many clients do not have enough resources to train deep local models, this would make deep layers partially-trained with insufficient data, unlike shallow layers that are fully trained. DepthFL alleviates this problem by mutual self-distillation of knowledge among the classifiers of various depths within a local model. Our experiments show that depth-scaled local models build a global model better than width-scaled ones, and that self-distillation is highly effective in training data-insufficient deep layers. + + +## About this baseline + +**What’s implemented:** The code in this directory replicates the experiments in DepthFL: Depthwise Federated Learning for Heterogeneous Clients (Kim et al., 2023) for CIFAR100, which proposed the DepthFL algorithm. Concretely, it replicates the results for CIFAR100 dataset in Table 2, 3 and 4. + +**Datasets:** CIFAR100 from PyTorch's Torchvision + +**Hardware Setup:** These experiments were run on a server with Nvidia 3090 GPUs. Any machine with 1x 8GB GPU or more would be able to run it in a reasonable amount of time. With the default settings, clients make use of 1.3GB of VRAM. Lower `num_gpus` in `client_resources` to train more clients in parallel on your GPU(s). + +**Contributors:** Minjae Kim + + +## Experimental Setup + +**Task:** Image Classification + +**Model:** ResNet18 + +**Dataset:** This baseline only includes the CIFAR100 dataset. By default it will be partitioned into 100 clients following IID distribution. The settings are as follow: + +| Dataset | #classes | #partitions | partitioning method | +| :------ | :---: | :---: | :---: | +| CIFAR100 | 100 | 100 | IID or Non-IID | + +**Training Hyperparameters:** +The following table shows the main hyperparameters for this baseline with their default value (i.e. 
the value used if you run `python -m depthfl.main` directly) + +| Description | Default Value | +| ----------- | ----- | +| total clients | 100 | +| local epoch | 5 | +| batch size | 50 | +| number of rounds | 1000 | +| participation ratio | 10% | +| learning rate | 0.1 | +| learning rate decay | 0.998 | +| client resources | {'num_cpus': 1.0, 'num_gpus': 0.5 }| +| data partition | IID | +| optimizer | SGD with dynamic regularization | +| alpha | 0.1 | + + +## Environment Setup + +To construct the Python environment follow these steps: + +```bash +# Set python version +pyenv install 3.10.6 +pyenv local 3.10.6 + +# Tell poetry to use python 3.10 +poetry env use 3.10.6 + +# Install the base Poetry environment +poetry install + +# Activate the environment +poetry shell +``` + + +## Running the Experiments + +To run this DepthFL, first ensure you have activated your Poetry environment (execute `poetry shell` from this directory), then: + +```bash +# this will run using the default settings in the `conf/config.yaml` +python -m depthfl.main # 'accuracy' : accuracy of the ensemble model, 'accuracy_single' : accuracy of each classifier. + +# you can override settings directly from the command line +python -m depthfl.main exclusive_learning=true model_size=1 # exclusive learning - 100% (a) +python -m depthfl.main exclusive_learning=true model_size=4 # exclusive learning - 25% (d) +python -m depthfl.main fit_config.feddyn=false fit_config.kd=false # DepthFL (FedAvg) +python -m depthfl.main fit_config.feddyn=false fit_config.kd=false fit_config.extended=false # InclusiveFL +``` + +To run using HeteroFL: +```bash +# since sbn takes too long, we test global model every 50 rounds. +python -m depthfl.main --config-name="heterofl" # HeteroFL +python -m depthfl.main --config-name="heterofl" exclusive_learning=true model_size=1 # exclusive learning - 100% (a) +``` + +### Stateful clients comment + +To implement `feddyn`, stateful clients that store prev_grads information are needed. Since flwr does not yet officially support stateful clients, it was implemented as a temporary measure by loading `prev_grads` from disk when creating a client, and then storing it again on disk after learning. Specifically, there are files that store the state of each client in the `prev_grads` folder. When the strategy is instantiated (for both `FedDyn` and `HeteroFL`) the content of `prev_grads` is reset. + + +## Expected Results + +With the following command we run DepthFL (FedDyn / FedAvg), InclusiveFL, and HeteroFL to replicate the results of table 2,3,4 in DepthFL paper. Tables 2, 3, and 4 may contain results from the same experiment in multiple tables. + +```bash +# table 2 (HeteroFL row) +python -m depthfl.main --config-name="heterofl" +python -m depthfl.main --config-name="heterofl" --multirun exclusive_learning=true model.scale=false model_size=1,2,3,4 + +# table 2 (DepthFL(FedAvg) row) +python -m depthfl.main fit_config.feddyn=false fit_config.kd=false +python -m depthfl.main --multirun fit_config.feddyn=false fit_config.kd=false exclusive_learning=true model_size=1,2,3,4 + +# table 2 (DepthFL row) +python -m depthfl.main +python -m depthfl.main --multirun exclusive_learning=true model_size=1,2,3,4 +``` + +**Table 2** + +100% (a), 75%(b), 50%(c), 25% (d) cases are exclusive learning scenario. 100% (a) exclusive learning means, the global model and every local model are equal to the smallest local model, and 100% clients participate in learning. 
Likewise, 25% (d) exclusive learning means that the global model and every local model are equal to the largest local model, and only 25% of clients participate in learning. + +| Scaling Method | Dataset | Global Model | 100% (a) | 75% (b) | 50% (c) | 25% (d) | +| :---: | :---: | :---: | :---: | :---: | :---: | :---: | +| HeteroFL
DepthFL (FedAvg)
DepthFL | CIFAR100 | 57.61
72.67
76.06 | 64.39
67.08
69.68 | 66.08
70.78
73.21 | 62.03
68.41
70.29 | 51.99
59.17
60.32 | + +```bash +# table 3 (Width Scaling - Duplicate results from table 2) +python -m depthfl.main --config-name="heterofl" +python -m depthfl.main --config-name="heterofl" --multirun exclusive_learning=true model.scale=false model_size=1,2,3,4 + +# table 3 (Depth Scaling : Exclusive Learning, DepthFL(FedAvg) rows - Duplicate results from table 2) +python -m depthfl.main fit_config.feddyn=false fit_config.kd=false +python -m depthfl.main --multirun fit_config.feddyn=false fit_config.kd=false exclusive_learning=true model_size=1,2,3,4 + +## table 3 (Depth Scaling - InclusiveFL row) +python -m depthfl.main fit_config.feddyn=false fit_config.kd=false fit_config.extended=false +``` + +**Table 3** + +Accuracy of global sub-models compared to exclusive learning on CIFAR-100. + +| Method | Algorithm | Classifier 1/4 | Classifier 2/4 | Classifier 3/4 | Classifier 4/4 | +| :---: | :---: | :---: | :---: | :---: | :---: | +| Width Scaling | Exclusive Learning
HeteroFL| 64.39
51.08 | 66.08
55.89 | 62.03
58.29 | 51.99
57.61 | + +| Method | Algorithm | Classifier 1/4 | Classifier 2/4 | Classifier 3/4 | Classifier 4/4 | +| :---: | :---: | :---: | :---: | :---: | :---: | +| Depth Scaling | Exclusive Learning
InclusiveFL
DepthFL (FedAvg) | 67.08
47.61
66.18 | 68.00
53.88
67.56 | 66.19
59.48
67.97 | 56.78
60.46
68.01 | + +```bash +# table 4 +python -m depthfl.main --multirun fit_config.kd=true,false dataset_config.iid=true,false +``` + +**Table 4** + +Accuracy of the global model with/without self distillation on CIFAR-100. + +| Distribution | Dataset | KD | Classifier 1/4 | Classifier 2/4 | Classifier 3/4 | Classifier 4/4 | Ensemble | +| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | +| IID | CIFAR100 | ✗
✓ | 70.13
71.74 | 69.63
73.35 | 68.92
73.57 | 68.92
73.55 | 74.48
76.06 | +| non-IID | CIFAR100 | ✗
✓ | 67.94
70.33 | 68.68
71.88 | 68.46
72.43 | 67.78
72.34 | 73.18
74.92 | + diff --git a/baselines/depthfl/depthfl/__init__.py b/baselines/depthfl/depthfl/__init__.py new file mode 100644 index 000000000000..3343905e1879 --- /dev/null +++ b/baselines/depthfl/depthfl/__init__.py @@ -0,0 +1 @@ +"""Flower summer of reproducibility : DepthFL (ICLR' 23).""" diff --git a/baselines/depthfl/depthfl/client.py b/baselines/depthfl/depthfl/client.py new file mode 100644 index 000000000000..481ac90f1c79 --- /dev/null +++ b/baselines/depthfl/depthfl/client.py @@ -0,0 +1,181 @@ +"""Defines the DepthFL Flower Client and a function to instantiate it.""" + +import copy +import pickle +from collections import OrderedDict +from typing import Callable, Dict, List, Tuple + +import flwr as fl +import numpy as np +import torch +from flwr.common.typing import NDArrays, Scalar +from hydra.utils import instantiate +from omegaconf import DictConfig +from torch.utils.data import DataLoader + +from depthfl.models import test, train + + +def prune(state_dict, param_idx): + """Prune width of DNN (for HeteroFL).""" + ret_dict = {} + for k in state_dict.keys(): + if "num" not in k: + ret_dict[k] = state_dict[k][torch.meshgrid(param_idx[k])] + else: + ret_dict[k] = state_dict[k] + return copy.deepcopy(ret_dict) + + +class FlowerClient( + fl.client.NumPyClient +): # pylint: disable=too-many-instance-attributes + """Standard Flower client for CNN training.""" + + def __init__( + self, + net: torch.nn.Module, + trainloader: DataLoader, + valloader: DataLoader, + device: torch.device, + num_epochs: int, + learning_rate: float, + learning_rate_decay: float, + prev_grads: Dict, + cid: int, + ): # pylint: disable=too-many-arguments + self.net = net + self.trainloader = trainloader + self.valloader = valloader + self.device = device + self.num_epochs = num_epochs + self.learning_rate = learning_rate + self.learning_rate_decay = learning_rate_decay + self.prev_grads = prev_grads + self.cid = cid + self.param_idx = {} + state_dict = net.state_dict() + + # for HeteroFL + for k in state_dict.keys(): + self.param_idx[k] = [ + torch.arange(size) for size in state_dict[k].shape + ] # store client's weights' shape (for HeteroFL) + + def get_parameters(self, config: Dict[str, Scalar]) -> NDArrays: + """Return the parameters of the current net.""" + return [val.cpu().numpy() for _, val in self.net.state_dict().items()] + + def set_parameters(self, parameters: NDArrays) -> None: + """Change the parameters of the model using the given ones.""" + params_dict = zip(self.net.state_dict().keys(), parameters) + state_dict = OrderedDict({k: torch.tensor(v) for k, v in params_dict}) + self.net.load_state_dict(prune(state_dict, self.param_idx), strict=True) + + def fit( + self, parameters: NDArrays, config: Dict[str, Scalar] + ) -> Tuple[NDArrays, int, Dict]: + """Implement distributed fit function for a given client.""" + self.set_parameters(parameters) + num_epochs = self.num_epochs + + curr_round = int(config["curr_round"]) - 1 + + # consistency weight for self distillation in DepthFL + consistency_weight_constant = 300 + current = np.clip(curr_round, 0.0, consistency_weight_constant) + phase = 1.0 - current / consistency_weight_constant + consistency_weight = float(np.exp(-5.0 * phase * phase)) + + train( + self.net, + self.trainloader, + self.device, + epochs=num_epochs, + learning_rate=self.learning_rate * self.learning_rate_decay**curr_round, + config=config, + consistency_weight=consistency_weight, + prev_grads=self.prev_grads, + ) + + with open(f"prev_grads/client_{self.cid}", "wb") as prev_grads_file: + 
pickle.dump(self.prev_grads, prev_grads_file) + + return self.get_parameters({}), len(self.trainloader), {"cid": self.cid} + + def evaluate( + self, parameters: NDArrays, config: Dict[str, Scalar] + ) -> Tuple[float, int, Dict]: + """Implement distributed evaluation for a given client.""" + self.set_parameters(parameters) + loss, accuracy, accuracy_single = test(self.net, self.valloader, self.device) + return ( + float(loss), + len(self.valloader), + {"accuracy": float(accuracy), "accuracy_single": accuracy_single}, + ) + + +def gen_client_fn( # pylint: disable=too-many-arguments + num_epochs: int, + trainloaders: List[DataLoader], + valloaders: List[DataLoader], + learning_rate: float, + learning_rate_decay: float, + models: List[DictConfig], +) -> Callable[[str], FlowerClient]: + """Generate the client function that creates the Flower Clients. + + Parameters + ---------- + num_epochs : int + The number of local epochs each client should run the training for before + sending it to the server. + trainloaders: List[DataLoader] + A list of DataLoaders, each pointing to the dataset training partition + belonging to a particular client. + valloaders: List[DataLoader] + A list of DataLoaders, each pointing to the dataset validation partition + belonging to a particular client. + learning_rate : float + The learning rate for the SGD optimizer of clients. + learning_rate_decay : float + The learning rate decay ratio per round for the SGD optimizer of clients. + models : List[DictConfig] + A list of DictConfigs, each pointing to the model config of client's local model + + Returns + ------- + Callable[[str], FlowerClient] + client function that creates Flower Clients + """ + + def client_fn(cid: str) -> FlowerClient: + """Create a Flower client representing a single organization.""" + # Load model + device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu") + + # each client gets a different model config (different width / depth) + net = instantiate(models[int(cid)]).to(device) + + # Note: each client gets a different trainloader/valloader, so each client + # will train and evaluate on their own unique data + trainloader = trainloaders[int(cid)] + valloader = valloaders[int(cid)] + + with open(f"prev_grads/client_{int(cid)}", "rb") as prev_grads_file: + prev_grads = pickle.load(prev_grads_file) + + return FlowerClient( + net, + trainloader, + valloader, + device, + num_epochs, + learning_rate, + learning_rate_decay, + prev_grads, + int(cid), + ) + + return client_fn diff --git a/baselines/depthfl/depthfl/conf/config.yaml b/baselines/depthfl/depthfl/conf/config.yaml new file mode 100644 index 000000000000..5a126229956e --- /dev/null +++ b/baselines/depthfl/depthfl/conf/config.yaml @@ -0,0 +1,42 @@ +--- + +num_clients: 100 # total number of clients +num_epochs: 5 # number of local epochs +batch_size: 50 +num_rounds: 1000 +fraction: 0.1 # participation ratio +learning_rate: 0.1 +learning_rate_decay : 0.998 # per round +static_bn: false # static batch normalization (HeteroFL) +exclusive_learning: false # exclusive learning baseline in DepthFL paper +model_size: 1 # model size for exclusive learning + +client_resources: + num_cpus: 1 + num_gpus: 0.5 + +server_device: cuda + +dataset_config: + iid: true + beta: 0.5 + +fit_config: + feddyn: true + kd: true + alpha: 0.1 # alpha for FedDyn + extended: true # if not extended : InclusiveFL + drop_client: false # with FedProx, clients shouldn't be dropped even if they are stragglers + +model: + _target_: depthfl.resnet.multi_resnet18 + n_blocks: 
4 # depth (1 ~ 4) + num_classes: 100 + +strategy: + _target_: depthfl.strategy.FedDyn + fraction_fit: 0.00001 # because we want the number of clients to sample on each round to be solely defined by min_fit_clients + fraction_evaluate: 0.0 + # min_fit_clients: ${clients_per_round} + min_evaluate_clients: 0 + # min_available_clients: ${clients_per_round} \ No newline at end of file diff --git a/baselines/depthfl/depthfl/conf/heterofl.yaml b/baselines/depthfl/depthfl/conf/heterofl.yaml new file mode 100644 index 000000000000..ad0bb8c8f8b8 --- /dev/null +++ b/baselines/depthfl/depthfl/conf/heterofl.yaml @@ -0,0 +1,43 @@ +--- + +num_clients: 100 # total number of clients +num_epochs: 5 # number of local epochs +batch_size: 50 +num_rounds: 1000 +fraction: 0.1 # participation ratio +learning_rate: 0.1 +learning_rate_decay : 0.998 # per round +static_bn: true # static batch normalization (HeteroFL) +exclusive_learning: false # exclusive learning baseline in DepthFL paper +model_size: 1 # model size for exclusive learning + +client_resources: + num_cpus: 1 + num_gpus: 0.5 + +server_device: cuda + +dataset_config: + iid: true + beta: 0.5 + +fit_config: + feddyn: false + kd: false + alpha: 0.1 # unused + extended: false # unused + drop_client: false # with FedProx, clients shouldn't be dropped even if they are stragglers + +model: + _target_: depthfl.resnet_hetero.resnet18 + n_blocks: 4 # width (1 ~ 4) + num_classes: 100 + scale: true # scaler module in HeteroFL + +strategy: + _target_: depthfl.strategy_hetero.HeteroFL + fraction_fit: 0.00001 # because we want the number of clients to sample on each round to be solely defined by min_fit_clients + fraction_evaluate: 0.0 + # min_fit_clients: ${clients_per_round} + min_evaluate_clients: 0 + # min_available_clients: ${clients_per_round} \ No newline at end of file diff --git a/baselines/depthfl/depthfl/dataset.py b/baselines/depthfl/depthfl/dataset.py new file mode 100644 index 000000000000..c2024fe068a0 --- /dev/null +++ b/baselines/depthfl/depthfl/dataset.py @@ -0,0 +1,60 @@ +"""CIFAR100 dataset utilities for federated learning.""" + +from typing import Optional, Tuple + +import torch +from omegaconf import DictConfig +from torch.utils.data import DataLoader, random_split + +from depthfl.dataset_preparation import _partition_data + + +def load_datasets( # pylint: disable=too-many-arguments + config: DictConfig, + num_clients: int, + val_ratio: float = 0.0, + batch_size: Optional[int] = 32, + seed: Optional[int] = 41, +) -> Tuple[DataLoader, DataLoader, DataLoader]: + """Create the dataloaders to be fed into the model. + + Parameters + ---------- + config: DictConfig + Parameterises the dataset partitioning process + num_clients : int + The number of clients that hold a part of the data + val_ratio : float, optional + The ratio of training data that will be used for validation (between 0 and 1), + by default 0.1 + batch_size : int, optional + The size of the batches to be fed into the model, by default 32 + seed : int, optional + Used to set a fix seed to replicate experiments, by default 42 + + Returns + ------- + Tuple[DataLoader, DataLoader, DataLoader] + The DataLoader for training, validation, and testing. 
+ """ + print(f"Dataset partitioning config: {config}") + datasets, testset = _partition_data( + num_clients, + iid=config.iid, + beta=config.beta, + seed=seed, + ) + # Split each partition into train/val and create DataLoader + trainloaders = [] + valloaders = [] + for dataset in datasets: + len_val = 0 + if val_ratio > 0: + len_val = int(len(dataset) / (1 / val_ratio)) + lengths = [len(dataset) - len_val, len_val] + ds_train, ds_val = random_split( + dataset, lengths, torch.Generator().manual_seed(seed) + ) + trainloaders.append(DataLoader(ds_train, batch_size=batch_size, shuffle=True)) + valloaders.append(DataLoader(ds_val, batch_size=batch_size)) + return trainloaders, valloaders, DataLoader(testset, batch_size=batch_size) diff --git a/baselines/depthfl/depthfl/dataset_preparation.py b/baselines/depthfl/depthfl/dataset_preparation.py new file mode 100644 index 000000000000..006491c7679e --- /dev/null +++ b/baselines/depthfl/depthfl/dataset_preparation.py @@ -0,0 +1,125 @@ +"""Dataset(CIFAR100) preparation for DepthFL.""" + +from typing import List, Optional, Tuple + +import numpy as np +import torchvision.transforms as transforms +from torch.utils.data import Dataset, Subset +from torchvision.datasets import CIFAR100 + + +def _download_data() -> Tuple[Dataset, Dataset]: + """Download (if necessary) and returns the CIFAR-100 dataset. + + Returns + ------- + Tuple[CIFAR100, CIFAR100] + The dataset for training and the dataset for testing CIFAR100. + """ + transform_train = transforms.Compose( + [ + transforms.ToTensor(), + transforms.RandomCrop(32, padding=4), + transforms.RandomHorizontalFlip(), + transforms.Normalize((0.5071, 0.4867, 0.4408), (0.2675, 0.2565, 0.2761)), + ] + ) + + transform_test = transforms.Compose( + [ + transforms.ToTensor(), + transforms.Normalize((0.5071, 0.4867, 0.4408), (0.2675, 0.2565, 0.2761)), + ] + ) + + trainset = CIFAR100( + "./dataset", train=True, download=True, transform=transform_train + ) + testset = CIFAR100( + "./dataset", train=False, download=True, transform=transform_test + ) + return trainset, testset + + +def _partition_data( + num_clients, + iid: Optional[bool] = True, + beta=0.5, + seed=41, +) -> Tuple[List[Dataset], Dataset]: + """Split training set to simulate the federated setting. + + Parameters + ---------- + num_clients : int + The number of clients that hold a part of the data + iid : bool, optional + Whether the data should be independent and identically distributed + or if the data should first be sorted by labels and distributed by + noniid manner to each client, by default true + beta : hyperparameter for dirichlet distribution + seed : int, optional + Used to set a fix seed to replicate experiments, by default 42 + + Returns + ------- + Tuple[List[Dataset], Dataset] + A list of dataset for each client and a + single dataset to be use for testing the model. 
+ """ + trainset, testset = _download_data() + + datasets: List[Subset] = [] + + if iid: + distribute_iid(num_clients, seed, trainset, datasets) + + else: + distribute_noniid(num_clients, beta, seed, trainset, datasets) + + return datasets, testset + + +def distribute_iid(num_clients, seed, trainset, datasets): + """Distribute dataset in iid manner.""" + np.random.seed(seed) + num_sample = int(len(trainset) / (num_clients)) + index = list(range(len(trainset))) + for _ in range(num_clients): + sample_idx = np.random.choice(index, num_sample, replace=False) + index = list(set(index) - set(sample_idx)) + datasets.append(Subset(trainset, sample_idx)) + + +def distribute_noniid(num_clients, beta, seed, trainset, datasets): + """Distribute dataset in non-iid manner.""" + labels = np.array([label for _, label in trainset]) + min_size = 0 + np.random.seed(seed) + + while min_size < 10: + idx_batch = [[] for _ in range(num_clients)] + # for each class in the dataset + for k in range(np.max(labels) + 1): + idx_k = np.where(labels == k)[0] + np.random.shuffle(idx_k) + proportions = np.random.dirichlet(np.repeat(beta, num_clients)) + # Balance + proportions = np.array( + [ + p * (len(idx_j) < labels.shape[0] / num_clients) + for p, idx_j in zip(proportions, idx_batch) + ] + ) + proportions = proportions / proportions.sum() + proportions = (np.cumsum(proportions) * len(idx_k)).astype(int)[:-1] + idx_batch = [ + idx_j + idx.tolist() + for idx_j, idx in zip(idx_batch, np.split(idx_k, proportions)) + ] + min_size = min([len(idx_j) for idx_j in idx_batch]) + + for j in range(num_clients): + np.random.shuffle(idx_batch[j]) + # net_dataidx_map[j] = np.array(idx_batch[j]) + datasets.append(Subset(trainset, np.array(idx_batch[j]))) diff --git a/baselines/depthfl/depthfl/main.py b/baselines/depthfl/depthfl/main.py new file mode 100644 index 000000000000..7bf1d9563eae --- /dev/null +++ b/baselines/depthfl/depthfl/main.py @@ -0,0 +1,135 @@ +"""DepthFL main.""" + +import copy + +import flwr as fl +import hydra +from flwr.common import ndarrays_to_parameters +from flwr.server.client_manager import SimpleClientManager +from hydra.core.hydra_config import HydraConfig +from hydra.utils import instantiate +from omegaconf import DictConfig, OmegaConf + +from depthfl import client, server +from depthfl.dataset import load_datasets +from depthfl.utils import save_results_as_pickle + + +@hydra.main(config_path="conf", config_name="config", version_base=None) +def main(cfg: DictConfig) -> None: + """Run the baseline. + + Parameters + ---------- + cfg : DictConfig + An omegaconf object that stores the hydra config. 
+ """ + print(OmegaConf.to_yaml(cfg)) + + # partition dataset and get dataloaders + trainloaders, valloaders, testloader = load_datasets( + config=cfg.dataset_config, + num_clients=cfg.num_clients, + batch_size=cfg.batch_size, + ) + + # exclusive learning baseline in DepthFL paper + # (model_size, % of clients) = (a,100), (b,75), (c,50), (d,25) + if cfg.exclusive_learning: + cfg.num_clients = int( + cfg.num_clients - (cfg.model_size - 1) * (cfg.num_clients // 4) + ) + + models = [] + for i in range(cfg.num_clients): + model = copy.deepcopy(cfg.model) + + # each client gets different model depth / width + model.n_blocks = i // (cfg.num_clients // 4) + 1 + + # In exclusive learning, every client has same model depth / width + if cfg.exclusive_learning: + model.n_blocks = cfg.model_size + + models.append(model) + + # prepare function that will be used to spawn each client + client_fn = client.gen_client_fn( + num_epochs=cfg.num_epochs, + trainloaders=trainloaders, + valloaders=valloaders, + learning_rate=cfg.learning_rate, + learning_rate_decay=cfg.learning_rate_decay, + models=models, + ) + + # get function that will executed by the strategy's evaluate() method + # Set server's device + device = cfg.server_device + + # Static Batch Normalization for HeteroFL + if cfg.static_bn: + evaluate_fn = server.gen_evaluate_fn_hetero( + trainloaders, testloader, device=device, model_cfg=model + ) + else: + evaluate_fn = server.gen_evaluate_fn(testloader, device=device, model=model) + + # get a function that will be used to construct the config that the client's + # fit() method will received + def get_on_fit_config(): + def fit_config_fn(server_round): + # resolve and convert to python dict + fit_config = OmegaConf.to_container(cfg.fit_config, resolve=True) + fit_config["curr_round"] = server_round # add round info + return fit_config + + return fit_config_fn + + net = instantiate(cfg.model) + # instantiate strategy according to config. Here we pass other arguments + # that are only defined at run time. + strategy = instantiate( + cfg.strategy, + cfg, + net, + evaluate_fn=evaluate_fn, + on_fit_config_fn=get_on_fit_config(), + initial_parameters=ndarrays_to_parameters( + [val.cpu().numpy() for _, val in net.state_dict().items()] + ), + min_fit_clients=int(cfg.num_clients * cfg.fraction), + min_available_clients=int(cfg.num_clients * cfg.fraction), + ) + + # Start simulation + history = fl.simulation.start_simulation( + client_fn=client_fn, + num_clients=cfg.num_clients, + config=fl.server.ServerConfig(num_rounds=cfg.num_rounds), + client_resources={ + "num_cpus": cfg.client_resources.num_cpus, + "num_gpus": cfg.client_resources.num_gpus, + }, + strategy=strategy, + server=server.ServerFedDyn( + client_manager=SimpleClientManager(), strategy=strategy + ), + ) + + # Experiment completed. 
Now we save the results and + # generate plots using the `history` + print("................") + print(history) + + # Hydra automatically creates an output directory + # Let's retrieve it and save some results there + save_path = HydraConfig.get().runtime.output_dir + + # save results as a Python pickle using a file_path + # the directory created by Hydra for each run + save_results_as_pickle(history, file_path=save_path, extra_results={}) + + +if __name__ == "__main__": + main() diff --git a/baselines/depthfl/depthfl/models.py b/baselines/depthfl/depthfl/models.py new file mode 100644 index 000000000000..df3eebf9f9ce --- /dev/null +++ b/baselines/depthfl/depthfl/models.py @@ -0,0 +1,301 @@ +"""ResNet18 model architecutre, training, and testing functions for CIFAR100.""" + + +from typing import List, Tuple + +import torch +import torch.nn as nn +import torch.nn.functional as F +from omegaconf import DictConfig +from torch.utils.data import DataLoader + + +class KLLoss(nn.Module): + """KL divergence loss for self distillation.""" + + def __init__(self): + super().__init__() + self.temperature = 1 + + def forward(self, pred, label): + """KL loss forward.""" + predict = F.log_softmax(pred / self.temperature, dim=1) + target_data = F.softmax(label / self.temperature, dim=1) + target_data = target_data + 10 ** (-7) + with torch.no_grad(): + target = target_data.detach().clone() + + loss = ( + self.temperature + * self.temperature + * ((target * (target.log() - predict)).sum(1).sum() / target.size()[0]) + ) + return loss + + +def train( # pylint: disable=too-many-arguments + net: nn.Module, + trainloader: DataLoader, + device: torch.device, + epochs: int, + learning_rate: float, + config: dict, + consistency_weight: float, + prev_grads: dict, +) -> None: + """Train the network on the training set. + + Parameters + ---------- + net : nn.Module + The neural network to train. + trainloader : DataLoader + The DataLoader containing the data to train the network on. + device : torch.device + The device on which the model should be trained, either 'cpu' or 'cuda'. + epochs : int + The number of epochs the model should be trained for. + learning_rate : float + The learning rate for the SGD optimizer. 
+ config : dict + training configuration + consistency_weight : float + hyperparameter for self distillation + prev_grads : dict + control variate for feddyn + """ + criterion = torch.nn.CrossEntropyLoss() + optimizer = torch.optim.SGD(net.parameters(), lr=learning_rate, weight_decay=1e-3) + global_params = { + k: val.detach().clone().flatten() for (k, val) in net.named_parameters() + } + + for k, _ in net.named_parameters(): + prev_grads[k] = prev_grads[k].to(device) + + net.train() + for _ in range(epochs): + _train_one_epoch( + net, + global_params, + trainloader, + device, + criterion, + optimizer, + config, + consistency_weight, + prev_grads, + ) + + # update prev_grads for FedDyn + if config["feddyn"]: + update_prev_grads(config, net, prev_grads, global_params) + + +def update_prev_grads(config, net, prev_grads, global_params): + """Update prev_grads for FedDyn.""" + for k, param in net.named_parameters(): + curr_param = param.detach().clone().flatten() + prev_grads[k] = prev_grads[k] - config["alpha"] * ( + curr_param - global_params[k] + ) + prev_grads[k] = prev_grads[k].to(torch.device(torch.device("cpu"))) + + +def _train_one_epoch( # pylint: disable=too-many-locals, too-many-arguments + net: nn.Module, + global_params: dict, + trainloader: DataLoader, + device: torch.device, + criterion: torch.nn.CrossEntropyLoss, + optimizer: torch.optim.SGD, + config: dict, + consistency_weight: float, + prev_grads: dict, +): + """Train for one epoch. + + Parameters + ---------- + net : nn.Module + The neural network to train. + global_params : List[Parameter] + The parameters of the global model (from the server). + trainloader : DataLoader + The DataLoader containing the data to train the network on. + device : torch.device + The device on which the model should be trained, either 'cpu' or 'cuda'. + criterion : torch.nn.CrossEntropyLoss + The loss function to use for training + optimizer : torch.optim.Adam + The optimizer to use for training + config : dict + training configuration + consistency_weight : float + hyperparameter for self distillation + prev_grads : dict + control variate for feddyn + """ + criterion_kl = KLLoss().cuda() + + for images, labels in trainloader: + images, labels = images.to(device), labels.to(device) + loss = torch.zeros(1).to(device) + optimizer.zero_grad() + output_lst = net(images) + + for i, branch_output in enumerate(output_lst): + # only trains last classifier in InclusiveFL + if not config["extended"] and i != len(output_lst) - 1: + continue + + loss += criterion(branch_output, labels) + + # self distillation term + if config["kd"] and len(output_lst) > 1: + for j, output in enumerate(output_lst): + if j == i: + continue + + loss += ( + consistency_weight + * criterion_kl(branch_output, output.detach()) + / (len(output_lst) - 1) + ) + + # Dynamic regularization in FedDyn + if config["feddyn"]: + for k, param in net.named_parameters(): + curr_param = param.flatten() + + lin_penalty = torch.dot(curr_param, prev_grads[k]) + loss -= lin_penalty + + quad_penalty = ( + config["alpha"] + / 2.0 + * torch.sum(torch.square(curr_param - global_params[k])) + ) + loss += quad_penalty + + loss.backward() + optimizer.step() + + +def test( # pylint: disable=too-many-locals + net: nn.Module, testloader: DataLoader, device: torch.device +) -> Tuple[float, float, List[float]]: + """Evaluate the network on the entire test set. + + Parameters + ---------- + net : nn.Module + The neural network to test. 
+ testloader : DataLoader + The DataLoader containing the data to test the network on. + device : torch.device + The device on which the model should be tested, either 'cpu' or 'cuda'. + + Returns + ------- + Tuple[float, float, List[float]] + The loss and the accuracy of the global model + and the list of accuracy for each classifier on the given data. + """ + criterion = torch.nn.CrossEntropyLoss() + correct, total, loss = 0, 0, 0.0 + correct_single = [0] * 4 # accuracy of each classifier within model + net.eval() + with torch.no_grad(): + for images, labels in testloader: + images, labels = images.to(device), labels.to(device) + output_lst = net(images) + + # ensemble classfiers' output + ensemble_output = torch.stack(output_lst, dim=2) + ensemble_output = torch.sum(ensemble_output, dim=2) / len(output_lst) + + loss += criterion(ensemble_output, labels).item() + _, predicted = torch.max(ensemble_output, 1) + total += labels.size(0) + correct += (predicted == labels).sum().item() + + for i, single in enumerate(output_lst): + _, predicted = torch.max(single, 1) + correct_single[i] += (predicted == labels).sum().item() + + if len(testloader.dataset) == 0: + raise ValueError("Testloader can't be 0, exiting...") + loss /= len(testloader.dataset) + accuracy = correct / total + accuracy_single = [correct / total for correct in correct_single] + return loss, accuracy, accuracy_single + + +def test_sbn( # pylint: disable=too-many-locals + nets: List[nn.Module], + trainloaders: List[DictConfig], + testloader: DataLoader, + device: torch.device, +) -> Tuple[float, float, List[float]]: + """Evaluate the networks on the entire test set. + + Parameters + ---------- + nets : List[nn.Module] + The neural networks to test. Each neural network has different width + trainloaders : List[DataLoader] + The List of dataloaders containing the data to train the network on + testloader : DataLoader + The DataLoader containing the data to test the network on. + device : torch.device + The device on which the model should be tested, either 'cpu' or 'cuda'. + + Returns + ------- + Tuple[float, float, List[float]] + The loss and the accuracy of the global model + and the list of accuracy for each classifier on the given data. 
+ """ + # static batch normalization + for trainloader in trainloaders: + with torch.no_grad(): + for model in nets: + model.train() + for _batch_idx, (images, labels) in enumerate(trainloader): + images, labels = images.to(device), labels.to(device) + output = model(images) + + model.eval() + + criterion = torch.nn.CrossEntropyLoss() + correct, total, loss = 0, 0, 0.0 + correct_single = [0] * 4 + + # test each network of different width + with torch.no_grad(): + for images, labels in testloader: + images, labels = images.to(device), labels.to(device) + + output_lst = [] + + for model in nets: + output_lst.append(model(images)[0]) + + output = output_lst[-1] + + loss += criterion(output, labels).item() + _, predicted = torch.max(output, 1) + total += labels.size(0) + correct += (predicted == labels).sum().item() + + for i, single in enumerate(output_lst): + _, predicted = torch.max(single, 1) + correct_single[i] += (predicted == labels).sum().item() + + if len(testloader.dataset) == 0: + raise ValueError("Testloader can't be 0, exiting...") + loss /= len(testloader.dataset) + accuracy = correct / total + accuracy_single = [correct / total for correct in correct_single] + return loss, accuracy, accuracy_single diff --git a/baselines/depthfl/depthfl/resnet.py b/baselines/depthfl/depthfl/resnet.py new file mode 100644 index 000000000000..04348ae17441 --- /dev/null +++ b/baselines/depthfl/depthfl/resnet.py @@ -0,0 +1,386 @@ +"""ResNet18 for DepthFL.""" + +import torch.nn as nn + + +class MyGroupNorm(nn.Module): + """Group Normalization layer.""" + + def __init__(self, num_channels): + super().__init__() + # change num_groups to 32 + self.norm = nn.GroupNorm( + num_groups=16, num_channels=num_channels, eps=1e-5, affine=True + ) + + def forward(self, x): + """GN forward.""" + x = self.norm(x) + return x + + +class MyBatchNorm(nn.Module): + """Batch Normalization layer.""" + + def __init__(self, num_channels): + super().__init__() + self.norm = nn.BatchNorm2d(num_channels, track_running_stats=True) + + def forward(self, x): + """BN forward.""" + x = self.norm(x) + return x + + +def conv3x3(in_planes, out_planes, stride=1): + """Convolution layer 3x3.""" + return nn.Conv2d( + in_planes, out_planes, kernel_size=3, stride=stride, padding=1, bias=False + ) + + +def conv1x1(in_planes, planes, stride=1): + """Convolution layer 1x1.""" + return nn.Conv2d(in_planes, planes, kernel_size=1, stride=stride, bias=False) + + +class SepConv(nn.Module): + """Bottleneck layer module.""" + + def __init__( # pylint: disable=too-many-arguments + self, + channel_in, + channel_out, + kernel_size=3, + stride=2, + padding=1, + norm_layer=MyGroupNorm, + ): + super().__init__() + self.operations = nn.Sequential( + nn.Conv2d( + channel_in, + channel_in, + kernel_size=kernel_size, + stride=stride, + padding=padding, + groups=channel_in, + bias=False, + ), + nn.Conv2d(channel_in, channel_in, kernel_size=1, padding=0, bias=False), + norm_layer(channel_in), + nn.ReLU(inplace=False), + nn.Conv2d( + channel_in, + channel_in, + kernel_size=kernel_size, + stride=1, + padding=padding, + groups=channel_in, + bias=False, + ), + nn.Conv2d(channel_in, channel_out, kernel_size=1, padding=0, bias=False), + norm_layer(channel_out), + nn.ReLU(inplace=False), + ) + + def forward(self, x): + """SepConv forward.""" + return self.operations(x) + + +class BasicBlock(nn.Module): + """Basic Block for ResNet18.""" + + expansion = 1 + + def __init__( + self, inplanes, planes, stride=1, downsample=None, norm_layer=None + ): # pylint: 
disable=too-many-arguments + super().__init__() + self.conv1 = conv3x3(inplanes, planes, stride) + self.bn1 = norm_layer(planes) + self.relu = nn.ReLU(inplace=True) + self.conv2 = conv3x3(planes, planes) + self.bn2 = norm_layer(planes) + self.downsample = downsample + self.stride = stride + + def forward(self, x): + """BasicBlock forward.""" + residual = x + + output = self.conv1(x) + output = self.bn1(output) + output = self.relu(output) + + output = self.conv2(output) + output = self.bn2(output) + + if self.downsample is not None: + residual = self.downsample(x) + + output += residual + output = self.relu(output) + return output + + +class MultiResnet(nn.Module): # pylint: disable=too-many-instance-attributes + """Resnet model. + + Args: + block (class): block type, BasicBlock or BottleneckBlock + layers (int list): layer num in each block + n_blocks (int) : Depth of network + num_classes (int): class num. + norm_layer (class): type of normalization layer. + """ + + def __init__( # pylint: disable=too-many-arguments + self, + block, + layers, + n_blocks, + num_classes=1000, + norm_layer=MyBatchNorm, + ): + super().__init__() + self.n_blocks = n_blocks + self.inplanes = 64 + self.norm_layer = norm_layer + self.conv1 = nn.Conv2d( + 3, self.inplanes, kernel_size=3, stride=1, padding=1, bias=False + ) + self.bn1 = norm_layer(self.inplanes) + + self.relu = nn.ReLU(inplace=True) + # self.maxpool = nn.MaxPool2d(kernel_size=3, stride=2, padding=1) + + self.layer1 = self._make_layer(block, 64, layers[0]) + + self.middle_fc1 = nn.Linear(512 * block.expansion, num_classes) + # self.feature_fc1 = nn.Linear(512 * block.expansion, 512 * block.expansion) + self.scala1 = nn.Sequential( + SepConv( + channel_in=64 * block.expansion, + channel_out=128 * block.expansion, + norm_layer=norm_layer, + ), + SepConv( + channel_in=128 * block.expansion, + channel_out=256 * block.expansion, + norm_layer=norm_layer, + ), + SepConv( + channel_in=256 * block.expansion, + channel_out=512 * block.expansion, + norm_layer=norm_layer, + ), + nn.AdaptiveAvgPool2d(1), + ) + + self.attention1 = nn.Sequential( + SepConv( + channel_in=64 * block.expansion, + channel_out=64 * block.expansion, + norm_layer=norm_layer, + ), + norm_layer(64 * block.expansion), + nn.ReLU(), + nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False), + nn.Sigmoid(), + ) + + if n_blocks > 1: + self.layer2 = self._make_layer(block, 128, layers[1], stride=2) + self.middle_fc2 = nn.Linear(512 * block.expansion, num_classes) + # self.feature_fc2 = nn.Linear(512 * block.expansion, 512 * block.expansion) + self.scala2 = nn.Sequential( + SepConv( + channel_in=128 * block.expansion, + channel_out=256 * block.expansion, + norm_layer=norm_layer, + ), + SepConv( + channel_in=256 * block.expansion, + channel_out=512 * block.expansion, + norm_layer=norm_layer, + ), + nn.AdaptiveAvgPool2d(1), + ) + self.attention2 = nn.Sequential( + SepConv( + channel_in=128 * block.expansion, + channel_out=128 * block.expansion, + norm_layer=norm_layer, + ), + norm_layer(128 * block.expansion), + nn.ReLU(), + nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False), + nn.Sigmoid(), + ) + + if n_blocks > 2: + self.layer3 = self._make_layer(block, 256, layers[2], stride=2) + self.middle_fc3 = nn.Linear(512 * block.expansion, num_classes) + # self.feature_fc3 = nn.Linear(512 * block.expansion, 512 * block.expansion) + self.scala3 = nn.Sequential( + SepConv( + channel_in=256 * block.expansion, + channel_out=512 * block.expansion, + norm_layer=norm_layer, + ), + 
nn.AdaptiveAvgPool2d(1), + ) + self.attention3 = nn.Sequential( + SepConv( + channel_in=256 * block.expansion, + channel_out=256 * block.expansion, + norm_layer=norm_layer, + ), + norm_layer(256 * block.expansion), + nn.ReLU(), + nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False), + nn.Sigmoid(), + ) + + if n_blocks > 3: + self.layer4 = self._make_layer(block, 512, layers[3], stride=2) + self.fc_layer = nn.Linear(512 * block.expansion, num_classes) + self.scala4 = nn.AdaptiveAvgPool2d(1) + + for module in self.modules(): + if isinstance(module, nn.Conv2d): + nn.init.kaiming_normal_( + module.weight, mode="fan_out", nonlinearity="relu" + ) + elif isinstance(module, (nn.BatchNorm2d, nn.GroupNorm)): + nn.init.constant_(module.weight, 1) + nn.init.constant_(module.bias, 0) + + def _make_layer( + self, block, planes, layers, stride=1, norm_layer=None + ): # pylint: disable=too-many-arguments + """Create a block with layers. + + Args: + block (class): block type + planes (int): output channels = planes * expansion + layers (int): layer num in the block + stride (int): the first layer stride in the block. + norm_layer (class): type of normalization layer. + """ + norm_layer = self.norm_layer + downsample = None + if stride != 1 or self.inplanes != planes * block.expansion: + downsample = nn.Sequential( + conv1x1(self.inplanes, planes * block.expansion, stride), + norm_layer(planes * block.expansion), + ) + layer = [] + layer.append( + block( + self.inplanes, + planes, + stride=stride, + downsample=downsample, + norm_layer=norm_layer, + ) + ) + self.inplanes = planes * block.expansion + for _i in range(1, layers): + layer.append(block(self.inplanes, planes, norm_layer=norm_layer)) + + return nn.Sequential(*layer) + + def forward(self, x): + """Resnet forward.""" + x = self.conv1(x) + x = self.bn1(x) + x = self.relu(x) + # x = self.maxpool(x) + + x = self.layer1(x) + fea1 = self.attention1(x) + fea1 = fea1 * x + out1_feature = self.scala1(fea1).view(x.size(0), -1) + middle_output1 = self.middle_fc1(out1_feature) + # out1_feature = self.feature_fc1(out1_feature) + + if self.n_blocks == 1: + return [middle_output1] + + x = self.layer2(x) + fea2 = self.attention2(x) + fea2 = fea2 * x + out2_feature = self.scala2(fea2).view(x.size(0), -1) + middle_output2 = self.middle_fc2(out2_feature) + # out2_feature = self.feature_fc2(out2_feature) + if self.n_blocks == 2: + return [middle_output1, middle_output2] + + x = self.layer3(x) + fea3 = self.attention3(x) + fea3 = fea3 * x + out3_feature = self.scala3(fea3).view(x.size(0), -1) + middle_output3 = self.middle_fc3(out3_feature) + # out3_feature = self.feature_fc3(out3_feature) + + if self.n_blocks == 3: + return [middle_output1, middle_output2, middle_output3] + + x = self.layer4(x) + out4_feature = self.scala4(x).view(x.size(0), -1) + output4 = self.fc_layer(out4_feature) + + return [middle_output1, middle_output2, middle_output3, output4] + + +def multi_resnet18(n_blocks=1, norm="bn", num_classes=100): + """Create resnet18 for HeteroFL. 
+ + Parameters + ---------- + n_blocks: int + depth of network + norm: str + normalization layer type + num_classes: int + # of labels + + Returns + ------- + Callable [ [nn.Module,List[int],int,int,nn.Module], nn.Module] + """ + if norm == "gn": + norm_layer = MyGroupNorm + + elif norm == "bn": + norm_layer = MyBatchNorm + + return MultiResnet( + BasicBlock, + [2, 2, 2, 2], + n_blocks, + num_classes=num_classes, + norm_layer=norm_layer, + ) + + +# if __name__ == "__main__": +# from ptflops import get_model_complexity_info + +# model = MultiResnet18(n_blocks=4, num_classes=100) + +# with torch.cuda.device(0): +# macs, params = get_model_complexity_info( +# model, +# (3, 32, 32), +# as_strings=True, +# print_per_layer_stat=False, +# verbose=True, +# units="MMac", +# ) + +# print("{:<30} {:<8}".format("Computational complexity: ", macs)) +# print("{:<30} {:<8}".format("Number of parameters: ", params)) diff --git a/baselines/depthfl/depthfl/resnet_hetero.py b/baselines/depthfl/depthfl/resnet_hetero.py new file mode 100644 index 000000000000..a84c07b881b2 --- /dev/null +++ b/baselines/depthfl/depthfl/resnet_hetero.py @@ -0,0 +1,280 @@ +"""ResNet18 for HeteroFL.""" + +import numpy as np +import torch.nn as nn + + +class Scaler(nn.Module): + """Scaler module for HeteroFL.""" + + def __init__(self, rate, scale): + super().__init__() + if scale: + self.rate = rate + else: + self.rate = 1 + + def forward(self, x): + """Scaler forward.""" + output = x / self.rate if self.training else x + return output + + +class MyBatchNorm(nn.Module): + """Static Batch Normalization for HeteroFL.""" + + def __init__(self, num_channels, track=True): + super().__init__() + # change num_groups to 32 + self.norm = nn.BatchNorm2d(num_channels, track_running_stats=track) + + def forward(self, x): + """BatchNorm forward.""" + x = self.norm(x) + return x + + +def conv3x3(in_planes, out_planes, stride=1): + """Convolution layer 3x3.""" + return nn.Conv2d( + in_planes, out_planes, kernel_size=3, stride=stride, padding=1, bias=False + ) + + +def conv1x1(in_planes, planes, stride=1): + """Convolution layer 1x1.""" + return nn.Conv2d(in_planes, planes, kernel_size=1, stride=stride, bias=False) + + +class BasicBlock(nn.Module): # pylint: disable=too-many-instance-attributes + """Basic Block for ResNet18.""" + + expansion = 1 + + def __init__( # pylint: disable=too-many-arguments + self, + inplanes, + planes, + stride=1, + scaler_rate=1, + downsample=None, + track=True, + scale=True, + ): + super().__init__() + self.conv1 = conv3x3(inplanes, planes, stride) + self.scaler = Scaler(scaler_rate, scale) + self.bn1 = MyBatchNorm(planes, track) + self.relu = nn.ReLU(inplace=True) + self.conv2 = conv3x3(planes, planes) + self.bn2 = MyBatchNorm(planes, track) + self.downsample = downsample + self.stride = stride + + def forward(self, x): + """BasicBlock forward.""" + residual = x + + output = self.conv1(x) + output = self.scaler(output) + output = self.bn1(output) + output = self.relu(output) + + output = self.conv2(output) + output = self.scaler(output) + output = self.bn2(output) + + if self.downsample is not None: + residual = self.downsample(x) + + output += residual + output = self.relu(output) + return output + + +class Resnet(nn.Module): # pylint: disable=too-many-instance-attributes + """Resnet model.""" + + def __init__( # pylint: disable=too-many-arguments + self, hidden_size, block, layers, num_classes, scaler_rate, track, scale + ): + super().__init__() + + self.inplanes = hidden_size[0] + self.norm_layer = MyBatchNorm + 
self.conv1 = nn.Conv2d( + 3, self.inplanes, kernel_size=3, stride=1, padding=1, bias=False + ) + self.scaler = Scaler(scaler_rate, scale) + self.bn1 = self.norm_layer(self.inplanes, track) + + self.relu = nn.ReLU(inplace=True) + # self.maxpool = nn.MaxPool2d(kernel_size=3, stride=2, padding=1) + + self.layer1 = self._make_layer( + block, + hidden_size[0], + layers[0], + scaler_rate=scaler_rate, + track=track, + scale=scale, + ) + self.layer2 = self._make_layer( + block, + hidden_size[1], + layers[1], + stride=2, + scaler_rate=scaler_rate, + track=track, + scale=scale, + ) + self.layer3 = self._make_layer( + block, + hidden_size[2], + layers[2], + stride=2, + scaler_rate=scaler_rate, + track=track, + scale=scale, + ) + self.layer4 = self._make_layer( + block, + hidden_size[3], + layers[3], + stride=2, + scaler_rate=scaler_rate, + track=track, + scale=scale, + ) + self.fc_layer = nn.Linear(hidden_size[3] * block.expansion, num_classes) + self.scala = nn.AdaptiveAvgPool2d(1) + + for module in self.modules(): + if isinstance(module, nn.Conv2d): + nn.init.kaiming_normal_( + module.weight, mode="fan_out", nonlinearity="relu" + ) + elif isinstance(module, (nn.BatchNorm2d, nn.GroupNorm)): + nn.init.constant_(module.weight, 1) + nn.init.constant_(module.bias, 0) + + def _make_layer( # pylint: disable=too-many-arguments + self, block, planes, layers, stride=1, scaler_rate=1, track=True, scale=True + ): + """Create a block with layers. + + Args: + block (class): block type + planes (int): output channels = planes * expansion + layers (int): layer num in the block + stride (int): the first layer stride in the block. + scaler_rate (float): for scaler module + track (bool): static batch normalization + scale (bool): for scaler module. + """ + norm_layer = self.norm_layer + downsample = None + if stride != 1 or self.inplanes != planes * block.expansion: + downsample = nn.Sequential( + conv1x1(self.inplanes, planes * block.expansion, stride), + norm_layer(planes * block.expansion, track), + ) + layer = [] + layer.append( + block( + self.inplanes, + planes, + stride=stride, + scaler_rate=scaler_rate, + downsample=downsample, + track=track, + scale=scale, + ) + ) + self.inplanes = planes * block.expansion + for _i in range(1, layers): + layer.append( + block( + self.inplanes, + planes, + scaler_rate=scaler_rate, + track=track, + scale=scale, + ) + ) + + return nn.Sequential(*layer) + + def forward(self, x): + """Resnet forward.""" + x = self.conv1(x) + x = self.scaler(x) + x = self.bn1(x) + x = self.relu(x) + # x = self.maxpool(x) + + x = self.layer1(x) + x = self.layer2(x) + x = self.layer3(x) + x = self.layer4(x) + out = self.scala(x).view(x.size(0), -1) + out = self.fc_layer(out) + + return [out] + + +def resnet18(n_blocks=4, track=False, scale=True, num_classes=100): + """Create resnet18 for HeteroFL. 
+ + Parameters + ---------- + n_blocks: int + corresponds to width (divided by 4) + track: bool + static batch normalization + scale: bool + scaler module + num_classes: int + # of labels + + Returns + ------- + Callable [ [List[int],nn.Module,List[int],int,float,bool,bool], nn.Module] + """ + # width pruning ratio : (0.25, 0.50, 0.75, 0.10) + model_rate = n_blocks / 4 + classes_size = num_classes + + hidden_size = [64, 128, 256, 512] + hidden_size = [int(np.ceil(model_rate * x)) for x in hidden_size] + + scaler_rate = model_rate + + return Resnet( + hidden_size, + BasicBlock, + [2, 2, 2, 2], + num_classes=classes_size, + scaler_rate=scaler_rate, + track=track, + scale=scale, + ) + + +# if __name__ == "__main__": +# from ptflops import get_model_complexity_info + +# model = resnet18(100, 1.0) + +# with torch.cuda.device(0): +# macs, params = get_model_complexity_info( +# model, +# (3, 32, 32), +# as_strings=True, +# print_per_layer_stat=False, +# verbose=True, +# units="MMac", +# ) + +# print("{:<30} {:<8}".format("Computational complexity: ", macs)) +# print("{:<30} {:<8}".format("Number of parameters: ", params)) diff --git a/baselines/depthfl/depthfl/server.py b/baselines/depthfl/depthfl/server.py new file mode 100644 index 000000000000..dc99ae2fc5de --- /dev/null +++ b/baselines/depthfl/depthfl/server.py @@ -0,0 +1,209 @@ +"""Server for DepthFL baseline.""" + +import copy +from collections import OrderedDict +from logging import DEBUG, INFO +from typing import Callable, Dict, List, Optional, Tuple, Union + +import torch +from flwr.common import FitRes, Parameters, Scalar, parameters_to_ndarrays +from flwr.common.logger import log +from flwr.common.typing import NDArrays +from flwr.server.client_proxy import ClientProxy +from flwr.server.server import Server, fit_clients +from hydra.utils import instantiate +from omegaconf import DictConfig +from torch.utils.data import DataLoader + +from depthfl.client import prune +from depthfl.models import test, test_sbn +from depthfl.strategy import aggregate_fit_depthfl +from depthfl.strategy_hetero import aggregate_fit_hetero + +FitResultsAndFailures = Tuple[ + List[Tuple[ClientProxy, FitRes]], + List[Union[Tuple[ClientProxy, FitRes], BaseException]], +] + + +def gen_evaluate_fn( + testloader: DataLoader, + device: torch.device, + model: DictConfig, +) -> Callable[ + [int, NDArrays, Dict[str, Scalar]], + Tuple[float, Dict[str, Union[Scalar, List[float]]]], +]: + """Generate the function for centralized evaluation. + + Parameters + ---------- + testloader : DataLoader + The dataloader to test the model with. + device : torch.device + The device to test the model on. + model : DictConfig + model configuration for instantiating + + Returns + ------- + Callable[ [int, NDArrays, Dict[str, Scalar]], + Optional[Tuple[float, Dict[str, Scalar]]] ] + The centralized evaluation function. 
+ """ + + def evaluate( + server_round: int, parameters_ndarrays: NDArrays, config: Dict[str, Scalar] + ) -> Tuple[float, Dict[str, Union[Scalar, List[float]]]]: + # pylint: disable=unused-argument + """Use the entire CIFAR-100 test set for evaluation.""" + net = instantiate(model) + params_dict = zip(net.state_dict().keys(), parameters_ndarrays) + state_dict = OrderedDict({k: torch.tensor(v) for k, v in params_dict}) + net.load_state_dict(state_dict, strict=True) + net.to(device) + + loss, accuracy, accuracy_single = test(net, testloader, device=device) + # return statistics + return loss, {"accuracy": accuracy, "accuracy_single": accuracy_single} + + return evaluate + + +def gen_evaluate_fn_hetero( + trainloaders: List[DataLoader], + testloader: DataLoader, + device: torch.device, + model_cfg: DictConfig, +) -> Callable[ + [int, NDArrays, Dict[str, Scalar]], + Tuple[float, Dict[str, Union[Scalar, List[float]]]], +]: + """Generate the function for centralized evaluation. + + Parameters + ---------- + trainloaders : List[DataLoader] + The list of dataloaders to calculate statistics for BN + testloader : DataLoader + The dataloader to test the model with. + device : torch.device + The device to test the model on. + model_cfg : DictConfig + model configuration for instantiating + + Returns + ------- + Callable[ [int, NDArrays, Dict[str, Scalar]], + Optional[Tuple[float, Dict[str, Scalar]]] ] + The centralized evaluation function. + """ + + def evaluate( # pylint: disable=too-many-locals + server_round: int, parameters_ndarrays: NDArrays, config: Dict[str, Scalar] + ) -> Tuple[float, Dict[str, Union[Scalar, List[float]]]]: + # pylint: disable=unused-argument + """Use the entire CIFAR-100 test set for evaluation.""" + # test per 50 rounds (sbn takes a long time) + if server_round % 50 != 0: + return 0.0, {"accuracy": 0.0, "accuracy_single": [0] * 4} + + # models with different width + models = [] + for i in range(4): + model_tmp = copy.deepcopy(model_cfg) + model_tmp.n_blocks = i + 1 + models.append(model_tmp) + + # load global parameters + param_idx_lst = [] + nets = [] + net_tmp = instantiate(models[-1], track=False) + for model in models: + net = instantiate(model, track=True, scale=False) + nets.append(net) + param_idx = {} + for k in net_tmp.state_dict().keys(): + param_idx[k] = [ + torch.arange(size) for size in net.state_dict()[k].shape + ] + param_idx_lst.append(param_idx) + + params_dict = zip(net_tmp.state_dict().keys(), parameters_ndarrays) + state_dict = OrderedDict({k: torch.tensor(v) for k, v in params_dict}) + + for net, param_idx in zip(nets, param_idx_lst): + net.load_state_dict(prune(state_dict, param_idx), strict=False) + net.to(device) + net.train() + + loss, accuracy, accuracy_single = test_sbn( + nets, trainloaders, testloader, device=device + ) + # return statistics + return loss, {"accuracy": accuracy, "accuracy_single": accuracy_single} + + return evaluate + + +class ServerFedDyn(Server): + """Sever for FedDyn.""" + + def fit_round( + self, + server_round: int, + timeout: Optional[float], + ) -> Optional[ + Tuple[Optional[Parameters], Dict[str, Scalar], FitResultsAndFailures] + ]: + """Perform a single round.""" + # Get clients and their respective instructions from strategy + client_instructions = self.strategy.configure_fit( + server_round=server_round, + parameters=self.parameters, + client_manager=self._client_manager, + ) + + if not client_instructions: + log(INFO, "fit_round %s: no clients selected, cancel", server_round) + return None + log( + DEBUG, + 
"fit_round %s: strategy sampled %s clients (out of %s)", + server_round, + len(client_instructions), + self._client_manager.num_available(), + ) + + # Collect `fit` results from all clients participating in this round + results, failures = fit_clients( + client_instructions=client_instructions, + max_workers=self.max_workers, + timeout=timeout, + ) + log( + DEBUG, + "fit_round %s received %s results and %s failures", + server_round, + len(results), + len(failures), + ) + + if "HeteroFL" in str(type(self.strategy)): + aggregate_fit = aggregate_fit_hetero + else: + aggregate_fit = aggregate_fit_depthfl + + aggregated_result: Tuple[ + Optional[Parameters], + Dict[str, Scalar], + ] = aggregate_fit( + self.strategy, + server_round, + results, + failures, + parameters_to_ndarrays(self.parameters), + ) + + parameters_aggregated, metrics_aggregated = aggregated_result + return parameters_aggregated, metrics_aggregated, (results, failures) diff --git a/baselines/depthfl/depthfl/strategy.py b/baselines/depthfl/depthfl/strategy.py new file mode 100644 index 000000000000..3414c28c4518 --- /dev/null +++ b/baselines/depthfl/depthfl/strategy.py @@ -0,0 +1,136 @@ +"""Strategy for DepthFL.""" + +import os +import pickle +from logging import WARNING +from typing import Dict, List, Optional, Tuple, Union + +import numpy as np +import torch +import torch.nn as nn +from flwr.common import ( + NDArrays, + Parameters, + Scalar, + ndarrays_to_parameters, + parameters_to_ndarrays, +) +from flwr.common.logger import log +from flwr.common.typing import FitRes +from flwr.server.client_proxy import ClientProxy +from flwr.server.strategy import FedAvg +from omegaconf import DictConfig + + +class FedDyn(FedAvg): + """Applying dynamic regularization in FedDyn paper.""" + + def __init__(self, cfg: DictConfig, net: nn.Module, *args, **kwargs): + self.cfg = cfg + self.h_variate = [np.zeros(v.shape) for (k, v) in net.state_dict().items()] + + # tagging real weights / biases + self.is_weight = [] + for k in net.state_dict().keys(): + if "weight" not in k and "bias" not in k: + self.is_weight.append(False) + else: + self.is_weight.append(True) + + # prev_grads file for each client + prev_grads = [ + {k: torch.zeros(v.numel()) for (k, v) in net.named_parameters()} + ] * cfg.num_clients + + if not os.path.exists("prev_grads"): + os.makedirs("prev_grads") + + for idx in range(cfg.num_clients): + with open(f"prev_grads/client_{idx}", "wb") as prev_grads_file: + pickle.dump(prev_grads[idx], prev_grads_file) + + super().__init__(*args, **kwargs) + + +def aggregate_fit_depthfl( + strategy, + server_round: int, + results: List[Tuple[ClientProxy, FitRes]], + failures: List[Union[Tuple[ClientProxy, FitRes], BaseException]], + origin: NDArrays, +) -> Tuple[Optional[Parameters], Dict[str, Scalar]]: + """Aggregate fit results using weighted average.""" + if not results: + return None, {} + # Do not aggregate if there are failures and failures are not accepted + if not strategy.accept_failures and failures: + return None, {} + + # Convert results + weights_results = [ + (parameters_to_ndarrays(fit_res.parameters), fit_res.num_examples) + for _, fit_res in results + ] + parameters_aggregated = ndarrays_to_parameters( + aggregate( + weights_results, + origin, + strategy.h_variate, + strategy.is_weight, + strategy.cfg, + ) + ) + + # Aggregate custom metrics if aggregation fn was provided + metrics_aggregated = {} + if strategy.fit_metrics_aggregation_fn: + fit_metrics = [(res.num_examples, res.metrics) for _, res in results] + 
metrics_aggregated = strategy.fit_metrics_aggregation_fn(fit_metrics) + elif server_round == 1: # Only log this warning once + log(WARNING, "No fit_metrics_aggregation_fn provided") + + return parameters_aggregated, metrics_aggregated + + +def aggregate( + results: List[Tuple[NDArrays, int]], + origin: NDArrays, + h_list: List, + is_weight: List, + cfg: DictConfig, +) -> NDArrays: + """Aggregate model parameters with different depths.""" + param_count = [0] * len(origin) + weights_sum = [np.zeros(v.shape) for v in origin] + + # summation & counting of parameters + for parameters, _ in results: + for i, layer in enumerate(parameters): + weights_sum[i] += layer + param_count[i] += 1 + + # update parameters + for i, weight in enumerate(weights_sum): + if param_count[i] > 0: + weight = weight / param_count[i] + # print(np.isscalar(weight)) + + # update h variable for FedDyn + h_list[i] = ( + h_list[i] + - cfg.fit_config.alpha + * param_count[i] + * (weight - origin[i]) + / cfg.num_clients + ) + + # applying h only for weights / biases + if is_weight[i] and cfg.fit_config.feddyn: + weights_sum[i] = weight - h_list[i] / cfg.fit_config.alpha + else: + weights_sum[i] = weight + + else: + weights_sum[i] = origin[i] + + return weights_sum diff --git a/baselines/depthfl/depthfl/strategy_hetero.py b/baselines/depthfl/depthfl/strategy_hetero.py new file mode 100644 index 000000000000..7544204cde2f --- /dev/null +++ b/baselines/depthfl/depthfl/strategy_hetero.py @@ -0,0 +1,136 @@ +"""Strategy for HeteroFL.""" + +import os +import pickle +from logging import WARNING +from typing import Dict, List, Optional, Tuple, Union + +import numpy as np +import torch +import torch.nn as nn +from flwr.common import ( + NDArrays, + Parameters, + Scalar, + ndarrays_to_parameters, + parameters_to_ndarrays, +) +from flwr.common.logger import log +from flwr.common.typing import FitRes +from flwr.server.client_proxy import ClientProxy +from flwr.server.strategy import FedAvg +from hydra.utils import instantiate +from omegaconf import DictConfig + + +class HeteroFL(FedAvg): + """Custom FedAvg for HeteroFL.""" + + def __init__(self, cfg: DictConfig, net: nn.Module, *args, **kwargs): + self.cfg = cfg + self.parameters = [np.zeros(v.shape) for (k, v) in net.state_dict().items()] + self.param_idx_lst = [] + + model = cfg.model + # store parameter shapes of different width + for i in range(4): + model.n_blocks = i + 1 + net_tmp = instantiate(model) + param_idx = [] + for k in net_tmp.state_dict().keys(): + param_idx.append( + [torch.arange(size) for size in net_tmp.state_dict()[k].shape] + ) + + # print(net_tmp.state_dict()['conv1.weight'].shape[0]) + self.param_idx_lst.append(param_idx) + + self.is_weight = [] + + # tagging real weights / biases + for k in net.state_dict().keys(): + if "num" in k: + self.is_weight.append(False) + else: + self.is_weight.append(True) + + # prev_grads file for each client + prev_grads = [ + {k: torch.zeros(v.numel()) for (k, v) in net.named_parameters()} + ] * cfg.num_clients + + if not os.path.exists("prev_grads"): + os.makedirs("prev_grads") + + for idx in range(cfg.num_clients): + with open(f"prev_grads/client_{idx}", "wb") as prev_grads_file: + pickle.dump(prev_grads[idx], prev_grads_file) + + super().__init__(*args, **kwargs) + + def aggregate_hetero( + self, results: List[Tuple[NDArrays, Union[bool, bytes, float, int, str]]] + ): + """Aggregate function for HeteroFL.""" + for i, params in enumerate(self.parameters): + count = np.zeros(params.shape) + tmp_v = np.zeros(params.shape) + if 
self.is_weight[i]: + for weights, cid in results: + if self.cfg.exclusive_learning: + cid = self.cfg.model_size * (self.cfg.num_clients // 4) - 1 + + tmp_v[ + torch.meshgrid( + self.param_idx_lst[cid // (self.cfg.num_clients // 4)][i] + ) + ] += weights[i] + count[ + torch.meshgrid( + self.param_idx_lst[cid // (self.cfg.num_clients // 4)][i] + ) + ] += 1 + tmp_v[count > 0] = np.divide(tmp_v[count > 0], count[count > 0]) + params[count > 0] = tmp_v[count > 0] + + else: + for weights, _ in results: + tmp_v += weights[i] + count += 1 + tmp_v = np.divide(tmp_v, count) + params = tmp_v + + +def aggregate_fit_hetero( + strategy, + server_round: int, + results: List[Tuple[ClientProxy, FitRes]], + failures: List[Union[Tuple[ClientProxy, FitRes], BaseException]], + origin: NDArrays, +) -> Tuple[Optional[Parameters], Dict[str, Scalar]]: + """Aggregate fit results using weighted average.""" + if not results: + return None, {} + # Do not aggregate if there are failures and failures are not accepted + if not strategy.accept_failures and failures: + return None, {} + + # Convert results + weights_results = [ + (parameters_to_ndarrays(fit_res.parameters), fit_res.metrics["cid"]) + for _, fit_res in results + ] + + strategy.parameters = origin + strategy.aggregate_hetero(weights_results) + parameters_aggregated = ndarrays_to_parameters(strategy.parameters) + + # Aggregate custom metrics if aggregation fn was provided + metrics_aggregated = {} + if strategy.fit_metrics_aggregation_fn: + fit_metrics = [(res.num_examples, res.metrics) for _, res in results] + metrics_aggregated = strategy.fit_metrics_aggregation_fn(fit_metrics) + elif server_round == 1: # Only log this warning once + log(WARNING, "No fit_metrics_aggregation_fn provided") + + return parameters_aggregated, metrics_aggregated diff --git a/baselines/depthfl/depthfl/utils.py b/baselines/depthfl/depthfl/utils.py new file mode 100644 index 000000000000..fad2afcad4be --- /dev/null +++ b/baselines/depthfl/depthfl/utils.py @@ -0,0 +1,66 @@ +"""Contains utility functions for CNN FL on MNIST.""" + +import pickle +from pathlib import Path +from secrets import token_hex +from typing import Dict, Union + +from flwr.server.history import History + + +def save_results_as_pickle( + history: History, + file_path: Union[str, Path], + extra_results: Dict, + default_filename: str = "results.pkl", +) -> None: + """Save results from simulation to pickle. + + Parameters + ---------- + history: History + History returned by start_simulation. + file_path: Union[str, Path] + Path to file to create and store both history and extra_results. + If path is a directory, the default_filename will be used. + path doesn't exist, it will be created. If file exists, a + randomly generated suffix will be added to the file name. This + is done to avoid overwritting results. + extra_results : Dict + A dictionary containing additional results you would like + to be saved to disk. Default: {} (an empty dictionary) + default_filename: Optional[str] + File used by default if file_path points to a directory instead + to a file. Default: "results.pkl" + """ + path = Path(file_path) + + # ensure path exists + path.mkdir(exist_ok=True, parents=True) + + def _add_random_suffix(path_: Path): + """Add a randomly generated suffix to the file name.""" + print(f"File `{path_}` exists! 
") + suffix = token_hex(4) + print(f"New results to be saved with suffix: {suffix}") + return path_.parent / (path_.stem + "_" + suffix + ".pkl") + + def _complete_path_with_default_name(path_: Path): + """Append the default file name to the path.""" + print("Using default filename") + return path_ / default_filename + + if path.is_dir(): + path = _complete_path_with_default_name(path) + + if path.is_file(): + # file exists already + path = _add_random_suffix(path) + + print(f"Results will be saved into: {path}") + + data = {"history": history, **extra_results} + + # save results to pickle + with open(str(path), "wb") as handle: + pickle.dump(data, handle, protocol=pickle.HIGHEST_PROTOCOL) diff --git a/baselines/depthfl/pyproject.toml b/baselines/depthfl/pyproject.toml new file mode 100644 index 000000000000..2f928c2d3553 --- /dev/null +++ b/baselines/depthfl/pyproject.toml @@ -0,0 +1,141 @@ +[build-system] +requires = ["poetry-core>=1.4.0"] +build-backend = "poetry.masonry.api" + +[tool.poetry] +name = "depthfl" # <----- Ensure it matches the name of your baseline directory containing all the source code +version = "1.0.0" +description = "DepthFL: Depthwise Federated Learning for Heterogeneous Clients" +license = "Apache-2.0" +authors = ["Minjae Kim "] +readme = "README.md" +homepage = "https://flower.dev" +repository = "https://github.com/adap/flower" +documentation = "https://flower.dev" +classifiers = [ + "Development Status :: 3 - Alpha", + "Intended Audience :: Developers", + "Intended Audience :: Science/Research", + "License :: OSI Approved :: Apache Software License", + "Operating System :: MacOS :: MacOS X", + "Operating System :: POSIX :: Linux", + "Programming Language :: Python", + "Programming Language :: Python :: 3", + "Programming Language :: Python :: 3 :: Only", + "Programming Language :: Python :: 3.8", + "Programming Language :: Python :: 3.9", + "Programming Language :: Python :: 3.10", + "Programming Language :: Python :: 3.11", + "Programming Language :: Python :: Implementation :: CPython", + "Topic :: Scientific/Engineering", + "Topic :: Scientific/Engineering :: Artificial Intelligence", + "Topic :: Scientific/Engineering :: Mathematics", + "Topic :: Software Development", + "Topic :: Software Development :: Libraries", + "Topic :: Software Development :: Libraries :: Python Modules", + "Typing :: Typed", +] + +[tool.poetry.dependencies] +python = ">=3.10.0, <3.11.0" +flwr = { extras = ["simulation"], version = "1.5.0" } +hydra-core = "1.3.2" # don't change this +matplotlib = "3.7.1" +torch = { url = "https://download.pytorch.org/whl/cu116/torch-1.13.1%2Bcu116-cp310-cp310-linux_x86_64.whl"} +torchvision = { url = "https://download.pytorch.org/whl/cu116/torchvision-0.14.1%2Bcu116-cp310-cp310-linux_x86_64.whl"} + + +[tool.poetry.dev-dependencies] +isort = "==5.11.5" +black = "==23.1.0" +docformatter = "==1.5.1" +mypy = "==1.4.1" +pylint = "==2.8.2" +flake8 = "==3.9.2" +pytest = "==6.2.4" +pytest-watch = "==4.2.0" +ruff = "==0.0.272" +types-requests = "==2.27.7" + +[tool.isort] +line_length = 88 +indent = " " +multi_line_output = 3 +include_trailing_comma = true +force_grid_wrap = 0 +use_parentheses = true + +[tool.black] +line-length = 88 +target-version = ["py38", "py39", "py310", "py311"] + +[tool.pytest.ini_options] +minversion = "6.2" +addopts = "-qq" +testpaths = [ + "flwr_baselines", +] + +[tool.mypy] +ignore_missing_imports = true +strict = false +plugins = "numpy.typing.mypy_plugin" + +[tool.pylint."MESSAGES CONTROL"] +disable = 
"bad-continuation,duplicate-code,too-few-public-methods,useless-import-alias" +good-names = "i,j,k,_,x,y,X,Y" +signature-mutators="hydra.main.main" + +[tool.pylint.typecheck] +generated-members="numpy.*, torch.*, tensorflow.*" + +[[tool.mypy.overrides]] +module = [ + "importlib.metadata.*", + "importlib_metadata.*", +] +follow_imports = "skip" +follow_imports_for_stubs = true +disallow_untyped_calls = false + +[[tool.mypy.overrides]] +module = "torch.*" +follow_imports = "skip" +follow_imports_for_stubs = true + +[tool.docformatter] +wrap-summaries = 88 +wrap-descriptions = 88 + +[tool.ruff] +target-version = "py38" +line-length = 88 +select = ["D", "E", "F", "W", "B", "ISC", "C4"] +fixable = ["D", "E", "F", "W", "B", "ISC", "C4"] +ignore = ["B024", "B027"] +exclude = [ + ".bzr", + ".direnv", + ".eggs", + ".git", + ".hg", + ".mypy_cache", + ".nox", + ".pants.d", + ".pytype", + ".ruff_cache", + ".svn", + ".tox", + ".venv", + "__pypackages__", + "_build", + "buck-out", + "build", + "dist", + "node_modules", + "venv", + "proto", +] + +[tool.ruff.pydocstyle] +convention = "numpy" diff --git a/baselines/fedper/LICENSE b/baselines/fedper/LICENSE new file mode 100644 index 000000000000..d64569567334 --- /dev/null +++ b/baselines/fedper/LICENSE @@ -0,0 +1,202 @@ + + Apache License + Version 2.0, January 2004 + http://www.apache.org/licenses/ + + TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION + + 1. Definitions. + + "License" shall mean the terms and conditions for use, reproduction, + and distribution as defined by Sections 1 through 9 of this document. + + "Licensor" shall mean the copyright owner or entity authorized by + the copyright owner that is granting the License. + + "Legal Entity" shall mean the union of the acting entity and all + other entities that control, are controlled by, or are under common + control with that entity. For the purposes of this definition, + "control" means (i) the power, direct or indirect, to cause the + direction or management of such entity, whether by contract or + otherwise, or (ii) ownership of fifty percent (50%) or more of the + outstanding shares, or (iii) beneficial ownership of such entity. + + "You" (or "Your") shall mean an individual or Legal Entity + exercising permissions granted by this License. + + "Source" form shall mean the preferred form for making modifications, + including but not limited to software source code, documentation + source, and configuration files. + + "Object" form shall mean any form resulting from mechanical + transformation or translation of a Source form, including but + not limited to compiled object code, generated documentation, + and conversions to other media types. + + "Work" shall mean the work of authorship, whether in Source or + Object form, made available under the License, as indicated by a + copyright notice that is included in or attached to the work + (an example is provided in the Appendix below). + + "Derivative Works" shall mean any work, whether in Source or Object + form, that is based on (or derived from) the Work and for which the + editorial revisions, annotations, elaborations, or other modifications + represent, as a whole, an original work of authorship. For the purposes + of this License, Derivative Works shall not include works that remain + separable from, or merely link (or bind by name) to the interfaces of, + the Work and Derivative Works thereof. 
+ + "Contribution" shall mean any work of authorship, including + the original version of the Work and any modifications or additions + to that Work or Derivative Works thereof, that is intentionally + submitted to Licensor for inclusion in the Work by the copyright owner + or by an individual or Legal Entity authorized to submit on behalf of + the copyright owner. For the purposes of this definition, "submitted" + means any form of electronic, verbal, or written communication sent + to the Licensor or its representatives, including but not limited to + communication on electronic mailing lists, source code control systems, + and issue tracking systems that are managed by, or on behalf of, the + Licensor for the purpose of discussing and improving the Work, but + excluding communication that is conspicuously marked or otherwise + designated in writing by the copyright owner as "Not a Contribution." + + "Contributor" shall mean Licensor and any individual or Legal Entity + on behalf of whom a Contribution has been received by Licensor and + subsequently incorporated within the Work. + + 2. Grant of Copyright License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + copyright license to reproduce, prepare Derivative Works of, + publicly display, publicly perform, sublicense, and distribute the + Work and such Derivative Works in Source or Object form. + + 3. Grant of Patent License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + (except as stated in this section) patent license to make, have made, + use, offer to sell, sell, import, and otherwise transfer the Work, + where such license applies only to those patent claims licensable + by such Contributor that are necessarily infringed by their + Contribution(s) alone or by combination of their Contribution(s) + with the Work to which such Contribution(s) was submitted. If You + institute patent litigation against any entity (including a + cross-claim or counterclaim in a lawsuit) alleging that the Work + or a Contribution incorporated within the Work constitutes direct + or contributory patent infringement, then any patent licenses + granted to You under this License for that Work shall terminate + as of the date such litigation is filed. + + 4. Redistribution. 
You may reproduce and distribute copies of the + Work or Derivative Works thereof in any medium, with or without + modifications, and in Source or Object form, provided that You + meet the following conditions: + + (a) You must give any other recipients of the Work or + Derivative Works a copy of this License; and + + (b) You must cause any modified files to carry prominent notices + stating that You changed the files; and + + (c) You must retain, in the Source form of any Derivative Works + that You distribute, all copyright, patent, trademark, and + attribution notices from the Source form of the Work, + excluding those notices that do not pertain to any part of + the Derivative Works; and + + (d) If the Work includes a "NOTICE" text file as part of its + distribution, then any Derivative Works that You distribute must + include a readable copy of the attribution notices contained + within such NOTICE file, excluding those notices that do not + pertain to any part of the Derivative Works, in at least one + of the following places: within a NOTICE text file distributed + as part of the Derivative Works; within the Source form or + documentation, if provided along with the Derivative Works; or, + within a display generated by the Derivative Works, if and + wherever such third-party notices normally appear. The contents + of the NOTICE file are for informational purposes only and + do not modify the License. You may add Your own attribution + notices within Derivative Works that You distribute, alongside + or as an addendum to the NOTICE text from the Work, provided + that such additional attribution notices cannot be construed + as modifying the License. + + You may add Your own copyright statement to Your modifications and + may provide additional or different license terms and conditions + for use, reproduction, or distribution of Your modifications, or + for any such Derivative Works as a whole, provided Your use, + reproduction, and distribution of the Work otherwise complies with + the conditions stated in this License. + + 5. Submission of Contributions. Unless You explicitly state otherwise, + any Contribution intentionally submitted for inclusion in the Work + by You to the Licensor shall be under the terms and conditions of + this License, without any additional terms or conditions. + Notwithstanding the above, nothing herein shall supersede or modify + the terms of any separate license agreement you may have executed + with Licensor regarding such Contributions. + + 6. Trademarks. This License does not grant permission to use the trade + names, trademarks, service marks, or product names of the Licensor, + except as required for reasonable and customary use in describing the + origin of the Work and reproducing the content of the NOTICE file. + + 7. Disclaimer of Warranty. Unless required by applicable law or + agreed to in writing, Licensor provides the Work (and each + Contributor provides its Contributions) on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or + implied, including, without limitation, any warranties or conditions + of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A + PARTICULAR PURPOSE. You are solely responsible for determining the + appropriateness of using or redistributing the Work and assume any + risks associated with Your exercise of permissions under this License. + + 8. Limitation of Liability. 
In no event and under no legal theory, + whether in tort (including negligence), contract, or otherwise, + unless required by applicable law (such as deliberate and grossly + negligent acts) or agreed to in writing, shall any Contributor be + liable to You for damages, including any direct, indirect, special, + incidental, or consequential damages of any character arising as a + result of this License or out of the use or inability to use the + Work (including but not limited to damages for loss of goodwill, + work stoppage, computer failure or malfunction, or any and all + other commercial damages or losses), even if such Contributor + has been advised of the possibility of such damages. + + 9. Accepting Warranty or Additional Liability. While redistributing + the Work or Derivative Works thereof, You may choose to offer, + and charge a fee for, acceptance of support, warranty, indemnity, + or other liability obligations and/or rights consistent with this + License. However, in accepting such obligations, You may act only + on Your own behalf and on Your sole responsibility, not on behalf + of any other Contributor, and only if You agree to indemnify, + defend, and hold each Contributor harmless for any liability + incurred by, or claims asserted against, such Contributor by reason + of your accepting any such warranty or additional liability. + + END OF TERMS AND CONDITIONS + + APPENDIX: How to apply the Apache License to your work. + + To apply the Apache License to your work, attach the following + boilerplate notice, with the fields enclosed by brackets "[]" + replaced with your own identifying information. (Don't include + the brackets!) The text should be enclosed in the appropriate + comment syntax for the file format. We also recommend that a + file or class name and description of purpose be included on the + same "printed page" as the copyright notice for easier + identification within third-party archives. + + Copyright [yyyy] [name of copyright owner] + + Licensed under the Apache License, Version 2.0 (the "License"); + you may not use this file except in compliance with the License. + You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + + Unless required by applicable law or agreed to in writing, software + distributed under the License is distributed on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + See the License for the specific language governing permissions and + limitations under the License. diff --git a/baselines/fedper/README.md b/baselines/fedper/README.md new file mode 100644 index 000000000000..157bc22d2da5 --- /dev/null +++ b/baselines/fedper/README.md @@ -0,0 +1,152 @@ +--- +title: Federated Learning with Personalization Layers +url: https://arxiv.org/abs/1912.00818 +labels: [system heterogeneity, image classification, personalization, horizontal data partition] +dataset: [CIFAR-10, FLICKR-AES] +--- + +# Federated Learning with Personalization Layers + +> Note: If you use this baseline in your work, please remember to cite the original authors of the paper as well as the Flower paper. + +**Paper:** [arxiv.org/abs/1912.00818](https://arxiv.org/abs/1912.00818) + +**Authors:** Manoj Ghuhan Arivazhagan, Vinay Aggarwal, Aaditya Kumar Singh, and Sunav Choudhary + +**Abstract:** The emerging paradigm of federated learning strives to enable collaborative training of machine learning models on the network edge without centrally aggregating raw data and hence, improving data privacy. 
This sharply deviates from traditional machine learning and necessitates design of algorithms robust to various sources of heterogeneity. Specifically, statistical heterogeneity of data across user devices can severely degrade performance of standard federated averaging for traditional machine learning applications like personalization with deep learning. This paper proposes `FedPer`, a base + personalization layer approach for federated training of deep feed forward neural networks, which can combat the ill-effects of statistical heterogeneity. We demonstrate effectiveness of `FedPer` for non-identical data partitions of CIFAR datasets and on a personalized image aesthetics dataset from Flickr.
+
+## About this baseline
+
+**What’s implemented:** The code in this directory replicates the experiments in _Federated Learning with Personalization Layers_ (Arivazhagan et al., 2019) for the CIFAR10 and FLICKR-AES datasets, which proposed the `FedPer` model. Specifically, it replicates the results found in figures 2, 4, 7, and 8 in their paper. __Note__ that there is a typo in the caption of Figure 4 in the article: it should be CIFAR10 and __not__ CIFAR100.
+
+**Datasets:** CIFAR10 from PyTorch's Torchvision and FLICKR-AES. FLICKR-AES was proposed as a dataset in _Personalized Image Aesthetics_ (Ren et al., 2017) and can be downloaded using a link provided on their [GitHub](https://github.com/alanspike/personalizedImageAesthetics). One must first download FLICKR-AES-001.zip (5.76GB), extract all of its contents, and place them in baseline/FedPer/datasets. To this location, also download the other 2 related files: (1) FLICKR-AES_image_labeled_by_each_worker.csv, and (2) FLICKR-AES_image_score.txt. Images are also scaled to 224x224 for both datasets. This is not explicitly stated in the paper but seems to boost performance. Also, for the FLICKR dataset, it is stated in the paper that they use data from clients with more than 60 and less than 290 rated images. This amounts to circa 60 clients, and we randomly select 30 out of these (as in the paper). Therefore, the results might differ somewhat, but only slightly. Since the pre-processing steps in the paper are somewhat obscure, the metric values in the plots below may differ slightly, but not the overall results and findings.
+
+```bash
+# These steps are not needed if you are only interested in CIFAR-10
+
+# Create the `datasets` directory if it doesn't exist already
+mkdir datasets
+
+# move/copy the downloaded FLICKR-AES-001.zip file to `datasets/`
+
+# unzip dataset to a directory named `flickr`
+cd datasets
+unzip FLICKR-AES-001.zip -d flickr
+
+# then move the .csv files inside flickr
+mv FLICKR-AES_image_labeled_by_each_worker.csv flickr
+mv FLICKR-AES_image_score.txt flickr
+```
+
+**Hardware Setup:** Experiments have been carried out on GPUs. Two different machines were used to run the experiments:
+
+- GeForce RTX 3080 16GB
+- GeForce RTX 4090 24GB
+
+It's worth mentioning that each client needs roughly 7.5GB of GPU memory. When training on powerful GPUs, one can reduce the fraction of a GPU allocated to each client in the configuration, e.g. by setting `num_gpus` to 0.33.
+
+> NOTE: One experiment carried out on a single GPU (RTX 4090) takes somewhere between 1-3h depending on dataset and model. Running ResNet34 takes approximately 10-15% longer than MobileNet-v1.
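For reference, the per-client GPU fraction mentioned above is passed to Flower's simulation engine via `client_resources`. The following is a minimal, self-contained sketch (assuming the standard `flwr.simulation.start_simulation` API from `flwr[simulation]`; `DummyClient` is a placeholder, not this baseline's actual client) of how a fractional `num_gpus` lets several clients share one GPU:

```python
"""Minimal sketch: sharing one GPU between simulated clients (illustrative only)."""
import flwr as fl
import numpy as np


class DummyClient(fl.client.NumPyClient):
    """Placeholder client; the baseline builds its real clients from the Hydra config."""

    def get_parameters(self, config):
        return [np.zeros(1)]

    def fit(self, parameters, config):
        return parameters, 1, {}

    def evaluate(self, parameters, config):
        return 0.0, 1, {}


if __name__ == "__main__":
    fl.simulation.start_simulation(
        client_fn=lambda cid: DummyClient(),
        num_clients=10,
        config=fl.server.ServerConfig(num_rounds=1),
        # A fractional `num_gpus` lets several clients run concurrently on one GPU,
        # e.g. 0.33 packs up to three clients per GPU.
        client_resources={"num_cpus": 4, "num_gpus": 0.33},
    )
```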
+
+**Contributors:** [William Lindskog](https://github.com/WilliamLindskog)
+
+
+## Experimental Setup
+
+**Task:** Image Classification
+
+**Model:** This directory implements 2 models:
+
+- ResNet34, which can be imported directly (after having installed the packages) from PyTorch, using `from torchvision.models import resnet34`
+- MobileNet-v1
+
+Please see how the models are implemented using the so-called model_manager and model_split classes, since FedPer splits a neural network into head and base layers. These classes are defined in the models.py file and are then used when building new models in the /implemented_models directory. Please extend them and add new models as you wish.
+
+**Dataset:** CIFAR10, FLICKR-AES. CIFAR10 will be partitioned based on the number of classes each client shall receive, e.g. 4 allocated classes could be [1, 3, 5, 9]. FLICKR-AES is an unbalanced dataset, so there we only apply random sampling.
+
+**Training Hyperparameters:** The hyperparameters can be found in the conf/base.yaml file, which is the configuration file for the main script.
+
+| Description | Default Value |
+| ----------- | ----- |
+| num_clients | 10 |
+| clients per round | 10 |
+| number of rounds | 50 |
+| client resources | {'num_cpus': 4, 'num_gpus': 1 }|
+| learning_rate | 0.01 |
+| batch_size | 128 |
+| optimizer | SGD |
+| algorithm | fedavg|
+
+**Stateful Clients:**
+In this baseline (FedPer), we must store the state of the local client head while aggregation of the body parameters happens at the server. Flower is currently working on making this possible natively, but for the time being we resort to storing the client _head_ state in a folder called client_states. We store the values after each fit and evaluate call carried out on each client, and load the state before executing these functions. Moreover, the state of a unique client is accessed using the client ID. A minimal sketch of this work-around is shown further below.
+
+> NOTE: This is a work-around so that the local head parameters are not reset before each fit and evaluate. Nevertheless, this may change with future releases.
+
+
+## Environment Setup
+
+To construct the Python environment follow these steps:
+
+```bash
+# Set Python 3.10
+pyenv local 3.10.6
+# Tell poetry to use python 3.10
+poetry env use 3.10.6
+
+# Install the base Poetry environment
+poetry install
+
+# Activate the environment
+poetry shell
+```
+
+## Running the Experiments
+```bash
+python -m fedper.main # this will run using the default settings in the `conf/base.yaml`
+
+# When running models for the flickr dataset, it is important to keep the batch size at 4 or lower, since some clients (for reproducing the experiment) will have very few examples of one class
+```
+
+While the config files contain a large number of settings, the ones below are the main ones you'd likely want to modify.
+```bash
+algorithm: fedavg, fedper # these are currently supported
+server_device: 'cuda:0', 'cpu'
+dataset.name: 'cifar10', 'flickr'
+num_classes: 10, 5 # respectively
+dataset.num_classes: 4, 8, 10 # for non-iid split assigning n num_classes to each client (these numbers for CIFAR10 experiments)
+model_name: mobile, resnet
+```
+
+To launch multiple runs, one can also resort to Hydra's multirun option; example commands follow the sketch below.
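The sketch below illustrates the *Stateful Clients* work-around described above: persisting each client's head parameters to disk, keyed by client ID, so they survive between `fit`/`evaluate` calls. The helper names, the standalone `head` module, and the `client_states/` layout are illustrative assumptions, not the baseline's exact implementation.

```python
"""Illustrative sketch of per-client head persistence (assumed names, not the real code)."""
from pathlib import Path

import torch
import torch.nn as nn

STATE_DIR = Path("client_states")


def save_head_state(client_id: int, head: nn.Module) -> None:
    """Store the personalization (head) layers after a fit/evaluate call."""
    STATE_DIR.mkdir(parents=True, exist_ok=True)
    torch.save(head.state_dict(), STATE_DIR / f"client_{client_id}.pt")


def load_head_state(client_id: int, head: nn.Module) -> None:
    """Restore the head before fit/evaluate, if this client has saved state already."""
    path = STATE_DIR / f"client_{client_id}.pt"
    if path.exists():
        head.load_state_dict(torch.load(path))


if __name__ == "__main__":
    # Tiny usage example with a throwaway linear head standing in for the model's head.
    head = nn.Linear(8, 2)
    save_head_state(client_id=0, head=head)
    load_head_state(client_id=0, head=head)
```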
+```bash
+# for CIFAR10
+python -m fedper.main --multirun --config-name cifar10 dataset.num_classes=4,8,10 model_name=resnet,mobile algorithm=fedper,fedavg model.num_head_layers=2,3
+
+# to repeat each run 5 times, one can also add
+python -m fedper.main --multirun --config-name cifar10 dataset.num_classes=4,8,10 model_name=resnet,mobile algorithm=fedper,fedavg model.num_head_layers=2,3 '+repeat_num=range(5)'
+```
+
+
+## Expected Results
+
+To reproduce the figures, make `fedper/run_figures.sh` executable and run it. By default, all experiments will be run:
+
+```bash
+# Make fedper/run_figures.sh executable
+chmod u+x fedper/run_figures.sh
+# Run the script
+bash fedper/run_figures.sh
+```
+
+Having run `run_figures.sh`, the expected results should look something like this:
+
+**MobileNet-v1 and ResNet-34 on CIFAR10**
+
+
+
+**MobileNet-v1 and ResNet-34 on CIFAR10 using varying size of head**
+
+
+
+**MobileNet-v1 and ResNet-34 on FLICKR-AES**
+
+
\ No newline at end of file
diff --git a/baselines/fedper/_static/mobile_plot_figure_2.png b/baselines/fedper/_static/mobile_plot_figure_2.png
new file mode 100644
index 000000000000..b485b850fb39
Binary files /dev/null and b/baselines/fedper/_static/mobile_plot_figure_2.png differ
diff --git a/baselines/fedper/_static/mobile_plot_figure_flickr.png b/baselines/fedper/_static/mobile_plot_figure_flickr.png
new file mode 100644
index 000000000000..76e99927df36
Binary files /dev/null and b/baselines/fedper/_static/mobile_plot_figure_flickr.png differ
diff --git a/baselines/fedper/_static/mobile_plot_figure_num_head.png b/baselines/fedper/_static/mobile_plot_figure_num_head.png
new file mode 100644
index 000000000000..9dcb9f0a3f33
Binary files /dev/null and b/baselines/fedper/_static/mobile_plot_figure_num_head.png differ
diff --git a/baselines/fedper/_static/resnet_plot_figure_2.png b/baselines/fedper/_static/resnet_plot_figure_2.png
new file mode 100644
index 000000000000..14e3a7145a23
Binary files /dev/null and b/baselines/fedper/_static/resnet_plot_figure_2.png differ
diff --git a/baselines/fedper/_static/resnet_plot_figure_flickr.png b/baselines/fedper/_static/resnet_plot_figure_flickr.png
new file mode 100644
index 000000000000..4e6ba71489b7
Binary files /dev/null and b/baselines/fedper/_static/resnet_plot_figure_flickr.png differ
diff --git a/baselines/fedper/_static/resnet_plot_figure_num_head.png b/baselines/fedper/_static/resnet_plot_figure_num_head.png
new file mode 100644
index 000000000000..03c6ac88b84a
Binary files /dev/null and b/baselines/fedper/_static/resnet_plot_figure_num_head.png differ
diff --git a/baselines/fedper/fedper/__init__.py b/baselines/fedper/fedper/__init__.py
new file mode 100644
index 000000000000..a5e567b59135
--- /dev/null
+++ b/baselines/fedper/fedper/__init__.py
@@ -0,0 +1 @@
+"""Template baseline package."""
diff --git a/baselines/fedper/fedper/client.py b/baselines/fedper/fedper/client.py
new file mode 100644
index 000000000000..83babbd9613f
--- /dev/null
+++ b/baselines/fedper/fedper/client.py
@@ -0,0 +1,353 @@
+"""Client implementation - can call FedPer and FedAvg clients."""
+import pickle
+from collections import OrderedDict, defaultdict
+from pathlib import Path
+from typing import Any, Callable, Dict, List, Tuple, Type, Union
+
+import numpy as np
+import torch
+from flwr.client import NumPyClient
+from flwr.common import NDArrays, Scalar
+from omegaconf import DictConfig
+from torch.utils.data import DataLoader, Subset, random_split
+from torchvision import transforms
+from torchvision.datasets import
ImageFolder + +from fedper.constants import MEAN, STD +from fedper.dataset_preparation import call_dataset +from fedper.implemented_models.mobile_model import MobileNetModelManager +from fedper.implemented_models.resnet_model import ResNetModelManager + +PROJECT_DIR = Path(__file__).parent.parent.absolute() + + +class ClientDataloaders: + """Client dataloaders.""" + + def __init__( + self, + trainloader: DataLoader, + testloader: DataLoader, + ) -> None: + """Initialize the client dataloaders.""" + self.trainloader = trainloader + self.testloader = testloader + + +class ClientEssentials: + """Client essentials.""" + + def __init__( + self, + client_id: str, + client_state_save_path: str = "", + ) -> None: + """Set client state save path and client ID.""" + self.client_id = int(client_id) + self.client_state_save_path = ( + (client_state_save_path + f"/client_{self.client_id}") + if client_state_save_path != "" + else None + ) + + +class BaseClient(NumPyClient): + """Implementation of Federated Averaging (FedAvg) Client.""" + + def __init__( + self, + data_loaders: ClientDataloaders, + config: DictConfig, + client_essentials: ClientEssentials, + model_manager_class: Union[ + Type[MobileNetModelManager], Type[ResNetModelManager] + ], + ): + """Initialize client attributes. + + Args: + config: dictionary containing the client configurations. + client_id: id of the client. + model_manager_class: class to be used as the model manager. + """ + super().__init__() + + self.train_id = 1 + self.test_id = 1 + self.client_id = int(client_essentials.client_id) + self.client_state_save_path = client_essentials.client_state_save_path + self.hist: Dict[str, Dict[str, Any]] = defaultdict(dict) + self.num_epochs: int = config["num_epochs"] + self.model_manager = model_manager_class( + client_id=self.client_id, + config=config, + trainloader=data_loaders.trainloader, + testloader=data_loaders.testloader, + client_save_path=self.client_state_save_path, + learning_rate=config["learning_rate"], + ) + + def get_parameters(self, config: Dict[str, Scalar]) -> NDArrays: + """Return the current local model parameters.""" + return self.model_manager.model.get_parameters() + + def set_parameters( + self, parameters: List[np.ndarray], evaluate: bool = False + ) -> None: + """Set the local model parameters to the received parameters. + + Args: + parameters: parameters to set the model to. + """ + _ = evaluate + model_keys = [ + k + for k in self.model_manager.model.state_dict().keys() + if k.startswith("_body") or k.startswith("_head") + ] + params_dict = zip(model_keys, parameters) + + state_dict = OrderedDict({k: torch.tensor(v) for k, v in params_dict}) + + self.model_manager.model.set_parameters(state_dict) + + def perform_train( + self, + ) -> Dict[str, Union[List[Dict[str, float]], int, float]]: + """Perform local training to the whole model. + + Returns + ------- + Dict with the train metrics. + """ + epochs = self.num_epochs + + self.model_manager.model.enable_body() + self.model_manager.model.enable_head() + + return self.model_manager.train( + epochs=epochs, + ) + + def fit( + self, parameters: NDArrays, config: Dict[str, Scalar] + ) -> Tuple[NDArrays, int, Dict[str, Union[bool, bytes, float, int, str]]]: + """Train the provided parameters using the locally held dataset. + + Args: + parameters: The current (global) model parameters. + config: configuration parameters for training sent by the server. 
+ + Returns + ------- + Tuple containing the locally updated model parameters, \ + the number of examples used for training and \ + the training metrics. + """ + self.set_parameters(parameters) + + train_results = self.perform_train() + + # Update train history + self.hist[str(self.train_id)] = { + **self.hist[str(self.train_id)], + "trn": train_results, + } + print("<------- TRAIN RESULTS -------> :", train_results) + + self.train_id += 1 + + return self.get_parameters(config), self.model_manager.train_dataset_size(), {} + + def evaluate( + self, parameters: NDArrays, config: Dict[str, Scalar] + ) -> Tuple[float, int, Dict[str, Union[bool, bytes, float, int, str]]]: + """Evaluate the provided global parameters using the locally held dataset. + + Args: + parameters: The current (global) model parameters. + config: configuration parameters for training sent by the server. + + Returns + ------- + Tuple containing the test loss, \ + the number of examples used for evaluation and \ + the evaluation metrics. + """ + self.set_parameters(parameters, evaluate=True) + + # Test the model + tst_results = self.model_manager.test() + print("<------- TEST RESULTS -------> :", tst_results) + + # Update test history + self.hist[str(self.test_id)] = { + **self.hist[str(self.test_id)], + "tst": tst_results, + } + self.test_id += 1 + + return ( + tst_results.get("loss", 0.0), + self.model_manager.test_dataset_size(), + {k: v for k, v in tst_results.items() if not isinstance(v, (dict, list))}, + ) + + +class FedPerClient(BaseClient): + """Implementation of Federated Personalization (FedPer) Client.""" + + def get_parameters(self, config: Dict[str, Scalar]) -> NDArrays: + """Return the current local body parameters.""" + return [ + val.cpu().numpy() + for _, val in self.model_manager.model.body.state_dict().items() + ] + + def set_parameters(self, parameters: List[np.ndarray], evaluate=False) -> None: + """Set the local body parameters to the received parameters. + + Args: + parameters: parameters to set the body to. + evaluate: whether the client is evaluating or not. + """ + model_keys = [ + k + for k in self.model_manager.model.state_dict().keys() + if k.startswith("_body") + ] + + if not evaluate: + # Only update client's local head if it hasn't trained yet + print("Setting head parameters to global head parameters.") + model_keys.extend( + [ + k + for k in self.model_manager.model.state_dict().keys() + if k.startswith("_head") + ] + ) + + params_dict = zip(model_keys, parameters) + + state_dict = OrderedDict({k: torch.tensor(v) for k, v in params_dict}) + + self.model_manager.model.set_parameters(state_dict) + + +def get_client_fn_simulation( + config: DictConfig, + client_state_save_path: str = "", +) -> Callable[[str], Union[FedPerClient, BaseClient]]: + """Generate the client function that creates the Flower Clients. + + Parameters + ---------- + model : DictConfig + The model configuration. + cleint_state_save_path : str + The path to save the client state. 
+ + Returns + ------- + Tuple[Callable[[str], FlowerClient], DataLoader] + A tuple containing the client function that creates Flower Clients and + the DataLoader that will be used for testing + """ + assert config.model_name.lower() in [ + "mobile", + "resnet", + ], f"Model {config.model.name} not implemented" + + # load dataset and clients' data indices + if config.dataset.name.lower() == "cifar10": + try: + partition_path = ( + PROJECT_DIR / "datasets" / config.dataset.name / "partition.pkl" + ) + print(f"Loading partition from {partition_path}") + with open(partition_path, "rb") as pickle_file: + partition = pickle.load(pickle_file) + data_indices: Dict[int, Dict[str, List[int]]] = partition["data_indices"] + except FileNotFoundError as error: + print(f"Partition not found at {partition_path}") + raise error + + # - you can define your own data transformation strategy here - + general_data_transform = transforms.Compose( + [ + transforms.Resize((224, 224)), + transforms.RandomCrop(224, padding=4), + # transforms.RandomHorizontalFlip(), + # transforms.ToTensor(), + transforms.Normalize( + MEAN[config.dataset.name], STD[config.dataset.name] + ), + ] + ) + # ------------------------------------------------------------ + + def client_fn(cid: str) -> BaseClient: + """Create a Flower client representing a single organization.""" + cid_use = int(cid) + if config.dataset.name.lower() == "flickr": + transform = transforms.Compose( + [ + transforms.Resize((224, 224)), + transforms.ToTensor(), + ] + ) + data_path = ( + PROJECT_DIR / "datasets" / config.dataset.name / "tmp" / f"client_{cid}" + ) + dataset = ImageFolder(root=data_path, transform=transform) + trainset, testset = random_split( + dataset, + [int(len(dataset) * 0.8), len(dataset) - int(len(dataset) * 0.8)], + ) + else: + dataset = call_dataset( + dataset_name=config.dataset.name, + root=PROJECT_DIR / "datasets" / config.dataset.name, + general_data_transform=general_data_transform, + ) + + trainset = Subset(dataset, indices=[]) + testset = Subset(dataset, indices=[]) + trainset.indices = data_indices[cid_use]["train"] + testset.indices = data_indices[cid_use]["test"] + + # Create the train loader + trainloader = DataLoader(trainset, config.batch_size, shuffle=False) + # Create the test loader + testloader = DataLoader(testset, config.batch_size) + + manager: Union[ + Type[MobileNetModelManager], Type[ResNetModelManager] + ] = MobileNetModelManager + if config.model_name.lower() == "resnet": + manager = ResNetModelManager + elif config.model_name.lower() == "mobile": + manager = MobileNetModelManager + else: + raise NotImplementedError("Model not implemented, check name.") + client_data_loaders = ClientDataloaders(trainloader, testloader) + client_essentials = ClientEssentials( + client_id=cid, + client_state_save_path=client_state_save_path, + ) + if client_state_save_path != "": + return FedPerClient( + data_loaders=client_data_loaders, + client_essentials=client_essentials, + config=config, + model_manager_class=manager, + ) + return BaseClient( + data_loaders=client_data_loaders, + client_essentials=client_essentials, + config=config, + model_manager_class=manager, + ) + + return client_fn diff --git a/baselines/fedper/fedper/conf/base.yaml b/baselines/fedper/fedper/conf/base.yaml new file mode 100644 index 000000000000..b0b9778d4682 --- /dev/null +++ b/baselines/fedper/fedper/conf/base.yaml @@ -0,0 +1,44 @@ +--- +num_clients: 10 # total number of clients +num_epochs: 4 # number of local epochs +batch_size: 128 +num_rounds: 100 
+clients_per_round: 10 +learning_rate: 0.01 +algorithm: fedper +model_name: resnet + +client_resources: + num_cpus: 4 + num_gpus: 1 + +server_device: cuda:0 + +dataset: + name : "cifar10" + split: sample + num_classes: 10 + seed: 42 + num_clients: ${num_clients} + fraction: 0.83 + +model: + _target_: null + num_head_layers: 2 + num_classes: 10 + +fit_config: + drop_client: false + epochs : ${num_epochs} + batch_size: ${batch_size} + +strategy: + _target_: fedPer.server.DefaultStrategyPipeline + fraction_fit: 0.00001 # because we want the number of clients to sample on each roudn to be solely defined by min_fit_clients + min_fit_clients: ${clients_per_round} + fraction_evaluate: 0.0 + min_evaluate_clients: ${clients_per_round} + min_available_clients: ${num_clients} + algorithm: ${algorithm} + evaluate_fn: None + on_evaluate_config_fn: None \ No newline at end of file diff --git a/baselines/fedper/fedper/conf/cifar10.yaml b/baselines/fedper/fedper/conf/cifar10.yaml new file mode 100644 index 000000000000..66a06d481507 --- /dev/null +++ b/baselines/fedper/fedper/conf/cifar10.yaml @@ -0,0 +1,44 @@ +--- +num_clients: 10 # total number of clients +num_epochs: 4 # number of local epochs +batch_size: 128 +num_rounds: 50 +clients_per_round: 10 +learning_rate: 0.01 +algorithm: fedavg +model_name: resnet + +client_resources: + num_cpus: 4 + num_gpus: 1 + +server_device: cuda:0 + +dataset: + name : "cifar10" + split: sample + num_classes: 10 + seed: 42 + num_clients: ${num_clients} + fraction: 0.83 + +model: + _target_: null + num_head_layers: 2 + num_classes: 10 + +fit_config: + drop_client: false + epochs : ${num_epochs} + batch_size: ${batch_size} + +strategy: + _target_: fedPer.server.DefaultStrategyPipeline + fraction_fit: 0.00001 # because we want the number of clients to sample on each roudn to be solely defined by min_fit_clients + min_fit_clients: ${clients_per_round} + fraction_evaluate: 0.0 + min_evaluate_clients: ${clients_per_round} + min_available_clients: ${num_clients} + algorithm: ${algorithm} + evaluate_fn: None + on_evaluate_config_fn: None \ No newline at end of file diff --git a/baselines/fedper/fedper/conf/flickr.yaml b/baselines/fedper/fedper/conf/flickr.yaml new file mode 100644 index 000000000000..341b1c0ac6c2 --- /dev/null +++ b/baselines/fedper/fedper/conf/flickr.yaml @@ -0,0 +1,44 @@ +--- +num_clients: 30 # total number of clients +num_epochs: 4 # number of local epochs +batch_size: 4 +num_rounds: 35 +clients_per_round: 30 +learning_rate: 0.01 +algorithm: fedper +model_name: resnet + +client_resources: + num_cpus: 4 + num_gpus: 1 + +server_device: cuda:0 + +dataset: + name : "flickr" + split: sample + num_classes: 5 + seed: 42 + num_clients: ${num_clients} + fraction: 0.80 + +model: + _target_: null + num_head_layers: 2 + num_classes: 5 + +fit_config: + drop_client: false + epochs : ${num_epochs} + batch_size: ${batch_size} + +strategy: + _target_: fedPer.server.DefaultStrategyPipeline + fraction_fit: 0.00001 # because we want the number of clients to sample on each roudn to be solely defined by min_fit_clients + min_fit_clients: ${clients_per_round} + fraction_evaluate: 0.0 + min_evaluate_clients: ${clients_per_round} + min_available_clients: ${num_clients} + algorithm: ${algorithm} + evaluate_fn: None + on_evaluate_config_fn: None \ No newline at end of file diff --git a/baselines/fedper/fedper/constants.py b/baselines/fedper/fedper/constants.py new file mode 100644 index 000000000000..3eda77c5134e --- /dev/null +++ b/baselines/fedper/fedper/constants.py @@ -0,0 +1,23 @@ 
+"""Constants used in machine learning pipeline.""" +from enum import Enum + + +# FL Algorithms +class Algorithms(Enum): + """Enum for FL algorithms.""" + + FEDAVG = "FedAvg" + FEDPER = "FedPer" + + +# FL Default Train and Fine-Tuning Epochs +DEFAULT_TRAIN_EP = 5 +DEFAULT_FT_EP = 5 + +MEAN = { + "cifar10": [0.4915, 0.4823, 0.4468], +} + +STD = { + "cifar10": [0.2470, 0.2435, 0.2616], +} diff --git a/baselines/fedper/fedper/dataset.py b/baselines/fedper/fedper/dataset.py new file mode 100644 index 000000000000..81a95286b1b8 --- /dev/null +++ b/baselines/fedper/fedper/dataset.py @@ -0,0 +1,85 @@ +"""Handle basic dataset creation. + +In case of PyTorch it should return dataloaders for your dataset (for both the clients +and the server). If you are using a custom dataset class, this module is the place to +define it. If your dataset requires to be downloaded (and this is not done +automatically -- e.g. as it is the case for many dataset in TorchVision) and +partitioned, please include all those functions and logic in the +`dataset_preparation.py` module. You can use all those functions from functions/methods +defined here of course. +""" +import os +import pickle +import sys +from pathlib import Path + +import numpy as np + +from fedper.dataset_preparation import ( + call_dataset, + flickr_preprocess, + randomly_assign_classes, +) + +# working dir is two up +WORKING_DIR = Path(__file__).resolve().parent.parent +FL_BENCH_ROOT = WORKING_DIR.parent + +sys.path.append(FL_BENCH_ROOT.as_posix()) + + +def dataset_main(config: dict) -> None: + """Prepare the dataset.""" + dataset_name = config["name"].lower() + dataset_folder = Path(WORKING_DIR, "datasets") + dataset_root = Path(dataset_folder, dataset_name) + + if not os.path.isdir(dataset_root): + os.makedirs(dataset_root) + + if dataset_name == "cifar10": + dataset = call_dataset(dataset_name=dataset_name, root=dataset_root) + + # randomly assign classes + assert config["num_classes"] > 0, "Number of classes must be positive" + config["num_classes"] = max(1, min(config["num_classes"], len(dataset.classes))) + # partition, stats = randomly_assign_classes( + partition = randomly_assign_classes( + dataset=dataset, + client_num=config["num_clients"], + class_num=config["num_classes"], + ) + + clients_4_train = list(range(config["num_clients"])) + clients_4_test = list(range(config["num_clients"])) + + partition["separation"] = { + "train": clients_4_train, + "test": clients_4_test, + "total": config["num_clients"], + } + for client_id, idx in enumerate(partition["data_indices"]): + if config["split"] == "sample": + num_train_samples = int(len(idx) * config["fraction"]) + + np.random.shuffle(idx) + idx_train, idx_test = idx[:num_train_samples], idx[num_train_samples:] + partition["data_indices"][client_id] = { + "train": idx_train, + "test": idx_test, + } + else: + if client_id in clients_4_train: + partition["data_indices"][client_id] = {"train": idx, "test": []} + else: + partition["data_indices"][client_id] = {"train": [], "test": idx} + with open(dataset_root / "partition.pkl", "wb") as pickle_file: + pickle.dump(partition, pickle_file) + + # with open(dataset_root / "all_stats.json", "w") as f: + # json.dump(stats, f) + + elif dataset_name.lower() == "flickr": + flickr_preprocess(dataset_root, config) + else: + raise RuntimeError("Please implement the dataset preparation for your dataset.") diff --git a/baselines/fedper/fedper/dataset_preparation.py b/baselines/fedper/fedper/dataset_preparation.py new file mode 100644 index 
000000000000..0b8b53782aac --- /dev/null +++ b/baselines/fedper/fedper/dataset_preparation.py @@ -0,0 +1,209 @@ +"""Dataset preparation.""" +import os +import random +from collections import Counter +from pathlib import Path +from typing import Any, Dict, List, Union + +import numpy as np +import pandas as pd +import torch +import torchvision +from torch.utils.data import Dataset +from torchvision import transforms + + +class BaseDataset(Dataset): + """Base class for all datasets.""" + + def __init__( + self, + root: Path = Path("datasets/cifar10"), + general_data_transform: transforms.transforms.Compose = None, + ) -> None: + """Initialize the dataset.""" + self.root = root + self.classes = None + self.data: torch.tensor = None + self.targets: torch.tensor = None + self.general_data_transform = general_data_transform + + def __getitem__(self, index): + """Get the item at the given index.""" + data, targets = self.data[index], self.targets[index] + if self.general_data_transform is not None: + data = self.general_data_transform(data) + return data, targets + + def __len__(self): + """Return the length of the dataset.""" + return len(self.targets) + + +class CIFAR10(BaseDataset): + """CIFAR10 dataset.""" + + def __init__( + self, + root: Path = Path("datasets/cifar10"), + general_data_transform=None, + ): + super().__init__() + train_part = torchvision.datasets.CIFAR10(root, True, download=True) + test_part = torchvision.datasets.CIFAR10(root, False, download=True) + train_data = torch.tensor(train_part.data).permute([0, -1, 1, 2]).float() + test_data = torch.tensor(test_part.data).permute([0, -1, 1, 2]).float() + train_targets = torch.tensor(train_part.targets).long().squeeze() + test_targets = torch.tensor(test_part.targets).long().squeeze() + self.data = torch.cat([train_data, test_data]) + self.targets = torch.cat([train_targets, test_targets]) + self.classes = train_part.classes + self.general_data_transform = general_data_transform + + +def flickr_preprocess(root, config): + """Preprocess the FLICKR dataset.""" + print("Preprocessing FLICKR dataset...") + # create a tmp folder to store the preprocessed data + tmp_folder = Path(root, "tmp") + if not os.path.isdir(tmp_folder): + os.makedirs(tmp_folder) + + # remove any folder or file in tmp folder, even if it is not empty + os.system(f"rm -rf {tmp_folder.as_posix()}/*") + + # get number of clients + num_clients = config["num_clients"] + # get flickr image labels per clients + df_labelled_igms = pd.read_csv( + Path(root, "FLICKR-AES_image_labeled_by_each_worker.csv") + ) + # take num_clients random workers from df + # #where workers have minimum 60 images and maximum 290 + df_labelled_igms = df_labelled_igms.groupby("worker").filter( + lambda x: len(x) >= 60 and len(x) <= 290 + ) + # only take workers that have at least 1 image for each score (1-5) + df_labelled_igms = df_labelled_igms.groupby("worker").filter( + lambda x: len(x[" score"].unique()) == 5 + ) + df_labelled_igms = df_labelled_igms.groupby("worker").filter( + lambda x: x[" score"].value_counts().min() >= 4 + ) + # only take workers that have at least 4 images for each score (1-5) + + # get num_clients random workers + clients = np.random.choice( + df_labelled_igms["worker"].unique(), num_clients, replace=False + ) + for i, client in enumerate(clients): + print(f"Processing client {i}...") + df_client = df_labelled_igms[df_labelled_igms["worker"] == client] + client_path = Path(tmp_folder, f"client_{i}") + if not os.path.isdir(client_path): + os.makedirs(client_path) + # 
create score folder in client folder, scores go from 1-5 + for score in range(1, 6): + score_path = Path(client_path, str(score)) + if not os.path.isdir(score_path): + os.makedirs(score_path) + # copy images to score folder + for _, row in df_client.iterrows(): + img_path = Path(root, "40K", row[" imagePair"]) + score_path = Path(client_path, str(row[" score"])) + if os.path.isfile(img_path): + os.system(f"cp {img_path} {score_path}") + + +def call_dataset(dataset_name, root, **kwargs): + """Call the dataset.""" + if dataset_name == "cifar10": + return CIFAR10(root, **kwargs) + raise ValueError(f"Dataset {dataset_name} not supported.") + + +def randomly_assign_classes( + dataset: Dataset, client_num: int, class_num: int +) -> Dict[str, Union[Dict[Any, Any], List[Any]]]: + # ) -> Dict[str, Any]: + """Randomly assign number classes to clients.""" + partition: Dict[str, Union[Dict, List]] = {"separation": {}, "data_indices": []} + data_indices: List[List[int]] = [[] for _ in range(client_num)] + targets_numpy = np.array(dataset.targets, dtype=np.int32) + label_list = list(range(len(dataset.classes))) + + data_idx_for_each_label = [ + np.where(targets_numpy == i)[0].tolist() for i in label_list + ] + + assigned_labels = [] + selected_times = [0 for _ in label_list] + for _ in range(client_num): + sampled_labels = random.sample(label_list, class_num) + assigned_labels.append(sampled_labels) + for j in sampled_labels: + selected_times[j] += 1 + + batch_sizes = _get_batch_sizes( + targets_numpy=targets_numpy, + label_list=label_list, + selected_times=selected_times, + ) + + data_indices = _get_data_indices( + batch_sizes=batch_sizes, + data_indices=data_indices, + data_idx_for_each_label=data_idx_for_each_label, + assigned_labels=assigned_labels, + client_num=client_num, + ) + + partition["data_indices"] = data_indices + + return partition # , stats + + +def _get_batch_sizes( + targets_numpy: np.ndarray, + label_list: List[int], + selected_times: List[int], +) -> np.ndarray: + """Get batch sizes for each label.""" + labels_count = Counter(targets_numpy) + batch_sizes = np.zeros_like(label_list) + for i in label_list: + print(f"label: {i}, count: {labels_count[i]}") + print(f"selected times: {selected_times[i]}") + batch_sizes[i] = int(labels_count[i] / selected_times[i]) + + return batch_sizes + + +def _get_data_indices( + batch_sizes: np.ndarray, + data_indices: List[List[int]], + data_idx_for_each_label: List[List[int]], + assigned_labels: List[List[int]], + client_num: int, +) -> List[List[int]]: + for i in range(client_num): + for cls in assigned_labels[i]: + if len(data_idx_for_each_label[cls]) < 2 * batch_sizes[cls]: + batch_size = len(data_idx_for_each_label[cls]) + else: + batch_size = batch_sizes[cls] + selected_idx = random.sample(data_idx_for_each_label[cls], batch_size) + data_indices_use: np.ndarray = np.concatenate( + [data_indices[i], selected_idx], axis=0 + ).astype(np.int64) + data_indices[i] = data_indices_use.tolist() + # data_indices[i]: np.ndarray = np.concatenate( + # [data_indices[i], selected_idx], axis=0 + # ).astype(np.int64) + data_idx_for_each_label[cls] = list( + set(data_idx_for_each_label[cls]) - set(selected_idx) + ) + + data_indices[i] = data_indices[i] + + return data_indices diff --git a/baselines/fedper/fedper/implemented_models/mobile_model.py b/baselines/fedper/fedper/implemented_models/mobile_model.py new file mode 100644 index 000000000000..57d3210c9511 --- /dev/null +++ b/baselines/fedper/fedper/implemented_models/mobile_model.py @@ -0,0 +1,258 @@ 
+"""MobileNet-v1 model, model manager and model split.""" +from typing import Dict, List, Optional, Tuple, Union + +import torch +import torch.nn as nn +from omegaconf import DictConfig +from torch.utils.data import DataLoader + +from fedper.models import ModelManager, ModelSplit + +# Set model architecture +ARCHITECTURE = { + "layer_1": {"conv_dw": [32, 64, 1]}, + "layer_2": {"conv_dw": [64, 128, 2]}, + "layer_3": {"conv_dw": [128, 128, 1]}, + "layer_4": {"conv_dw": [128, 256, 2]}, + "layer_5": {"conv_dw": [256, 256, 1]}, + "layer_6": {"conv_dw": [256, 512, 2]}, + "layer_7": {"conv_dw": [512, 512, 1]}, + "layer_8": {"conv_dw": [512, 512, 1]}, + "layer_9": {"conv_dw": [512, 512, 1]}, + "layer_10": {"conv_dw": [512, 512, 1]}, + "layer_11": {"conv_dw": [512, 512, 1]}, + "layer_12": {"conv_dw": [512, 1024, 2]}, + "layer_13": {"conv_dw": [1024, 1024, 1]}, +} + + +class MobileNet(nn.Module): + """Model from MobileNet-v1 (https://github.com/wjc852456/pytorch-mobilenet-v1).""" + + def __init__( + self, + num_head_layers: int = 1, + num_classes: int = 10, + ) -> None: + super(MobileNet, self).__init__() + + self.architecture = ARCHITECTURE + + def conv_bn(inp, oup, stride): + return nn.Sequential( + nn.Conv2d(inp, oup, 3, stride, 1, bias=False), + nn.BatchNorm2d(oup), + nn.ReLU(inplace=True), + ) + + def conv_dw(inp, oup, stride): + return nn.Sequential( + nn.Conv2d(inp, inp, 3, stride, 1, groups=inp, bias=False), + nn.BatchNorm2d(inp), + nn.ReLU(inplace=True), + nn.Conv2d(inp, oup, 1, 1, 0, bias=False), + nn.BatchNorm2d(oup), + nn.ReLU(inplace=True), + ) + + self.body = nn.Sequential() + self.body.add_module("initial_batch_norm", conv_bn(3, 32, 2)) + for i in range(1, 13): + for _, value in self.architecture[f"layer_{i}"].items(): + self.body.add_module(f"conv_dw_{i}", conv_dw(*value)) + + self.body.add_module("avg_pool", nn.AvgPool2d([7])) + self.body.add_module("fc", nn.Linear(1024, num_classes)) + + if num_head_layers == 1: + self.head = nn.Sequential( + nn.AvgPool2d([7]), nn.Flatten(), nn.Linear(1024, num_classes) + ) + self.body.avg_pool = nn.Identity() + self.body.fc = nn.Identity() + elif num_head_layers == 2: + self.head = nn.Sequential( + conv_dw(1024, 1024, 1), + nn.AvgPool2d([7]), + nn.Flatten(), + nn.Linear(1024, num_classes), + ) + self.body.conv_dw_13 = nn.Identity() + self.body.avg_pool = nn.Identity() + self.body.fc = nn.Identity() + elif num_head_layers == 3: + self.head = nn.Sequential( + conv_dw(512, 1024, 2), + conv_dw(1024, 1024, 1), + nn.AvgPool2d([7]), + nn.Flatten(), + nn.Linear(1024, num_classes), + ) + self.body.conv_dw_12 = nn.Identity() + self.body.conv_dw_13 = nn.Identity() + self.body.avg_pool = nn.Identity() + self.body.fc = nn.Identity() + elif num_head_layers == 4: + self.head = nn.Sequential( + conv_dw(512, 512, 1), + conv_dw(512, 1024, 2), + conv_dw(1024, 1024, 1), + nn.AvgPool2d([7]), + nn.Flatten(), + nn.Linear(1024, num_classes), + ) + self.body.conv_dw_11 = nn.Identity() + self.body.conv_dw_12 = nn.Identity() + self.body.conv_dw_13 = nn.Identity() + self.body.avg_pool = nn.Identity() + self.body.fc = nn.Identity() + else: + raise NotImplementedError("Number of head layers not implemented.") + + def forward(self, x: torch.Tensor) -> torch.Tensor: + """Forward pass of the model.""" + x = self.body(x) + return self.head(x) + + +class MobileNetModelSplit(ModelSplit): + """Split MobileNet model into body and head.""" + + def _get_model_parts(self, model: MobileNet) -> Tuple[nn.Module, nn.Module]: + return model.body, model.head + + +class 
MobileNetModelManager(ModelManager): + """Manager for models with Body/Head split.""" + + def __init__( + self, + client_id: int, + config: DictConfig, + trainloader: DataLoader, + testloader: DataLoader, + client_save_path: Optional[str] = "", + learning_rate: float = 0.01, + ): + """Initialize the attributes of the model manager. + + Args: + client_id: The id of the client. + config: Dict containing the configurations to be used by the manager. + """ + super().__init__( + model_split_class=MobileNetModelSplit, + client_id=client_id, + config=config, + ) + self.trainloader, self.testloader = trainloader, testloader + self.device = self.config["server_device"] + self.client_save_path = client_save_path if client_save_path != "" else None + self.learning_rate = learning_rate + + def _create_model(self) -> nn.Module: + """Return MobileNet-v1 model to be splitted into head and body.""" + try: + return MobileNet( + num_head_layers=self.config["model"]["num_head_layers"], + num_classes=self.config["model"]["num_classes"], + ).to(self.device) + except AttributeError: + self.device = self.config["server_device"] + return MobileNet( + num_head_layers=self.config["model"]["num_head_layers"], + num_classes=self.config["model"]["num_classes"], + ).to(self.device) + + def train( + self, + epochs: int = 1, + ) -> Dict[str, Union[List[Dict[str, float]], int, float]]: + """Train the model maintained in self.model. + + Method adapted from simple MobileNet-v1 (PyTorch) \ + https://github.com/wjc852456/pytorch-mobilenet-v1. + + Args: + epochs: number of training epochs. + + Returns + ------- + Dict containing the train metrics. + """ + # Load client state (head) if client_save_path is not None and it is not empty + if self.client_save_path is not None: + try: + self.model.head.load_state_dict(torch.load(self.client_save_path)) + except FileNotFoundError: + print("No client state found, training from scratch.") + pass + + criterion = torch.nn.CrossEntropyLoss() + optimizer = torch.optim.SGD( + self.model.parameters(), lr=self.learning_rate, momentum=0.9 + ) + correct, total = 0, 0 + loss: torch.Tensor = 0.0 + # self.model.train() + for _ in range(epochs): + for images, labels in self.trainloader: + optimizer.zero_grad() + outputs = self.model(images.to(self.device)) + labels = labels.to(self.device) + loss = criterion(outputs, labels) + loss.backward() + optimizer.step() + total += labels.size(0) + correct += (torch.max(outputs.data, 1)[1] == labels).sum().item() + + # Save client state (head) + if self.client_save_path is not None: + torch.save(self.model.head.state_dict(), self.client_save_path) + + return {"loss": loss.item(), "accuracy": correct / total} + + def test( + self, + ) -> Dict[str, float]: + """Test the model maintained in self.model. + + Returns + ------- + Dict containing the test metrics. 
+ """ + # Load client state (head) + if self.client_save_path is not None: + self.model.head.load_state_dict(torch.load(self.client_save_path)) + + criterion = torch.nn.CrossEntropyLoss() + correct, total, loss = 0, 0, 0.0 + # self.model.eval() + with torch.no_grad(): + for images, labels in self.testloader: + outputs = self.model(images.to(self.device)) + labels = labels.to(self.device) + loss += criterion(outputs, labels).item() + total += labels.size(0) + correct += (torch.max(outputs.data, 1)[1] == labels).sum().item() + print("Test Accuracy: {:.4f}".format(correct / total)) + + if self.client_save_path is not None: + torch.save(self.model.head.state_dict(), self.client_save_path) + + return { + "loss": loss / len(self.testloader.dataset), + "accuracy": correct / total, + } + + def train_dataset_size(self) -> int: + """Return train data set size.""" + return len(self.trainloader) + + def test_dataset_size(self) -> int: + """Return test data set size.""" + return len(self.testloader) + + def total_dataset_size(self) -> int: + """Return total data set size.""" + return len(self.trainloader) + len(self.testloader) diff --git a/baselines/fedper/fedper/implemented_models/resnet_model.py b/baselines/fedper/fedper/implemented_models/resnet_model.py new file mode 100644 index 000000000000..0d9837b118a3 --- /dev/null +++ b/baselines/fedper/fedper/implemented_models/resnet_model.py @@ -0,0 +1,272 @@ +"""ResNet model, model manager and split.""" +from typing import Dict, List, Optional, Tuple, Union + +import torch +import torch.nn as nn +from omegaconf import DictConfig +from torch.utils.data import DataLoader +from torchvision.models.resnet import resnet34 + +from fedper.models import ModelManager, ModelSplit + + +def conv3x3( + in_planes: int, out_planes: int, stride: int = 1, groups: int = 1, dilation: int = 1 +) -> nn.Conv2d: + """3x3 convolution with padding.""" + return nn.Conv2d( + in_planes, + out_planes, + kernel_size=3, + stride=stride, + padding=dilation, + groups=groups, + bias=False, + dilation=dilation, + ) + + +def conv1x1(in_planes: int, out_planes: int, stride: int = 1) -> nn.Conv2d: + """1x1 convolution.""" + return nn.Conv2d(in_planes, out_planes, kernel_size=1, stride=stride, bias=False) + + +class BasicBlock(nn.Module): + """Basic block for ResNet.""" + + expansion: int = 1 + + def __init__( + self, + inplanes: int, + planes: int, + stride: int = 1, + downsample: Optional[nn.Module] = None, + ) -> None: + super().__init__() + norm_layer = nn.BatchNorm2d + # Both self.conv1 and self.downsample layers downsample input when stride != 1 + self.conv1 = conv3x3(inplanes, planes, stride) + self.bn1 = norm_layer(planes) + self.relu = nn.ReLU(inplace=True) + self.conv2 = conv3x3(planes, planes) + self.bn2 = norm_layer(planes) + self.downsample = downsample + self.stride = stride + + def forward(self, x: torch.Tensor) -> torch.Tensor: + """Forward inputs through the block.""" + identity = x + + out = self.conv1(x) + out = self.bn1(out) + out = self.relu(out) + + out = self.conv2(out) + out = self.bn2(out) + + if self.downsample is not None: + identity = self.downsample(x) + + out += identity + out = self.relu(out) + + return out + + +class ResNet(nn.Module): + """ResNet model.""" + + def __init__( + self, + num_head_layers: int = 1, + num_classes: int = 10, + ) -> None: + super(ResNet, self).__init__() + assert ( + num_head_layers > 0 and num_head_layers <= 17 + ), "num_head_layers must be greater than 0 and less than 16" + + self.num_head_layers = num_head_layers + self.body = 
resnet34() + + # if only one head layer + if self.num_head_layers == 1: + self.head = self.body.fc + self.body.fc = nn.Identity() + elif self.num_head_layers == 2: + self.head = nn.Sequential( + BasicBlock(512, 512), + nn.AdaptiveAvgPool2d((1, 1)), + nn.Flatten(), + nn.Linear(512, num_classes), + ) + # remove head layers from body + self.body = nn.Sequential(*list(self.body.children())[:-2]) + body_layer4 = list(self.body.children())[-1] + self.body = nn.Sequential(*list(self.body.children())[:-1]) + self.body.layer4 = nn.Sequential(*list(body_layer4.children())[:-1]) + elif self.num_head_layers == 3: + self.head = nn.Sequential( + BasicBlock(512, 512), + BasicBlock(512, 512), + nn.AdaptiveAvgPool2d((1, 1)), + nn.Flatten(), + nn.Linear(512, num_classes), + ) + # remove head layers from body + self.body = nn.Sequential(*list(self.body.children())[:-2]) + body_layer4 = list(self.body.children())[-1] + self.body = nn.Sequential(*list(self.body.children())[:-1]) + self.body.layer4 = nn.Sequential(*list(body_layer4.children())[:-2]) + else: + raise NotImplementedError("Only 1 or 2 head layers supported") + + def forward(self, x: torch.Tensor) -> torch.Tensor: + """Forward inputs through the model.""" + print("Forwarding through ResNet model") + x = self.body(x) + return self.head(x) + + +class ResNetModelSplit(ModelSplit): + """Split ResNet model into body and head.""" + + def _get_model_parts(self, model: ResNet) -> Tuple[nn.Module, nn.Module]: + return model.body, model.head + + +class ResNetModelManager(ModelManager): + """Manager for models with Body/Head split.""" + + def __init__( + self, + client_save_path: Optional[str], + client_id: int, + config: DictConfig, + trainloader: DataLoader, + testloader: DataLoader, + learning_rate: float = 0.01, + ): + """Initialize the attributes of the model manager. + + Args: + client_save_path: Path to save the client state. + client_id: The id of the client. + config: Dict containing the configurations to be used by the manager. + trainloader: DataLoader containing the train data. + testloader: DataLoader containing the test data. + learning_rate: Learning rate for the optimizer. + """ + super().__init__( + model_split_class=ResNetModelSplit, + client_id=client_id, + config=config, + ) + self.client_save_path = client_save_path + self.trainloader, self.testloader = trainloader, testloader + self.device = self.config["server_device"] + self.learning_rate = learning_rate + + def _create_model(self) -> nn.Module: + """Return MobileNet-v1 model to be splitted into head and body.""" + try: + return ResNet( + num_head_layers=self.config["model"]["num_head_layers"], + num_classes=self.config["model"]["num_classes"], + ).to(self.device) + except AttributeError: + self.device = self.config["server_device"] + return ResNet( + num_head_layers=self.config["model"]["num_head_layers"], + num_classes=self.config["model"]["num_classes"], + ).to(self.device) + + def train( + self, + epochs: int = 1, + ) -> Dict[str, Union[List[Dict[str, float]], int, float]]: + """Train the model maintained in self.model. + + Method adapted from simple MobileNet-v1 (PyTorch) \ + https://github.com/wjc852456/pytorch-mobilenet-v1. + + Args: + epochs: number of training epochs. + + Returns + ------- + Dict containing the train metrics. 
+ """ + # Load client state (head) if client_save_path is not None and it is not empty + if self.client_save_path is not None: + try: + self.model.head.load_state_dict(torch.load(self.client_save_path)) + except FileNotFoundError: + print("No client state found, training from scratch.") + pass + + criterion = torch.nn.CrossEntropyLoss() + optimizer = torch.optim.SGD( + self.model.parameters(), lr=self.learning_rate, momentum=0.9 + ) + correct, total = 0, 0 + loss: torch.Tensor = 0.0 + # self.model.train() + for _ in range(epochs): + for images, labels in self.trainloader: + optimizer.zero_grad() + outputs = self.model(images.to(self.device)) + labels = labels.to(self.device) + loss = criterion(outputs, labels) + loss.backward() + + optimizer.step() + total += labels.size(0) + correct += (torch.max(outputs.data, 1)[1] == labels).sum().item() + + # Save client state (head) + if self.client_save_path is not None: + torch.save(self.model.head.state_dict(), self.client_save_path) + + return {"loss": loss.item(), "accuracy": correct / total} + + def test( + self, + ) -> Dict[str, float]: + """Test the model maintained in self.model.""" + # Load client state (head) + if self.client_save_path is not None: + self.model.head.load_state_dict(torch.load(self.client_save_path)) + + criterion = torch.nn.CrossEntropyLoss() + correct, total, loss = 0, 0, 0.0 + # self.model.eval() + with torch.no_grad(): + for images, labels in self.testloader: + outputs = self.model(images.to(self.device)) + labels = labels.to(self.device) + loss += criterion(outputs, labels).item() + total += labels.size(0) + correct += (torch.max(outputs.data, 1)[1] == labels).sum().item() + print("Test Accuracy: {:.4f}".format(correct / total)) + + if self.client_save_path is not None: + torch.save(self.model.head.state_dict(), self.client_save_path) + + return { + "loss": loss / len(self.testloader.dataset), + "accuracy": correct / total, + } + + def train_dataset_size(self) -> int: + """Return train data set size.""" + return len(self.trainloader) + + def test_dataset_size(self) -> int: + """Return test data set size.""" + return len(self.testloader) + + def total_dataset_size(self) -> int: + """Return total data set size.""" + return len(self.trainloader) + len(self.testloader) diff --git a/baselines/fedper/fedper/main.py b/baselines/fedper/fedper/main.py new file mode 100644 index 000000000000..b421b2e0442c --- /dev/null +++ b/baselines/fedper/fedper/main.py @@ -0,0 +1,126 @@ +"""Create and connect the building blocks for your experiments; start the simulation. + +It includes processioning the dataset, instantiate strategy, specify how the global +model is going to be evaluated, etc. At the end, this script saves the results. +""" + +from pathlib import Path + +import flwr as fl +import hydra +from hydra.core.hydra_config import HydraConfig +from hydra.utils import instantiate +from omegaconf import DictConfig, OmegaConf + +from fedper.dataset import dataset_main +from fedper.utils import ( + get_client_fn, + get_create_model_fn, + plot_metric_from_history, + save_results_as_pickle, + set_client_state_save_path, + set_model_class, + set_num_classes, + set_server_target, +) + + +@hydra.main(config_path="conf", config_name="base", version_base=None) +def main(cfg: DictConfig) -> None: + """Run the baseline. + + Parameters + ---------- + cfg : DictConfig + An omegaconf object that stores the hydra config. + """ + # 1. 
Print parsed config + # Set the model class, server target, and number of classes + cfg = set_model_class(cfg) + cfg = set_server_target(cfg) + cfg = set_num_classes(cfg) + + print(OmegaConf.to_yaml(cfg)) + + # Create directory to store client states if it does not exist + # Client state has subdirectories with the name of current time + client_state_save_path = set_client_state_save_path() + + # 2. Prepare your dataset + dataset_main(cfg.dataset) + + # 3. Define your clients + # Get client function + client_fn = get_client_fn( + config=cfg, + client_state_save_path=client_state_save_path, + ) + + # get a function that will be used to construct the config that the client's + # fit() method will received + def get_on_fit_config(): + def fit_config_fn(server_round: int): + # resolve and convert to python dict + fit_config = OmegaConf.to_container(cfg.fit_config, resolve=True) + _ = server_round + return fit_config + + return fit_config_fn + + # get a function that will be used to construct the model + create_model, split = get_create_model_fn(cfg) + + # 4. Define your strategy + strategy = instantiate( + cfg.strategy, + create_model=create_model, + on_fit_config_fn=get_on_fit_config(), + model_split_class=split, + ) + + # 5. Start Simulation + history = fl.simulation.start_simulation( + client_fn=client_fn, + num_clients=cfg.num_clients, + config=fl.server.ServerConfig(num_rounds=cfg.num_rounds), + client_resources={ + "num_cpus": cfg.client_resources.num_cpus, + "num_gpus": cfg.client_resources.num_gpus, + }, + strategy=strategy, + ) + + # Experiment completed. Now we save the results and + # generate plots using the `history` + print("................") + print(history) + + # 6. Save your results + save_path = Path(HydraConfig.get().runtime.output_dir) + + # save results as a Python pickle using a file_path + # the directory created by Hydra for each run + save_results_as_pickle( + history, + file_path=save_path, + ) + # plot results and include them in the readme + strategy_name = strategy.__class__.__name__ + file_suffix: str = ( + f"_{strategy_name}" + f"_C={cfg.num_clients}" + f"_B={cfg.batch_size}" + f"_E={cfg.num_epochs}" + f"_R={cfg.num_rounds}" + f"_lr={cfg.learning_rate}" + ) + + plot_metric_from_history( + history, + save_path, + (file_suffix), + ) + + +if __name__ == "__main__": + main() diff --git a/baselines/fedper/fedper/models.py b/baselines/fedper/fedper/models.py new file mode 100644 index 000000000000..2a2ebde158f8 --- /dev/null +++ b/baselines/fedper/fedper/models.py @@ -0,0 +1,189 @@ +"""Abstract class for splitting a model into body and head.""" +from abc import ABC, abstractmethod +from collections import OrderedDict +from typing import Any, Dict, List, Tuple, Type, Union + +import numpy as np +from omegaconf import DictConfig +from torch import Tensor +from torch import nn as nn + + +class ModelSplit(ABC, nn.Module): + """Abstract class for splitting a model into body and head.""" + + def __init__( + self, + model: nn.Module, + ): + """Initialize the attributes of the model split. + + Args: + model: dict containing the vocab sizes of the input attributes. + """ + super().__init__() + + self._body, self._head = self._get_model_parts(model) + + @abstractmethod + def _get_model_parts(self, model: nn.Module) -> Tuple[nn.Module, nn.Module]: + """Return the body and head of the model. + + Args: + model: model to be split into head and body + + Returns + ------- + Tuple where the first element is the body of the model + and the second is the head. 
+ """ + + @property + def body(self) -> nn.Module: + """Return model body.""" + return self._body + + @body.setter + def body(self, state_dict: "OrderedDict[str, Tensor]") -> None: + """Set model body. + + Args: + state_dict: dictionary of the state to set the model body to. + """ + self.body.load_state_dict(state_dict, strict=True) + + @property + def head(self) -> nn.Module: + """Return model head.""" + return self._head + + @head.setter + def head(self, state_dict: "OrderedDict[str, Tensor]") -> None: + """Set model head. + + Args: + state_dict: dictionary of the state to set the model head to. + """ + self.head.load_state_dict(state_dict, strict=True) + + def get_parameters(self) -> List[np.ndarray]: + """Get model parameters (without fixed head). + + Returns + ------- + Body and head parameters + """ + return [ + val.cpu().numpy() + for val in [ + *self.body.state_dict().values(), + *self.head.state_dict().values(), + ] + ] + + def set_parameters(self, state_dict: Dict[str, Tensor]) -> None: + """Set model parameters. + + Args: + state_dict: dictionary of the state to set the model to. + """ + ordered_state_dict = OrderedDict(self.state_dict().copy()) + # Update with the values of the state_dict + ordered_state_dict.update(dict(state_dict.items())) + self.load_state_dict(ordered_state_dict, strict=False) + + def enable_head(self) -> None: + """Enable gradient tracking for the head parameters.""" + for param in self.head.parameters(): + param.requires_grad = True + + def enable_body(self) -> None: + """Enable gradient tracking for the body parameters.""" + for param in self.body.parameters(): + param.requires_grad = True + + def disable_head(self) -> None: + """Disable gradient tracking for the head parameters.""" + for param in self.head.parameters(): + param.requires_grad = False + + def disable_body(self) -> None: + """Disable gradient tracking for the body parameters.""" + for param in self.body.parameters(): + param.requires_grad = False + + def forward(self, inputs: Any) -> Any: + """Forward inputs through the body and the head.""" + x = self.body(inputs) + return self.head(x) + + +class ModelManager(ABC): + """Manager for models with Body/Head split.""" + + def __init__( + self, + client_id: int, + config: DictConfig, + model_split_class: Type[Any], # ModelSplit + ): + """Initialize the attributes of the model manager. + + Args: + client_id: The id of the client. + config: Dict containing the configurations to be used by the manager. + model_split_class: Class to be used to split the model into body and head\ + (concrete implementation of ModelSplit). + """ + super().__init__() + + self.client_id = client_id + self.config = config + self._model = model_split_class(self._create_model()) + + @abstractmethod + def _create_model(self) -> nn.Module: + """Return model to be splitted into head and body.""" + + @abstractmethod + def train( + self, + epochs: int = 1, + ) -> Dict[str, Union[List[Dict[str, float]], int, float]]: + """Train the model maintained in self.model. + + Args: + epochs: number of training epochs. + + Returns + ------- + Dict containing the train metrics. + """ + + @abstractmethod + def test( + self, + ) -> Dict[str, float]: + """Test the model maintained in self.model. + + Returns + ------- + Dict containing the test metrics. 
+ """ + + @abstractmethod + def train_dataset_size(self) -> int: + """Return train data set size.""" + + @abstractmethod + def test_dataset_size(self) -> int: + """Return test data set size.""" + + @abstractmethod + def total_dataset_size(self) -> int: + """Return total data set size.""" + + @property + def model(self) -> nn.Module: + """Return model.""" + return self._model diff --git a/baselines/fedper/fedper/run_figures.sh b/baselines/fedper/fedper/run_figures.sh new file mode 100755 index 000000000000..9f7382412465 --- /dev/null +++ b/baselines/fedper/fedper/run_figures.sh @@ -0,0 +1,36 @@ +#!/bin/bash + +# CIFAR10 Mobile and Resnet (non-iid n classes (FIGURE 2a&b)) +for model in mobile resnet +do + for num_classes in 4 8 10 + do + for algorithm in fedper fedavg + do + python -m fedper.main --config-path conf --config-name cifar10 dataset.num_classes=${num_classes} model_name=${model} algorithm=${algorithm} + done + done +done + + +# CIFAR10 Mobile (n head layers (FIGURE 4a)) +for num_head_layers in 2 3 4 +do + python -m fedper.main --config-path conf --config-name cifar10 dataset.num_classes=4 model.num_head_layers=${num_head_layers} num_rounds=25 model_name=mobile algorithm=fedper +done +python -m fedper.main --config-path conf --config-name cifar10 num_rounds=25 model_name=mobile dataset.num_classes=4 + +# CIFAR10 Resnet (n head layers (FIGURE 4b)) +for num_head_layers in 1 2 3 +do + python -m fedper.main --config-path conf --config-name cifar10 dataset.num_classes=4 model.num_head_layers=${num_head_layers} num_rounds=25 model_name=resnet algorithm=fedper +done +python -m fedper.main --config-path conf --config-name cifar10 num_rounds=25 model_name=resnet dataset.num_classes=4 + +# FLICKR +for model in mobile resnet +do + python -m fedper.main --config-path conf --config-name flickr model.num_head_layers=2 model_name=${model} algorithm=fedper num_rounds=35 + python -m fedper.main --config-path conf --config-name flickr model_name=${model} algorithm=fedavg num_rounds=35 +done + diff --git a/baselines/fedper/fedper/server.py b/baselines/fedper/fedper/server.py new file mode 100644 index 000000000000..93616f50f45a --- /dev/null +++ b/baselines/fedper/fedper/server.py @@ -0,0 +1,24 @@ +"""Server strategies pipelines for FedPer.""" +from flwr.server.strategy.fedavg import FedAvg + +from fedper.strategy import ( + AggregateBodyStrategy, + AggregateFullStrategy, + ServerInitializationStrategy, +) + + +class InitializationStrategyPipeline(ServerInitializationStrategy): + """Initialization strategy pipeline.""" + + +class AggregateBodyStrategyPipeline( + InitializationStrategyPipeline, AggregateBodyStrategy, FedAvg +): + """Aggregate body strategy pipeline.""" + + +class DefaultStrategyPipeline( + InitializationStrategyPipeline, AggregateFullStrategy, FedAvg +): + """Default strategy pipeline.""" diff --git a/baselines/fedper/fedper/strategy.py b/baselines/fedper/fedper/strategy.py new file mode 100644 index 000000000000..5ae55086db2f --- /dev/null +++ b/baselines/fedper/fedper/strategy.py @@ -0,0 +1,437 @@ +"""FL server strategies.""" +from collections import OrderedDict +from pathlib import Path +from typing import Any, Callable, Dict, List, Optional, Tuple, Type, Union + +import torch +from flwr.common import ( + EvaluateIns, + EvaluateRes, + FitIns, + FitRes, + NDArrays, + Parameters, + Scalar, + ndarrays_to_parameters, + parameters_to_ndarrays, +) +from flwr.server.client_manager import ClientManager +from flwr.server.client_proxy import ClientProxy +from flwr.server.strategy.fedavg 
import FedAvg +from torch import nn as nn + +from fedper.constants import Algorithms +from fedper.implemented_models.mobile_model import MobileNetModelSplit +from fedper.implemented_models.resnet_model import ResNetModelSplit +from fedper.models import ModelSplit + + +class ServerInitializationStrategy(FedAvg): + """Server FL Parameter Initialization strategy implementation.""" + + def __init__( + self, + *args: Any, + model_split_class: Union[ + Type[MobileNetModelSplit], Type[ModelSplit], Type[ResNetModelSplit] + ], + create_model: Callable[[], nn.Module], + initial_parameters: Optional[Parameters] = None, + on_fit_config_fn: Optional[Callable[[int], Dict[str, Any]]] = None, + evaluate_fn: Optional[ + Callable[ + [int, NDArrays, Dict[str, Scalar]], + Optional[Tuple[float, Dict[str, Scalar]]], + ] + ] = None, + min_available_clients: int = 1, + min_evaluate_clients: int = 1, + min_fit_clients: int = 1, + algorithm: str = Algorithms.FEDPER.value, + **kwargs: Any, + ) -> None: + super().__init__(*args, **kwargs) + _ = evaluate_fn + self.on_fit_config_fn = on_fit_config_fn + self.initial_parameters = initial_parameters + self.min_available_clients = min_available_clients + self.min_evaluate_clients = min_evaluate_clients + self.min_fit_clients = min_fit_clients + self.algorithm = algorithm + self.model = model_split_class(model=create_model()) + + def initialize_parameters( + self, client_manager: ClientManager + ) -> Optional[Parameters]: + """Initialize the (global) model parameters. + + Args: + client_manager: ClientManager. The client manager which holds all currently + connected clients. + + Returns + ------- + If parameters are returned, then the server will treat these as the + initial global model parameters. + """ + initial_parameters: Optional[Parameters] = self.initial_parameters + self.initial_parameters = None # Don't keep initial parameters in memory + if initial_parameters is None and self.model is not None: + if self.algorithm == Algorithms.FEDPER.value: + initial_parameters_use = [ + val.cpu().numpy() for _, val in self.model.body.state_dict().items() + ] + else: # FedAvg + initial_parameters_use = [ + val.cpu().numpy() for _, val in self.model.state_dict().items() + ] + + if isinstance(initial_parameters_use, list): + initial_parameters = ndarrays_to_parameters(initial_parameters_use) + return initial_parameters + + +class AggregateFullStrategy(ServerInitializationStrategy): + """Full model aggregation strategy implementation.""" + + def __init__(self, *args, save_path: Path = Path(""), **kwargs) -> None: + super().__init__(*args, **kwargs) + self.save_path = save_path if save_path != "" else None + if save_path is not None: + self.save_path = save_path / "models" + self.save_path.mkdir(parents=True, exist_ok=True) + + def configure_evaluate( + self, server_round: int, parameters: Parameters, client_manager: ClientManager + ) -> List[Tuple[ClientProxy, EvaluateIns]]: + """Configure the next round of evaluation. + + Args: + server_round: The current round of federated learning. + parameters: The current (global) model parameters. + client_manager: The client manager which holds all currently + connected clients. + + Returns + ------- + A list of tuples. Each tuple in the list identifies a `ClientProxy` and the + `EvaluateIns` for this particular `ClientProxy`. If a particular + `ClientProxy` is not included in this list, it means that this + `ClientProxy` will not participate in the next round of federated + evaluation. 
+ """ + # Same as superclass method but adds the head + + # Parameters and config + config: Dict[Any, Any] = {} + + weights = parameters_to_ndarrays(parameters) + + parameters = ndarrays_to_parameters(weights) + + evaluate_ins = EvaluateIns(parameters, config) + + # Sample clients + if server_round >= 0: + # Sample clients + sample_size, min_num_clients = self.num_evaluation_clients( + client_manager.num_available() + ) + clients = client_manager.sample( + num_clients=sample_size, + min_num_clients=min_num_clients, + ) + else: + clients = list(client_manager.all().values()) + + # Return client/config pairs + return [(client, evaluate_ins) for client in clients] + + def aggregate_fit( + self, + server_round: int, + results: List[Tuple[ClientProxy, FitRes]], + failures: List[Union[Tuple[ClientProxy, FitRes], BaseException]], + ) -> Tuple[Optional[Parameters], Dict[str, Scalar]]: + """Aggregate received local parameters, set global model parameters and save. + + Args: + server_round: The current round of federated learning. + results: Successful updates from the previously selected and configured + clients. Each pair of `(ClientProxy, FitRes)` constitutes a + successful update from one of the previously selected clients. Not + that not all previously selected clients are necessarily included in + this list: a client might drop out and not submit a result. For each + client that did not submit an update, there should be an `Exception` + in `failures`. + failures: Exceptions that occurred while the server was waiting for client + updates. + + Returns + ------- + If parameters are returned, then the server will treat these as the + new global model parameters (i.e., it will replace the previous + parameters with the ones returned from this method). If `None` is + returned (e.g., because there were only failures and no viable + results) then the server will no update the previous model + parameters, the updates received in this round are discarded, and + the global model parameters remain the same. + """ + agg_params, agg_metrics = super().aggregate_fit( + server_round=server_round, results=results, failures=failures + ) + if agg_params is not None: + # Update Server Model + parameters = parameters_to_ndarrays(agg_params) + model_keys = [ + k + for k in self.model.state_dict().keys() + if k.startswith("_body") or k.startswith("_head") + ] + params_dict = zip(model_keys, parameters) + state_dict = OrderedDict({k: torch.tensor(v) for k, v in params_dict}) + self.model.set_parameters(state_dict) + + if self.save_path is not None: + # Save Model + torch.save(self.model, self.save_path / f"model-ep_{server_round}.pt") + + return agg_params, agg_metrics + + def aggregate_evaluate( + self, + server_round: int, + results: List[Tuple[ClientProxy, EvaluateRes]], + failures: List[Union[Tuple[ClientProxy, EvaluateRes], BaseException]], + ) -> Tuple[Optional[float], Dict[str, Scalar]]: + """Aggregate the received local parameters and store the test aggregated. + + Args: + server_round: The current round of federated learning. + results: Successful updates from the + previously selected and configured clients. Each pair of + `(ClientProxy, FitRes` constitutes a successful update from one of the + previously selected clients. Not that not all previously selected + clients are necessarily included in this list: a client might drop out + and not submit a result. For each client that did not submit an update, + there should be an `Exception` in `failures`. 
+ failures: Exceptions that occurred while the server + was waiting for client updates. + + Returns + ------- + Optional `float` representing the aggregated evaluation result. Aggregation + typically uses some variant of a weighted average. + """ + aggregated_loss, aggregated_metrics = super().aggregate_evaluate( + server_round=server_round, results=results, failures=failures + ) + _ = aggregated_metrics # Avoid unused variable warning + + # Weigh accuracy of each client by number of examples used + accuracies: List[float] = [] + for _, res in results: + accuracy: float = float(res.metrics["accuracy"]) + accuracies.append(accuracy) + print(f"Round {server_round} accuracies: {accuracies}") + + # Aggregate and print custom metric + averaged_accuracy = sum(accuracies) / len(accuracies) + print(f"Round {server_round} accuracy averaged: {averaged_accuracy}") + return aggregated_loss, {"accuracy": averaged_accuracy} + + +class AggregateBodyStrategy(ServerInitializationStrategy): + """Body Aggregation strategy implementation.""" + + def __init__(self, *args, save_path: Path = Path(""), **kwargs) -> None: + super().__init__(*args, **kwargs) + self.save_path = save_path if save_path != "" else None + if save_path is not None: + self.save_path = save_path / "models" + self.save_path.mkdir(parents=True, exist_ok=True) + + def configure_fit( + self, server_round: int, parameters: Parameters, client_manager: ClientManager + ) -> List[Tuple[ClientProxy, FitIns]]: + """Configure the next round of training. + + Args: + server_round: The current round of federated learning. + parameters: The current (global) model parameters. + client_manager: The client manager which holds all + currently connected clients. + + Returns + ------- + A list of tuples. Each tuple in the list identifies a `ClientProxy` and the + `FitIns` for this particular `ClientProxy`. If a particular `ClientProxy` + is not included in this list, it means that this `ClientProxy` + will not participate in the next round of federated learning. + """ + # Same as superclass method but adds the head + + config = {} + if self.on_fit_config_fn is not None: + # Custom fit config function provided + config = self.on_fit_config_fn(server_round) + + weights = parameters_to_ndarrays(parameters) + + # Add head parameters to received body parameters + weights.extend( + [val.cpu().numpy() for _, val in self.model.head.state_dict().items()] + ) + + parameters = ndarrays_to_parameters(weights) + + fit_ins = FitIns(parameters, config) + + # Sample clients + clients = client_manager.sample( + num_clients=self.min_available_clients, min_num_clients=self.min_fit_clients + ) + + # Return client/config pairs + return [(client, fit_ins) for client in clients] + + def configure_evaluate( + self, server_round: int, parameters: Parameters, client_manager: ClientManager + ) -> List[Tuple[ClientProxy, EvaluateIns]]: + """Configure the next round of evaluation. + + Args: + server_round: The current round of federated learning. + parameters: The current (global) model parameters. + client_manager: The client manager which holds all currently + connected clients. + + Returns + ------- + A list of tuples. Each tuple in the list identifies a `ClientProxy` and the + `EvaluateIns` for this particular `ClientProxy`. If a particular + `ClientProxy` is not included in this list, it means that this + `ClientProxy` will not participate in the next round of federated + evaluation. 
+ """ + # Same as superclass method but adds the head + + # Parameters and config + config: Dict[Any, Any] = {} + + weights = parameters_to_ndarrays(parameters) + + # Add head parameters to received body parameters + weights.extend( + [val.cpu().numpy() for _, val in self.model.head.state_dict().items()] + ) + + parameters = ndarrays_to_parameters(weights) + + evaluate_ins = EvaluateIns(parameters, config) + + # Sample clients + if server_round >= 0: + # Sample clients + sample_size, min_num_clients = self.num_evaluation_clients( + client_manager.num_available() + ) + clients = client_manager.sample( + num_clients=sample_size, + min_num_clients=min_num_clients, + ) + else: + clients = list(client_manager.all().values()) + + # Return client/config pairs + return [(client, evaluate_ins) for client in clients] + + def aggregate_fit( + self, + server_round: int, + results: List[Tuple[ClientProxy, FitRes]], + failures: List[Union[Tuple[ClientProxy, FitRes], BaseException]], + ) -> Tuple[Optional[Parameters], Dict[str, Union[bool, bytes, float, int, str]]]: + """Aggregate received local parameters, set global model parameters and save. + + Args: + server_round: The current round of federated learning. + results: Successful updates from the previously selected and configured + clients. Each pair of `(ClientProxy, FitRes)` constitutes a + successful update from one of the previously selected clients. Not + that not all previously selected clients are necessarily included in + this list: a client might drop out and not submit a result. For each + client that did not submit an update, there should be an `Exception` + in `failures`. + failures: Exceptions that occurred while the server was waiting for client + updates. + + Returns + ------- + If parameters are returned, then the server will treat these as the + new global model parameters (i.e., it will replace the previous + parameters with the ones returned from this method). If `None` is + returned (e.g., because there were only failures and no viable + results) then the server will no update the previous model + parameters, the updates received in this round are discarded, and + the global model parameters remain the same. + """ + agg_params, agg_metrics = super().aggregate_fit( + server_round=server_round, results=results, failures=failures + ) + if agg_params is not None: + parameters = parameters_to_ndarrays(agg_params) + model_keys = [ + k for k in self.model.state_dict().keys() if k.startswith("_body") + ] + params_dict = zip(model_keys, parameters) + state_dict = OrderedDict({k: torch.tensor(v) for k, v in params_dict}) + self.model.set_parameters(state_dict) + + if self.save_path is not None: + # Save Model + torch.save(self.model, self.save_path / f"model-ep_{server_round}.pt") + + return agg_params, agg_metrics + + def aggregate_evaluate( + self, + server_round: int, + results: List[Tuple[ClientProxy, EvaluateRes]], + failures: List[Union[Tuple[ClientProxy, EvaluateRes], BaseException]], + ) -> Tuple[Optional[float], Dict[str, Scalar]]: + """Aggregate the received local parameters and store the test aggregated. + + Args: + server_round: The current round of federated learning. + results: Successful updates from the + previously selected and configured clients. Each pair of + `(ClientProxy, FitRes` constitutes a successful update from one of the + previously selected clients. Not that not all previously selected + clients are necessarily included in this list: a client might drop out + and not submit a result. 
For each client that did not submit an update, + there should be an `Exception` in `failures`. + failures: Exceptions that occurred while the server + was waiting for client updates. + + Returns + ------- + Optional `float` representing the aggregated evaluation result. Aggregation + typically uses some variant of a weighted average. + """ + aggregated_loss, aggregated_metrics = super().aggregate_evaluate( + server_round=server_round, results=results, failures=failures + ) + _ = aggregated_metrics # Avoid unused variable warning + + # Weigh accuracy of each client by number of examples used + accuracies: List[float] = [] + for _, res in results: + accuracy: float = float(res.metrics["accuracy"]) + accuracies.append(accuracy) + print(f"Round {server_round} accuracies: {accuracies}") + + # Aggregate and print custom metric + averaged_accuracy = sum(accuracies) / len(accuracies) + print(f"Round {server_round} accuracy averaged: {averaged_accuracy}") + return aggregated_loss, {"accuracy": averaged_accuracy} diff --git a/baselines/fedper/fedper/utils.py b/baselines/fedper/fedper/utils.py new file mode 100644 index 000000000000..00b4c5318729 --- /dev/null +++ b/baselines/fedper/fedper/utils.py @@ -0,0 +1,225 @@ +"""Utility functions for FedPer.""" +import os +import pickle +import time +from pathlib import Path +from secrets import token_hex +from typing import Callable, Optional, Type, Union + +import matplotlib.pyplot as plt +import numpy as np +from flwr.server.history import History +from omegaconf import DictConfig + +from fedper.client import BaseClient, FedPerClient, get_client_fn_simulation +from fedper.implemented_models.mobile_model import MobileNet, MobileNetModelSplit +from fedper.implemented_models.resnet_model import ResNet, ResNetModelSplit + + +def set_model_class(config: DictConfig) -> DictConfig: + """Set model class based on the model name in the config file.""" + # Set the model class + if config.model_name.lower() == "resnet": + config.model["_target_"] = "fedper.implemented_models.resnet_model.ResNet" + elif config.model_name.lower() == "mobile": + config.model["_target_"] = "fedper.implemented_models.mobile_model.MobileNet" + else: + raise NotImplementedError(f"Model {config.model.name} not implemented") + return config + + +def set_num_classes(config: DictConfig) -> DictConfig: + """Set the number of classes based on the dataset name in the config file.""" + # Set the number of classes + if config.dataset.name.lower() == "cifar10": + config.model.num_classes = 10 + elif config.dataset.name.lower() == "flickr": + config.model.num_classes = 5 + # additionally for flickr + config.batch_size = 4 + config.num_clients = 30 + config.clients_per_round = 30 + else: + raise NotImplementedError(f"Dataset {config.dataset.name} not implemented") + return config + + +def set_server_target(config: DictConfig) -> DictConfig: + """Set the server target based on the algorithm in the config file.""" + # Set the server target + if config.algorithm.lower() == "fedper": + config.strategy["_target_"] = "fedper.server.AggregateBodyStrategyPipeline" + elif config.algorithm.lower() == "fedavg": + config.strategy["_target_"] = "fedper.server.DefaultStrategyPipeline" + else: + raise NotImplementedError(f"Algorithm {config.algorithm} not implemented") + return config + + +def set_client_state_save_path() -> str: + """Set the client state save path.""" + client_state_save_path = time.strftime("%Y-%m-%d") + client_state_sub_path = time.strftime("%H-%M-%S") + client_state_save_path = ( + 
f"./client_states/{client_state_save_path}/{client_state_sub_path}" + ) + if not os.path.exists(client_state_save_path): + os.makedirs(client_state_save_path) + return client_state_save_path + + +def get_client_fn( + config: DictConfig, client_state_save_path: str = "" +) -> Callable[[str], Union[FedPerClient, BaseClient]]: + """Get client function.""" + # Get algorithm + algorithm = config.algorithm.lower() + # Get client fn + if algorithm == "fedper": + client_fn = get_client_fn_simulation( + config=config, + client_state_save_path=client_state_save_path, + ) + elif algorithm == "fedavg": + client_fn = get_client_fn_simulation( + config=config, + ) + else: + raise NotImplementedError + return client_fn + + +def get_create_model_fn( + config: DictConfig, +) -> tuple[ + Callable[[], Union[type[MobileNet], type[ResNet]]], + Union[type[MobileNetModelSplit], type[ResNetModelSplit]], +]: + """Get create model function.""" + device = config.server_device + split: Union[ + Type[MobileNetModelSplit], Type[ResNetModelSplit] + ] = MobileNetModelSplit + if config.model_name.lower() == "mobile": + + def create_model() -> Union[Type[MobileNet], Type[ResNet]]: + """Create initial MobileNet-v1 model.""" + return MobileNet( + num_head_layers=config.model.num_head_layers, + num_classes=config.model.num_classes, + ).to(device) + + elif config.model_name.lower() == "resnet": + split = ResNetModelSplit + + def create_model() -> Union[Type[MobileNet], Type[ResNet]]: + """Create initial ResNet model.""" + return ResNet( + num_head_layers=config.model.num_head_layers, + num_classes=config.model.num_classes, + ).to(device) + + else: + raise NotImplementedError("Model not implemented, check name. ") + return create_model, split + + +def plot_metric_from_history( + hist: History, + save_plot_path: Path, + suffix: Optional[str] = "", +) -> None: + """Plot from Flower server History. + + Parameters + ---------- + hist : History + Object containing evaluation for all rounds. + save_plot_path : Path + Folder to save the plot to. + suffix: Optional[str] + Optional string to add at the end of the filename for the plot. + """ + metric_type = "distributed" + metric_dict = ( + hist.metrics_centralized + if metric_type == "centralized" + else hist.metrics_distributed + ) + _, values = zip(*metric_dict["accuracy"]) + + # let's extract decentralized loss (main metric reported in FedProx paper) + rounds_loss, values_loss = zip(*hist.losses_distributed) + + _, axs = plt.subplots(nrows=2, ncols=1, sharex="row") + axs[0].plot(np.asarray(rounds_loss), np.asarray(values_loss)) + axs[1].plot(np.asarray(rounds_loss), np.asarray(values)) + + axs[0].set_ylabel("Loss") + axs[1].set_ylabel("Accuracy") + + axs[0].grid() + axs[1].grid() + # plt.title(f"{metric_type.capitalize()} Validation - MNIST") + plt.xlabel("Rounds") + # plt.legend(loc="lower right") + + plt.savefig(Path(save_plot_path) / Path(f"{metric_type}_metrics{suffix}.png")) + plt.close() + + +def save_results_as_pickle( + history: History, + file_path: Union[str, Path], + default_filename: Optional[str] = "results.pkl", +) -> None: + """Save results from simulation to pickle. + + Parameters + ---------- + history: History + History returned by start_simulation. + file_path: Union[str, Path] + Path to file to create and store both history and extra_results. + If path is a directory, the default_filename will be used. + path doesn't exist, it will be created. If file exists, a + randomly generated suffix will be added to the file name. 
This + is done to avoid overwritting results. + extra_results : Optional[Dict] + A dictionary containing additional results you would like + to be saved to disk. Default: {} (an empty dictionary) + default_filename: Optional[str] + File used by default if file_path points to a directory instead + to a file. Default: "results.pkl" + """ + path = Path(file_path) + + # ensure path exists + path.mkdir(exist_ok=True, parents=True) + + def _add_random_suffix(path_: Path): + """Add a random suffix to the file name.""" + print(f"File `{path_}` exists! ") + suffix = token_hex(4) + print(f"New results to be saved with suffix: {suffix}") + return path_.parent / (path_.stem + "_" + suffix + ".pkl") + + def _complete_path_with_default_name(path_: Path): + """Append the default file name to the path.""" + print("Using default filename") + if default_filename is None: + return path_ + return path_ / default_filename + + if path.is_dir(): + path = _complete_path_with_default_name(path) + + if path.is_file(): + path = _add_random_suffix(path) + + print(f"Results will be saved into: {path}") + # data = {"history": history, **extra_results} + data = {"history": history} + # save results to pickle + with open(str(path), "wb") as handle: + pickle.dump(data, handle, protocol=pickle.HIGHEST_PROTOCOL) diff --git a/baselines/fedper/pyproject.toml b/baselines/fedper/pyproject.toml new file mode 100644 index 000000000000..efcdf25eface --- /dev/null +++ b/baselines/fedper/pyproject.toml @@ -0,0 +1,143 @@ +[build-system] +requires = ["poetry-core>=1.4.0"] +build-backend = "poetry.masonry.api" + +[tool.poetry] +name = "fedper" # <----- Ensure it matches the name of your baseline directory containing all the source code +version = "1.0.0" +description = "Federated Learning with Personalization Layers" +license = "Apache-2.0" +authors = ["The Flower Authors ", "William Lindskog "] +readme = "README.md" +homepage = "https://flower.dev" +repository = "https://github.com/adap/flower" +documentation = "https://flower.dev" +classifiers = [ + "Development Status :: 3 - Alpha", + "Intended Audience :: Developers", + "Intended Audience :: Science/Research", + "License :: OSI Approved :: Apache Software License", + "Operating System :: MacOS :: MacOS X", + "Operating System :: POSIX :: Linux", + "Programming Language :: Python", + "Programming Language :: Python :: 3", + "Programming Language :: Python :: 3 :: Only", + "Programming Language :: Python :: 3.8", + "Programming Language :: Python :: 3.9", + "Programming Language :: Python :: 3.10", + "Programming Language :: Python :: 3.11", + "Programming Language :: Python :: Implementation :: CPython", + "Topic :: Scientific/Engineering", + "Topic :: Scientific/Engineering :: Artificial Intelligence", + "Topic :: Scientific/Engineering :: Mathematics", + "Topic :: Software Development", + "Topic :: Software Development :: Libraries", + "Topic :: Software Development :: Libraries :: Python Modules", + "Typing :: Typed", +] + +[tool.poetry.dependencies] +python = ">=3.10.0, <3.11.0" # don't change this +flwr = {extras = ["simulation"], version = "1.5.0" } +hydra-core = "1.3.2" # don't change this +pandas = "^2.0.3" +matplotlib = "^3.7.2" +tqdm = "^4.66.1" +torch = { url = "https://download.pytorch.org/whl/cu117/torch-2.0.1%2Bcu117-cp310-cp310-linux_x86_64.whl"} +torchvision = { url = "https://download.pytorch.org/whl/cu117/torchvision-0.15.2%2Bcu117-cp310-cp310-linux_x86_64.whl"} + + +[tool.poetry.dev-dependencies] +isort = "==5.11.5" +black = "==23.1.0" +docformatter = "==1.5.1" 
+mypy = "==1.4.1" +pylint = "==2.8.2" +flake8 = "==3.9.2" +pytest = "==6.2.4" +pytest-watch = "==4.2.0" +ruff = "==0.0.272" +types-requests = "==2.27.7" + +[tool.isort] +line_length = 88 +indent = " " +multi_line_output = 3 +include_trailing_comma = true +force_grid_wrap = 0 +use_parentheses = true + +[tool.black] +line-length = 88 +target-version = ["py38", "py39", "py310", "py311"] + +[tool.pytest.ini_options] +minversion = "6.2" +addopts = "-qq" +testpaths = [ + "flwr_baselines", +] + +[tool.mypy] +ignore_missing_imports = true +strict = false +plugins = "numpy.typing.mypy_plugin" + +[tool.pylint."MESSAGES CONTROL"] +disable = "bad-continuation,duplicate-code,too-few-public-methods,useless-import-alias" +good-names = "i,j,k,_,x,y,X,Y" +signature-mutators="hydra.main.main" + +[tool.pylint."TYPECHECK"] +generated-members="numpy.*, torch.*, tensorflow.*" + +[[tool.mypy.overrides]] +module = [ + "importlib.metadata.*", + "importlib_metadata.*", +] +follow_imports = "skip" +follow_imports_for_stubs = true +disallow_untyped_calls = false + +[[tool.mypy.overrides]] +module = "torch.*" +follow_imports = "skip" +follow_imports_for_stubs = true + +[tool.docformatter] +wrap-summaries = 88 +wrap-descriptions = 88 + +[tool.ruff] +target-version = "py38" +line-length = 88 +select = ["D", "E", "F", "W", "B", "ISC", "C4"] +fixable = ["D", "E", "F", "W", "B", "ISC", "C4"] +ignore = ["B024", "B027"] +exclude = [ + ".bzr", + ".direnv", + ".eggs", + ".git", + ".hg", + ".mypy_cache", + ".nox", + ".pants.d", + ".pytype", + ".ruff_cache", + ".svn", + ".tox", + ".venv", + "__pypackages__", + "_build", + "buck-out", + "build", + "dist", + "node_modules", + "venv", + "proto", +] + +[tool.ruff.pydocstyle] +convention = "numpy" \ No newline at end of file diff --git a/baselines/fedwav2vec2/.gitignore b/baselines/fedwav2vec2/.gitignore new file mode 100644 index 000000000000..df43bf9803df --- /dev/null +++ b/baselines/fedwav2vec2/.gitignore @@ -0,0 +1,2 @@ +outputs/ +data/ \ No newline at end of file diff --git a/baselines/fedwav2vec2/LICENSE b/baselines/fedwav2vec2/LICENSE new file mode 100644 index 000000000000..d64569567334 --- /dev/null +++ b/baselines/fedwav2vec2/LICENSE @@ -0,0 +1,202 @@ + + Apache License + Version 2.0, January 2004 + http://www.apache.org/licenses/ + + TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION + + 1. Definitions. + + "License" shall mean the terms and conditions for use, reproduction, + and distribution as defined by Sections 1 through 9 of this document. + + "Licensor" shall mean the copyright owner or entity authorized by + the copyright owner that is granting the License. + + "Legal Entity" shall mean the union of the acting entity and all + other entities that control, are controlled by, or are under common + control with that entity. For the purposes of this definition, + "control" means (i) the power, direct or indirect, to cause the + direction or management of such entity, whether by contract or + otherwise, or (ii) ownership of fifty percent (50%) or more of the + outstanding shares, or (iii) beneficial ownership of such entity. + + "You" (or "Your") shall mean an individual or Legal Entity + exercising permissions granted by this License. + + "Source" form shall mean the preferred form for making modifications, + including but not limited to software source code, documentation + source, and configuration files. 
+ + "Object" form shall mean any form resulting from mechanical + transformation or translation of a Source form, including but + not limited to compiled object code, generated documentation, + and conversions to other media types. + + "Work" shall mean the work of authorship, whether in Source or + Object form, made available under the License, as indicated by a + copyright notice that is included in or attached to the work + (an example is provided in the Appendix below). + + "Derivative Works" shall mean any work, whether in Source or Object + form, that is based on (or derived from) the Work and for which the + editorial revisions, annotations, elaborations, or other modifications + represent, as a whole, an original work of authorship. For the purposes + of this License, Derivative Works shall not include works that remain + separable from, or merely link (or bind by name) to the interfaces of, + the Work and Derivative Works thereof. + + "Contribution" shall mean any work of authorship, including + the original version of the Work and any modifications or additions + to that Work or Derivative Works thereof, that is intentionally + submitted to Licensor for inclusion in the Work by the copyright owner + or by an individual or Legal Entity authorized to submit on behalf of + the copyright owner. For the purposes of this definition, "submitted" + means any form of electronic, verbal, or written communication sent + to the Licensor or its representatives, including but not limited to + communication on electronic mailing lists, source code control systems, + and issue tracking systems that are managed by, or on behalf of, the + Licensor for the purpose of discussing and improving the Work, but + excluding communication that is conspicuously marked or otherwise + designated in writing by the copyright owner as "Not a Contribution." + + "Contributor" shall mean Licensor and any individual or Legal Entity + on behalf of whom a Contribution has been received by Licensor and + subsequently incorporated within the Work. + + 2. Grant of Copyright License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + copyright license to reproduce, prepare Derivative Works of, + publicly display, publicly perform, sublicense, and distribute the + Work and such Derivative Works in Source or Object form. + + 3. Grant of Patent License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + (except as stated in this section) patent license to make, have made, + use, offer to sell, sell, import, and otherwise transfer the Work, + where such license applies only to those patent claims licensable + by such Contributor that are necessarily infringed by their + Contribution(s) alone or by combination of their Contribution(s) + with the Work to which such Contribution(s) was submitted. If You + institute patent litigation against any entity (including a + cross-claim or counterclaim in a lawsuit) alleging that the Work + or a Contribution incorporated within the Work constitutes direct + or contributory patent infringement, then any patent licenses + granted to You under this License for that Work shall terminate + as of the date such litigation is filed. + + 4. Redistribution. 
You may reproduce and distribute copies of the + Work or Derivative Works thereof in any medium, with or without + modifications, and in Source or Object form, provided that You + meet the following conditions: + + (a) You must give any other recipients of the Work or + Derivative Works a copy of this License; and + + (b) You must cause any modified files to carry prominent notices + stating that You changed the files; and + + (c) You must retain, in the Source form of any Derivative Works + that You distribute, all copyright, patent, trademark, and + attribution notices from the Source form of the Work, + excluding those notices that do not pertain to any part of + the Derivative Works; and + + (d) If the Work includes a "NOTICE" text file as part of its + distribution, then any Derivative Works that You distribute must + include a readable copy of the attribution notices contained + within such NOTICE file, excluding those notices that do not + pertain to any part of the Derivative Works, in at least one + of the following places: within a NOTICE text file distributed + as part of the Derivative Works; within the Source form or + documentation, if provided along with the Derivative Works; or, + within a display generated by the Derivative Works, if and + wherever such third-party notices normally appear. The contents + of the NOTICE file are for informational purposes only and + do not modify the License. You may add Your own attribution + notices within Derivative Works that You distribute, alongside + or as an addendum to the NOTICE text from the Work, provided + that such additional attribution notices cannot be construed + as modifying the License. + + You may add Your own copyright statement to Your modifications and + may provide additional or different license terms and conditions + for use, reproduction, or distribution of Your modifications, or + for any such Derivative Works as a whole, provided Your use, + reproduction, and distribution of the Work otherwise complies with + the conditions stated in this License. + + 5. Submission of Contributions. Unless You explicitly state otherwise, + any Contribution intentionally submitted for inclusion in the Work + by You to the Licensor shall be under the terms and conditions of + this License, without any additional terms or conditions. + Notwithstanding the above, nothing herein shall supersede or modify + the terms of any separate license agreement you may have executed + with Licensor regarding such Contributions. + + 6. Trademarks. This License does not grant permission to use the trade + names, trademarks, service marks, or product names of the Licensor, + except as required for reasonable and customary use in describing the + origin of the Work and reproducing the content of the NOTICE file. + + 7. Disclaimer of Warranty. Unless required by applicable law or + agreed to in writing, Licensor provides the Work (and each + Contributor provides its Contributions) on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or + implied, including, without limitation, any warranties or conditions + of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A + PARTICULAR PURPOSE. You are solely responsible for determining the + appropriateness of using or redistributing the Work and assume any + risks associated with Your exercise of permissions under this License. + + 8. Limitation of Liability. 
In no event and under no legal theory, + whether in tort (including negligence), contract, or otherwise, + unless required by applicable law (such as deliberate and grossly + negligent acts) or agreed to in writing, shall any Contributor be + liable to You for damages, including any direct, indirect, special, + incidental, or consequential damages of any character arising as a + result of this License or out of the use or inability to use the + Work (including but not limited to damages for loss of goodwill, + work stoppage, computer failure or malfunction, or any and all + other commercial damages or losses), even if such Contributor + has been advised of the possibility of such damages. + + 9. Accepting Warranty or Additional Liability. While redistributing + the Work or Derivative Works thereof, You may choose to offer, + and charge a fee for, acceptance of support, warranty, indemnity, + or other liability obligations and/or rights consistent with this + License. However, in accepting such obligations, You may act only + on Your own behalf and on Your sole responsibility, not on behalf + of any other Contributor, and only if You agree to indemnify, + defend, and hold each Contributor harmless for any liability + incurred by, or claims asserted against, such Contributor by reason + of your accepting any such warranty or additional liability. + + END OF TERMS AND CONDITIONS + + APPENDIX: How to apply the Apache License to your work. + + To apply the Apache License to your work, attach the following + boilerplate notice, with the fields enclosed by brackets "[]" + replaced with your own identifying information. (Don't include + the brackets!) The text should be enclosed in the appropriate + comment syntax for the file format. We also recommend that a + file or class name and description of purpose be included on the + same "printed page" as the copyright notice for easier + identification within third-party archives. + + Copyright [yyyy] [name of copyright owner] + + Licensed under the Apache License, Version 2.0 (the "License"); + you may not use this file except in compliance with the License. + You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + + Unless required by applicable law or agreed to in writing, software + distributed under the License is distributed on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + See the License for the specific language governing permissions and + limitations under the License. diff --git a/baselines/fedwav2vec2/README.md b/baselines/fedwav2vec2/README.md new file mode 100644 index 000000000000..0b41c6172976 --- /dev/null +++ b/baselines/fedwav2vec2/README.md @@ -0,0 +1,131 @@ +--- +title: Federated Learning for ASR based on Wav2vec2.0 +url: https://ieeexplore.ieee.org/document/10096426 +labels: [speech, asr, cross-device] +dataset: [TED-LIUM 3] +--- + +# Federated Learning for ASR Based on wav2vec 2.0 + +> Note: If you use this baseline in your work, please remember to cite the original authors of the paper as well as the Flower paper. + +**Paper:** [ieeexplore.ieee.org/document/10096426](https://ieeexplore.ieee.org/document/10096426) + +**Authors:** Tuan Nguyen, Salima Mdhaffar, Natalia Tomashenko, Jean-François Bonastre, Yannick Estève + +**Abstract:** This paper presents a study on the use of federated learning to train an ASR model based on a wav2vec 2.0 model pre-trained by self supervision. 
Carried out on the well-known TED-LIUM 3 dataset, our experiments show that such a model can obtain, with no use of a language model, a word error rate of 10.92% on the official TEDLIUM 3 test set, without sharing any data from the different users. We also analyse the ASR performance for speakers depending to their participation to the federated learning. Since federated learning was first introduced for privacy purposes, we also measure its ability to protect speaker identity. To do that, we exploit an approach to analyze information contained in exchanged models based on a neural network footprint on an indicator dataset. This analysis is made layer-wise and shows which layers in an exchanged wav2vec 2.0 based model bring the speaker identity information.
+
+
+## About this baseline
+
+**What’s implemented:** Figure 1 in the paper. This baseline exclusively offers the self-supervised learning (SSL) approach depicted in Figure 1, due to its superior performance. If you wish to implement the non-SSL methods yourself, you can use the recipe and pre-trained model provided by SpeechBrain, available at this link: [Speechbrain Recipe for Non-SSL](https://github.com/speechbrain/speechbrain/tree/develop/recipes/CommonVoice/ASR/seq2seq).
+
+**Datasets:** TED-LIUM 3 dataset. It requires a 54GB download. Once extracted, it is ~60GB. You can read more about this dataset in the [TED-LIUM 3](https://arxiv.org/abs/1805.04699) paper. A more concise description of this dataset can be found on the [OpenSLR](https://www.openslr.org/51/) site.
+
+**Hardware Setup:** Training `wav2vec2.0` is fairly memory-intensive, so you'd need at least a 24GB GPU. With the current settings, each client requires ~15GB of VRAM, which suggests you could run the experiment on a 16GB GPU, but not if the global model evaluation stage also has to fit on the same GPU. On a single RTX 3090Ti (24GB VRAM), each round takes between 20 and 40 minutes (depending on which clients are sampled; some clients have more data than others).
+
+**Contributors:** [Tuan Nguyen](https://www.linkedin.com/in/manh-tuan-nguyen-595898203)
+
+## Experimental Setup
+
+**Task:** Automatic Speech Recognition (ASR)
+
+**Model:** Wav2vec2.0-large [from Huggingface](https://huggingface.co/facebook/wav2vec2-large-lv60) totalling 317M parameters. Read more in the [wav2vec2.0 paper](https://arxiv.org/abs/2006.11477).
+
+**Dataset:** In this paper, we divided the training dataset of TED-LIUM 3 into 1943 clients, each represented by one speaker from TED-LIUM 3. The clients are ordered by CID, with `client_0` having the largest amount of speech hours and `client_1943` the smallest. Each client's data is divided into training, development, and test sets with an 80-10-10 ratio. For clients with more than 10 minutes of speech, we extract 5 minutes from their training set for analysis purposes; this portion is not used during training or in any other part of this baseline. For clients with less than 10 minutes, all of the speaker's data forms the client's local dataset. The full structure breakdown is below:
+```bash
+├── data
+│   ├── client_{cid}
+│   │   ├── ted_train.csv
+│   │   ├── ted_dev.csv
+│   │   ├── ted_test.csv
+│   │   ├── ted_train_full5.csv {analysis set: only the 5 minutes extracted from ted_train.csv}
+│   │   ├── ted_train_wo5.csv {training file for clients with more than 10 minutes}
+│   ├── server
+│   │   ├── ted_train.csv {all TED-LIUM 3 train set}
+│   │   ├── ted_dev.csv {all TED-LIUM 3 valid set}
+│   │   ├── ted_test.csv {all TED-LIUM 3 test set}
+
+```
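+
+As a minimal illustration of how this layout can be consumed (the helper below is not part of the baseline; its name and the use of `pandas` are assumptions made purely for this sketch), one client's partition could be loaded like so:
+
+```python
+from pathlib import Path
+
+import pandas as pd
+
+
+def load_client_partition(data_dir: str, cid: int) -> dict:
+    """Load the train/dev/test CSVs for one client.
+
+    Clients with more than 10 minutes of speech ship a `ted_train_wo5.csv`
+    (their training split minus the 5-minute analysis portion); for all
+    other clients, `ted_train.csv` is the training file.
+    """
+    client_dir = Path(data_dir) / f"client_{cid}"
+    train_csv = client_dir / "ted_train_wo5.csv"
+    if not train_csv.exists():
+        train_csv = client_dir / "ted_train.csv"
+    return {
+        "train": pd.read_csv(train_csv),
+        "dev": pd.read_csv(client_dir / "ted_dev.csv"),
+        "test": pd.read_csv(client_dir / "ted_test.csv"),
+    }
+```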
+For more details, please refer to the relevant section in the paper.
+
+**Training Hyperparameters:**
+
+| Hyperparameter | Default Value | Description |
+| ------- | ----- | ------- |
+| `pre_train_model_path` | `null` | Path to a pre-trained model or checkpoint. The best checkpoint can be found [here](https://github.com/tuanct1997/Federated-Learning-ASR-based-on-wav2vec-2.0/tree/main/material/pre-trained) |
+| `save_checkpoint` | `null` | Path to the folder where the server model will be saved at each round |
+| `label_path` | `docs/pretrained_wav2vec2` | Character label mapping shared by every client to ensure consistency during the training phase |
+| `sb_config` | `fedwav2vec2/conf/sb_config/w2v2.yaml` | SpeechBrain config file for the model architecture. Please refer to [SpeechBrain](https://github.com/speechbrain/speechbrain) for more information |
+| `rounds` | `100` | Number of federated learning (FL) rounds |
+| `local_epochs` | `20` | Number of local training epochs on the client side |
+| `total_clients` | `1943` | Size of the client pool, with a maximum of 1943 clients |
+| `server_cid` | `19999` | ID of the server, used to distinguish it from client IDs |
+| `server_device` | `cuda` | You can choose between `cpu` and `cuda` for centralised evaluation, but `cuda` is recommended |
+| `parallel_backend` | `false` | Multi-GPU training. Only active if you have more than one GPU per client |
+| `strategy.min_fit_client` | `20` | Number of clients involved per round. Default is 20, as indicated in the paper |
+| `strategy.fraction_fit` | `0.01` | Fraction of the client pool to involve during training |
+| `strategy.weight_strategy` | `num` | Weighting used when averaging client updates. Can be `num`, `loss`, or `wer` (see the sketch below) |
+| `client_resources.num_cpus` | `8` | Number of CPUs per client. Having more than 8 is recommended |
+| `client_resources.num_gpus` | `1` | Number of GPUs per client. At least one GPU with more than 24GB of VRAM is recommended |
+
+By default, long audio sequences (>10s) are excluded from training. This is done to keep the VRAM usage low enough to train a client on a 16GB GPU. This hyperparameter is defined in the `sb_config` file under the `avoid_if_longer_than` tag.
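+
+The `strategy.weight_strategy` option only changes the per-client weights used when the server averages model updates. The sketch below is a generic weighted average shown purely for illustration; it is not the baseline's implementation, and the exact way `loss` or `wer` values are converted into weights is not shown here:
+
+```python
+from typing import List, Tuple
+
+import numpy as np
+
+NDArrays = List[np.ndarray]
+
+
+def weighted_average(updates: List[Tuple[NDArrays, float]]) -> NDArrays:
+    """Average per-layer parameters; each client's update is paired with a weight.
+
+    With `weight_strategy=num` the weight is the client's number of training
+    examples (plain FedAvg); `loss` and `wer` would instead derive the weight
+    from the client's reported loss or word error rate.
+    """
+    total = sum(weight for _, weight in updates)
+    num_layers = len(updates[0][0])
+    return [
+        sum(params[layer] * (weight / total) for params, weight in updates)
+        for layer in range(num_layers)
+    ]
+```
+
+For example, with two clients holding 100 and 300 training examples, `num` weighting scales their updates by 0.25 and 0.75 before summing.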
+
+## Environment Setup
+
+Once you have installed `pyenv` and `poetry`, run the commands below to set up your Python environment:
+
+```bash
+# Set a recent version of Python for your environment
+pyenv local 3.10.6
+poetry env use 3.10.6
+
+# Install your environment
+poetry install
+
+# Activate your environment
+poetry shell
+```
+
+When you run this baseline for the first time, you first need to download the data-to-client mapping files as well as the `TED-LIUM 3` dataset:
+
+```bash
+# Create a directory with the same name you'll use for `data_dir` in your config (see conf/base.yaml)
+mkdir data
+
+# Clone the client mapping (its content will be moved to your data dir)
+git clone https://github.com/tuanct1997/Federated-Learning-ASR-based-on-wav2vec-2.0.git _temp && mv _temp/data/* data/ && rm -rf _temp
+
+# Download the dataset, extract it, and prepare the dataset partitions
+# This might take a while depending on your internet connection
+python -m fedwav2vec2.dataset_preparation
+```
+
+
+## Running the Experiments
+
+```bash
+# Run with default arguments (one client per GPU)
+python -m fedwav2vec2.main
+
+# If you have a large GPU (32GB+) you might want to fit two clients per GPU
+python -m fedwav2vec2.main client_resources.num_gpus=0.5
+
+# The global model can be saved at the end of each round if you specify a checkpoint path
+python -m fedwav2vec2.main save_checkpoint= # if the directory doesn't exist, it will be created
+
+# You can then use it as the starting point for your global model like so:
+python -m fedwav2vec2.main pre_train_model_path=/last_checkpoint.pt
+```
+
+When running the experiment, a structure of directories `/