Commit
Merge branch 'main' into licensecheck
Showing 13 changed files with 465 additions and 52 deletions.
@@ -7,23 +7,31 @@ The goal of Flower Baselines is to reproduce experiments from popular papers to

Before you start to work on a new baseline or experiment, please check the `Flower Issues <https://github.com/adap/flower/issues>`_ or `Flower Pull Requests <https://github.com/adap/flower/pulls>`_ to see if someone else is already working on it. If you are planning to work on a new baseline or experiment, please open a new issue with a short description of the corresponding paper and the experiment you want to contribute.

TL;DR: Add a new Flower Baseline
--------------------------------

.. warning::
  We are in the process of changing how Flower Baselines are structured and updating the instructions for new contributors. Bear with us until we have finalised this transition. For now, follow the steps described below and reach out to us if something is not clear. We look forward to welcoming your baseline into Flower!
Requirements
------------

Contributing a new baseline is straightforward. You only have to make sure that your federated learning experiments run with Flower and replicate the results of a paper. Flower baselines need to make use of:

* `Poetry <https://python-poetry.org/docs/>`_ to manage the Python environment.
* `Hydra <https://hydra.cc/>`_ to manage the configuration files for your experiments.

You can find more information about how to set up Poetry on your machine in the ``EXTENDED_README.md`` that is generated when you prepare your baseline.
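
For illustration only, here is a minimal sketch of what a Hydra-managed entry point for a baseline could look like. The module name ``fedawesome``, the ``conf/base.yaml`` file, and the config fields are hypothetical placeholders rather than part of the official template:

.. code-block:: python

  """Hypothetical fedawesome/main.py: a Hydra-managed entry point."""

  import hydra
  from omegaconf import DictConfig, OmegaConf


  @hydra.main(config_path="conf", config_name="base", version_base=None)
  def main(cfg: DictConfig) -> None:
      # Hydra loads conf/base.yaml (plus any command-line overrides) into `cfg`.
      print(OmegaConf.to_yaml(cfg))
      # ... build the Flower strategy and clients, then start the experiment ...


  if __name__ == "__main__":
      main()
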
Add a new Flower Baseline
-------------------------

.. note::
  For a detailed set of steps to follow, check the `Baselines README on GitHub <https://github.com/adap/flower/tree/main/baselines>`_.
  The instructions below are a more verbose version of what's present in the `Baselines README on GitHub <https://github.com/adap/flower/tree/main/baselines>`_.

Let's say you want to contribute the code of your most recent Federated Learning publication, *FedAwesome*. There are only three steps necessary to create a new *FedAwesome* Flower Baseline:
#. **Get the Flower source code on your machine**
   #. Fork the Flower codebase: go to the `Flower GitHub repo <https://github.com/adap/flower>`_ and fork the code (click the *Fork* button in the top-right corner and follow the instructions)
   #. Clone the (forked) Flower source code: :code:`git clone git@github.com:[your_github_username]/flower.git`
   #. Open the code in your favorite editor.
#. **Create a directory for your baseline and add the FedAwesome code**
#. **Use the provided script to create your baseline directory**
   #. Navigate to the baselines directory and run :code:`./dev/create-baseline.sh fedawesome`
   #. A new directory in :code:`baselines/fedawesome` is created.
   #. Follow the instructions in :code:`EXTENDED_README.md` and :code:`README.md` in :code:`baselines/fedawesome/`.
   #. Follow the instructions in :code:`EXTENDED_README.md` and :code:`README.md` in your baseline directory.
#. **Open a pull request**
   #. Stage your changes: :code:`git add .`
   #. Commit & push: :code:`git commit -m "Create new FedAwesome baseline" ; git push`

@@ -36,18 +44,20 @@ Further reading:

* `GitHub docs: Creating a pull request <https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/proposing-changes-to-your-work-with-pull-requests/creating-a-pull-request>`_
* `GitHub docs: Creating a pull request from a fork <https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/proposing-changes-to-your-work-with-pull-requests/creating-a-pull-request-from-a-fork>`_

Requirements
------------

Contributing a new baseline is straightforward. You only have to make sure that your federated learning experiments run with Flower and replicate the results of a paper.

The only system requirement for creating a baseline is having `Poetry <https://python-poetry.org/docs/>`_ installed; it is our package manager of choice.

We are adopting `Hydra <https://hydra.cc/>`_ as the default mechanism to manage everything related to config files and the parameterisation of Flower baselines.

Usability
---------

Flower is known and loved for its usability. Therefore, make sure that your baseline or experiment can be executed with a single command such as :code:`conda run -m <your-baseline>.main` or :code:`python main.py` (when sourced into your environment). We provide you with a `template-baseline <https://github.com/adap/flower/tree/main/baselines/baseline_template>`_ to use as guidance when contributing your baseline. Having all baselines follow a homogeneous structure helps users to try out many baselines without the overhead of having to understand each individual codebase. Similarly, by using Hydra throughout, users will immediately know how to parameterise your experiments directly from the command line.
Flower is known and loved for its usability. Therefore, make sure that your baseline or experiment can be executed with a single command such as:

.. code-block:: bash

  poetry run python -m <your-baseline>.main
  # or, once sourced into your environment
  python -m <your-baseline>.main

We provide you with a `template-baseline <https://github.com/adap/flower/tree/main/baselines/baseline_template>`_ to use as guidance when contributing your baseline. Having all baselines follow a homogeneous structure helps users to try out many baselines without the overhead of having to understand each individual codebase. Similarly, by using Hydra throughout, users will immediately know how to parameterise your experiments directly from the command line.
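
To make this concrete, a user could pass overrides directly on the command line, for example :code:`python -m fedawesome.main num_rounds=100 dataset.batch_size=32`, and the same overrides can be composed programmatically (e.g. in tests) with Hydra's compose API. The sketch below is illustrative only; the parameter names, the ``fedawesome`` module, and the ``conf/base.yaml`` file are hypothetical:

.. code-block:: python

  """Hypothetical sketch: compose the same config a CLI run would see."""

  from hydra import compose, initialize

  with initialize(version_base=None, config_path="conf"):
      # Equivalent to: python -m fedawesome.main num_rounds=100 dataset.batch_size=32
      cfg = compose(config_name="base", overrides=["num_rounds=100", "dataset.batch_size=32"])
      print(cfg.num_rounds)
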
We look forward to your contribution!
@@ -2,18 +2,18 @@
title: Multi-Level Branched Regularization for Federated Learning
url: https://proceedings.mlr.press/v162/kim22a.html
labels: [data heterogeneity, knowledge distillation, image classification]
dataset: [cifar100, tiny-imagenet]
dataset: [CIFAR-100, Tiny-ImageNet]
---

# *_FedMLB_*
# FedMLB: Multi-Level Branched Regularization for Federated Learning

> Note: If you use this baseline in your work, please remember to cite the original authors of the paper as well as the Flower paper.
****Paper:**** [proceedings.mlr.press/v162/kim22a.html](https://proceedings.mlr.press/v162/kim22a.html)
**Paper:** [proceedings.mlr.press/v162/kim22a.html](https://proceedings.mlr.press/v162/kim22a.html)

****Authors:**** Jinkyu Kim, Geeho Kim, Bohyung Han
**Authors:** Jinkyu Kim, Geeho Kim, Bohyung Han

****Abstract:**** *_A critical challenge of federated learning is data
**Abstract:** *_A critical challenge of federated learning is data
heterogeneity and imbalance across clients, which
leads to inconsistency between local networks and
unstable convergence of global models. To alleviate

@@ -37,40 +37,40 @@ The source code is available in our project page._*

## About this baseline

****What’s implemented:**** The code in this directory reproduces the results for FedMLB, FedAvg, and FedAvg+KD.
**What’s implemented:** The code in this directory reproduces the results for FedMLB, FedAvg, and FedAvg+KD.
The reproduced results use the CIFAR-100 dataset or the Tiny-ImageNet dataset. Four settings are available for both
datasets:
1. Moderate-scale with Dir(0.3), 100 clients, 5% participation, balanced dataset.
2. Large-scale experiments with Dir(0.3), 500 clients, 2% participation rate, balanced dataset.
3. Moderate-scale with Dir(0.6), 100 clients, 5% participation rate, balanced dataset.
4. Large-scale experiments with Dir(0.6), 500 clients, 2% participation rate, balanced dataset.

****Datasets:**** CIFAR-100, Tiny-ImageNet.
**Datasets:** CIFAR-100, Tiny-ImageNet.

****Hardware Setup:**** The code in this repository has been tested on a Linux machine with 64GB RAM.
**Hardware Setup:** The code in this repository has been tested on a Linux machine with 64GB RAM.
Be aware that in the default config the memory usage can exceed 10GB.

****Contributors:**** Alessio Mora (University of Bologna, PhD, [email protected]).
**Contributors:** Alessio Mora (University of Bologna, PhD, [email protected]).

## Experimental Setup

****Task:**** Image classification
**Task:** Image classification

****Model:**** ResNet-18.
**Model:** ResNet-18.

****Dataset:**** Four settings are available for CIFAR-100,
**Dataset:** Four settings are available for CIFAR-100,
1. Moderate-scale with Dir(0.3), 100 clients, 5% participation, balanced dataset (500 examples per client).
2. Large-scale experiments with Dir(0.3), 500 clients, 2% participation rate, balanced dataset (100 examples per client).
3. Moderate-scale with Dir(0.6), 100 clients, 5% participation rate, balanced dataset (500 examples per client).
4. Large-scale experiments with Dir(0.6), 500 clients, 2% participation rate, balanced dataset (100 examples per client).

****Dataset:**** Four settings are available for Tiny-Imagenet,
**Dataset:** Four settings are available for Tiny-ImageNet (an illustrative partitioning sketch follows this list),
1. Moderate-scale with Dir(0.3), 100 clients, 5% participation, balanced dataset (1000 examples per client).
2. Large-scale experiments with Dir(0.3), 500 clients, 2% participation rate, balanced dataset (200 examples per client).
3. Moderate-scale with Dir(0.6), 100 clients, 5% participation rate, balanced dataset (1000 examples per client).
4. Large-scale experiments with Dir(0.6), 500 clients, 2% participation rate, balanced dataset (200 examples per client).
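
The following is a rough, illustrative sketch of label-based Dirichlet partitioning in the Dir(α) style described by the settings above; it is not the partitioning code shipped with this baseline, and it does not enforce an exactly balanced number of examples per client:

```python
# Illustrative Dir(alpha) label partitioning (not this baseline's actual code).
import numpy as np


def dirichlet_partition(labels: np.ndarray, num_clients: int, alpha: float, seed: int = 0):
    """Assign sample indices to clients using a per-class Dirichlet prior."""
    rng = np.random.default_rng(seed)
    client_indices = [[] for _ in range(num_clients)]
    for cls in np.unique(labels):
        idx = np.flatnonzero(labels == cls)
        rng.shuffle(idx)
        # Draw this class's proportions across clients from Dir(alpha).
        proportions = rng.dirichlet(alpha * np.ones(num_clients))
        cuts = (np.cumsum(proportions)[:-1] * len(idx)).astype(int)
        for client_id, shard in enumerate(np.split(idx, cuts)):
            client_indices[client_id].extend(shard.tolist())
    return client_indices


# Example: 100 clients over CIFAR-100-like labels with Dir(0.3).
labels = np.random.default_rng(1).integers(0, 100, size=50_000)
partitions = dirichlet_partition(labels, num_clients=100, alpha=0.3)
```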

****Training Hyperparameters:****
**Training Hyperparameters:**

| Hyperparameter | Description | Default Value |
| ------------- | ------------- | ------------- |
@@ -0,0 +1,20 @@
# Copyright 2023 Flower Labs GmbH. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Common components in Flower Datasets."""


from .typing import Resplitter

__all__ = ["Resplitter"]
@@ -0,0 +1,22 @@
# Copyright 2023 Flower Labs GmbH. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Flower Datasets type definitions."""


from typing import Callable

from datasets import DatasetDict

Resplitter = Callable[[DatasetDict], DatasetDict]
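
To illustrate the type alias above: any callable that takes a `DatasetDict` and returns a `DatasetDict` satisfies `Resplitter`. The function below is a hypothetical example, not part of the Flower Datasets API; it simply merges the `train` and `test` splits into a single `train` split:

```python
# Hypothetical Resplitter: merge "train" and "test" into one "train" split.
from datasets import DatasetDict, concatenate_datasets


def merge_train_test(dataset_dict: DatasetDict) -> DatasetDict:
    """A callable of type Resplitter: DatasetDict -> DatasetDict."""
    merged = concatenate_datasets([dataset_dict["train"], dataset_dict["test"]])
    return DatasetDict({"train": merged})
```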