From 3ec5ee836bd16b024066b912cfd901e9dd198d46 Mon Sep 17 00:00:00 2001
From: Javier
Date: Mon, 16 Oct 2023 10:21:43 +0100
Subject: [PATCH] Update FedMLB baseline README (#2507)

---
 baselines/fedmlb/README.md  | 28 ++++++++++++++--------------
 doc/source/ref-changelog.md |  2 ++
 2 files changed, 16 insertions(+), 14 deletions(-)

diff --git a/baselines/fedmlb/README.md b/baselines/fedmlb/README.md
index f2816637221b..47cc69b48e09 100644
--- a/baselines/fedmlb/README.md
+++ b/baselines/fedmlb/README.md
@@ -2,18 +2,18 @@
 title: Multi-Level Branched Regularization for Federated Learning
 url: https://proceedings.mlr.press/v162/kim22a.html
 labels: [data heterogeneity, knowledge distillation, image classification]
-dataset: [cifar100, tiny-imagenet]
+dataset: [CIFAR-100, Tiny-ImageNet]
 ---
 
-# *_FedMLB_*
+# FedMLB: Multi-Level Branched Regularization for Federated Learning
 
 > Note: If you use this baseline in your work, please remember to cite the original authors of the paper as well as the Flower paper.
 
-****Paper:**** [proceedings.mlr.press/v162/kim22a.html](https://proceedings.mlr.press/v162/kim22a.html)
+**Paper:** [proceedings.mlr.press/v162/kim22a.html](https://proceedings.mlr.press/v162/kim22a.html)
 
-****Authors:**** Jinkyu Kim, Geeho Kim, Bohyung Han
+**Authors:** Jinkyu Kim, Geeho Kim, Bohyung Han
 
-****Abstract:**** *_A critical challenge of federated learning is data
+**Abstract:** *_A critical challenge of federated learning is data
 heterogeneity and imbalance across clients, which leads to inconsistency
 between local networks and unstable convergence of global models. To
 alleviate
@@ -37,7 +37,7 @@ The source code is available in our project page._*
 
 ## About this baseline
 
-****What’s implemented:**** The code in this directory reproduces the results for FedMLB, FedAvg, and FedAvg+KD.
+**What’s implemented:** The code in this directory reproduces the results for FedMLB, FedAvg, and FedAvg+KD.
 The reproduced results use the CIFAR-100 dataset or the TinyImagenet dataset. Four settings are available for both
 the datasets,
 1. Moderate-scale with Dir(0.3), 100 clients, 5% participation, balanced dataset.
@@ -45,32 +45,32 @@ the datasets,
 3. Moderate-scale with Dir(0.6), 100 clients, 5% participation rate, balanced dataset.
 4. Large-scale experiments with Dir(0.6), 500 clients, 2% participation rate, balanced dataset.
 
-****Datasets:**** CIFAR-100, Tiny-ImageNet.
+**Datasets:** CIFAR-100, Tiny-ImageNet.
 
-****Hardware Setup:**** The code in this repository has been tested on a Linux machine with 64GB RAM.
+**Hardware Setup:** The code in this repository has been tested on a Linux machine with 64GB RAM.
 Be aware that in the default config the memory usage can exceed 10GB.
 
-****Contributors:**** Alessio Mora (University of Bologna, PhD, alessio.mora@unibo.it).
+**Contributors:** Alessio Mora (University of Bologna, PhD, alessio.mora@unibo.it).
 
 ## Experimental Setup
 
-****Task:**** Image classification
+**Task:** Image classification
 
-****Model:**** ResNet-18.
+**Model:** ResNet-18.
 
-****Dataset:**** Four settings are available for CIFAR-100,
+**Dataset:** Four settings are available for CIFAR-100,
 1. Moderate-scale with Dir(0.3), 100 clients, 5% participation, balanced dataset (500 examples per client).
 2. Large-scale experiments with Dir(0.3), 500 clients, 2% participation rate, balanced dataset (100 examples per client).
 3. Moderate-scale with Dir(0.6), 100 clients, 5% participation rate, balanced dataset (500 examples per client).
 4. Large-scale experiments with Dir(0.6), 500 clients, 2% participation rate, balanced dataset (100 examples per client).
 
-****Dataset:**** Four settings are available for Tiny-Imagenet,
+**Dataset:** Four settings are available for Tiny-Imagenet,
 1. Moderate-scale with Dir(0.3), 100 clients, 5% participation, balanced dataset (1000 examples per client).
 2. Large-scale experiments with Dir(0.3), 500 clients, 2% participation rate, balanced dataset (200 examples per client).
 3. Moderate-scale with Dir(0.6), 100 clients, 5% participation rate, balanced dataset (1000 examples per client).
 4. Large-scale experiments with Dir(0.6), 500 clients, 2% participation rate, balanced dataset (200 examples per client).
 
-****Training Hyperparameters:****
+**Training Hyperparameters:**
 
 | Hyperparameter | Description | Default Value |
 | ------------- | ------------- | ------------- |
diff --git a/doc/source/ref-changelog.md b/doc/source/ref-changelog.md
index b2b339924f28..d0a29336acf1 100644
--- a/doc/source/ref-changelog.md
+++ b/doc/source/ref-changelog.md
@@ -22,6 +22,8 @@
 
 - Baselines Docs ([#2290](https://github.com/adap/flower/pull/2290), [#2400](https://github.com/adap/flower/pull/2400))
 
+- FedMLB ([#2340](https://github.com/adap/flower/pull/2340), [#2507](https://github.com/adap/flower/pull/2507))
+
 - TAMUNA ([#2254](https://github.com/adap/flower/pull/2254), [#2508](https://github.com/adap/flower/pull/2508))
 
 - **Update Flower Examples** ([#2384](https://github.com/adap/flower/pull/2384)), ([#2425](https://github.com/adap/flower/pull/2425))
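The README settings in the patch above all describe clients whose local datasets are drawn with a Dirichlet prior over labels (Dir(0.3) or Dir(0.6)). As a minimal NumPy sketch of how such a partition can be generated, not code from the FedMLB baseline itself (`dirichlet_partition` is a hypothetical helper):

```python
import numpy as np

def dirichlet_partition(labels, num_clients, alpha, seed=0):
    """Split sample indices across clients using a per-class Dirichlet prior.

    Smaller alpha (e.g. 0.3) yields more heterogeneous client label
    distributions; larger alpha (e.g. 0.6) is closer to uniform.
    """
    rng = np.random.default_rng(seed)
    labels = np.asarray(labels)
    client_indices = [[] for _ in range(num_clients)]
    for cls in np.unique(labels):
        cls_idx = rng.permutation(np.flatnonzero(labels == cls))
        # Draw per-client proportions for this class, then cut the class
        # indices into contiguous chunks of those sizes.
        proportions = rng.dirichlet(alpha * np.ones(num_clients))
        cuts = (np.cumsum(proportions)[:-1] * len(cls_idx)).astype(int)
        for client, chunk in enumerate(np.split(cls_idx, cuts)):
            client_indices[client].extend(chunk.tolist())
    return client_indices

# Toy example: 1000 samples over 10 classes split across 4 clients.
labels = np.repeat(np.arange(10), 100)
parts = dirichlet_partition(labels, num_clients=4, alpha=0.3)
```

Note that this sketch partitions by label proportions only; the balanced settings in the README additionally fix the number of examples per client.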