Merge branch 'main' into fedmeta
jafermarq authored Oct 16, 2023
2 parents c3fdd01 + 3ec5ee8 · commit 5e984e2
Showing 2 changed files with 16 additions and 14 deletions.
28 changes: 14 additions & 14 deletions baselines/fedmlb/README.md
@@ -2,18 +2,18 @@
title: Multi-Level Branched Regularization for Federated Learning
url: https://proceedings.mlr.press/v162/kim22a.html
labels: [data heterogeneity, knowledge distillation, image classification]
-dataset: [cifar100, tiny-imagenet]
+dataset: [CIFAR-100, Tiny-ImageNet]
---

-# *_FedMLB_*
+# FedMLB: Multi-Level Branched Regularization for Federated Learning

> Note: If you use this baseline in your work, please remember to cite the original authors of the paper as well as the Flower paper.
-****Paper:**** [proceedings.mlr.press/v162/kim22a.html](https://proceedings.mlr.press/v162/kim22a.html)
+**Paper:** [proceedings.mlr.press/v162/kim22a.html](https://proceedings.mlr.press/v162/kim22a.html)

-****Authors:**** Jinkyu Kim, Geeho Kim, Bohyung Han
+**Authors:** Jinkyu Kim, Geeho Kim, Bohyung Han

-****Abstract:**** *_A critical challenge of federated learning is data
+**Abstract:** *_A critical challenge of federated learning is data
heterogeneity and imbalance across clients, which
leads to inconsistency between local networks and
unstable convergence of global models. To alleviate
@@ -37,40 +37,40 @@ The source code is available in our project page._*

## About this baseline

-****What’s implemented:**** The code in this directory reproduces the results for FedMLB, FedAvg, and FedAvg+KD.
+**What’s implemented:** The code in this directory reproduces the results for FedMLB, FedAvg, and FedAvg+KD.
The reproduced results use the CIFAR-100 dataset or the Tiny-ImageNet dataset. Four settings are available for both
datasets (the Dirichlet-based label partitioning they rely on is sketched after this list):
1. Moderate-scale with Dir(0.3), 100 clients, 5% participation, balanced dataset.
2. Large-scale experiments with Dir(0.3), 500 clients, 2% participation rate, balanced dataset.
3. Moderate-scale with Dir(0.6), 100 clients, 5% participation rate, balanced dataset.
4. Large-scale experiments with Dir(0.6), 500 clients, 2% participation rate, balanced dataset.
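
For illustration, the Dir(0.3) and Dir(0.6) settings above refer to Dirichlet-based label partitioning across clients. Below is a minimal sketch of such a partition, assuming NumPy; the function name and signature are illustrative rather than the baseline's actual API, and the size-balancing used in the settings above is omitted.

```python
import numpy as np

def dirichlet_partition(labels, num_clients, alpha, seed=0):
    """Split sample indices across clients using a Dirichlet(alpha) prior.

    Lower alpha (e.g. 0.3) gives more skewed per-client label distributions
    than higher alpha (e.g. 0.6). Unlike the balanced settings above, this
    simple sketch does not equalize the number of examples per client.
    """
    rng = np.random.default_rng(seed)
    num_classes = int(labels.max()) + 1
    client_indices = [[] for _ in range(num_clients)]
    for c in range(num_classes):
        class_idx = np.where(labels == c)[0]
        rng.shuffle(class_idx)
        # Draw per-client proportions for this class and split accordingly.
        proportions = rng.dirichlet(alpha * np.ones(num_clients))
        cut_points = (np.cumsum(proportions)[:-1] * len(class_idx)).astype(int)
        for cid, idx in enumerate(np.split(class_idx, cut_points)):
            client_indices[cid].extend(idx.tolist())
    return client_indices

# Example: 100 clients over CIFAR-100-sized labels with Dir(0.3).
labels = np.random.randint(0, 100, size=50_000)
partitions = dirichlet_partition(labels, num_clients=100, alpha=0.3)
```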

-****Datasets:**** CIFAR-100, Tiny-ImageNet.
+**Datasets:** CIFAR-100, Tiny-ImageNet.

-****Hardware Setup:**** The code in this repository has been tested on a Linux machine with 64GB RAM.
+**Hardware Setup:** The code in this repository has been tested on a Linux machine with 64GB RAM.
Be aware that in the default config the memory usage can exceed 10GB.

-****Contributors:**** Alessio Mora (University of Bologna, PhD, [email protected]).
+**Contributors:** Alessio Mora (University of Bologna, PhD, [email protected]).

## Experimental Setup

-****Task:**** Image classification
+**Task:** Image classification

-****Model:**** ResNet-18.
+**Model:** ResNet-18.

-****Dataset:**** Four settings are available for CIFAR-100:
+**Dataset:** Four settings are available for CIFAR-100:
1. Moderate-scale with Dir(0.3), 100 clients, 5% participation, balanced dataset (500 examples per client).
2. Large-scale experiments with Dir(0.3), 500 clients, 2% participation rate, balanced dataset (100 examples per client).
3. Moderate-scale with Dir(0.6), 100 clients, 5% participation rate, balanced dataset (500 examples per client).
4. Large-scale experiments with Dir(0.6), 500 clients, 2% participation rate, balanced dataset (100 examples per client).

-****Dataset:**** Four settings are available for Tiny-ImageNet:
+**Dataset:** Four settings are available for Tiny-ImageNet:
1. Moderate-scale with Dir(0.3), 100 clients, 5% participation, balanced dataset (1000 examples per client).
2. Large-scale experiments with Dir(0.3), 500 clients, 2% participation rate, balanced dataset (200 examples per client).
3. Moderate-scale with Dir(0.6), 100 clients, 5% participation rate, balanced dataset (1000 examples per client).
4. Large-scale experiments with Dir(0.6), 500 clients, 2% participation rate, balanced dataset (200 examples per client).
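
For context on how FedMLB's local objective differs from plain FedAvg training, here is a minimal PyTorch-style sketch based on the paper's description: hybrid branches pass intermediate local features through the corresponding frozen blocks of the global model, and their outputs contribute cross-entropy and knowledge-distillation terms. The names, the coefficients `lambda1`/`lambda2`, and `temperature` are illustrative assumptions, not the baseline's exact code.

```python
import torch
import torch.nn.functional as F

def fedmlb_loss(local_logits, hybrid_logits_list, targets,
                lambda1=1.0, lambda2=1.0, temperature=1.0):
    """FedMLB-style local objective: local CE + hybrid-branch CE + KD terms.

    Each entry of `hybrid_logits_list` is assumed to come from feeding an
    intermediate local feature map through the frozen global blocks.
    """
    ce_local = F.cross_entropy(local_logits, targets)
    ce_hybrid = torch.stack(
        [F.cross_entropy(h, targets) for h in hybrid_logits_list]
    ).mean()
    # KD term: pull the local prediction toward each (detached) hybrid output.
    log_p_local = F.log_softmax(local_logits / temperature, dim=1)
    kd = torch.stack(
        [
            F.kl_div(
                log_p_local,
                F.softmax(h.detach() / temperature, dim=1),
                reduction="batchmean",
            )
            * temperature**2
            for h in hybrid_logits_list
        ]
    ).mean()
    return ce_local + lambda1 * ce_hybrid + lambda2 * kd

# Example with dummy tensors: batch of 8, 100 classes, 2 hybrid branches.
local = torch.randn(8, 100)
hybrids = [torch.randn(8, 100), torch.randn(8, 100)]
y = torch.randint(0, 100, (8,))
loss = fedmlb_loss(local, hybrids, y)
```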

-****Training Hyperparameters:****
+**Training Hyperparameters:**

| Hyperparameter | Description | Default Value |
| ------------- | ------------- | ------------- |
2 changes: 2 additions & 0 deletions doc/source/ref-changelog.md
@@ -22,6 +22,8 @@

- Baselines Docs ([#2290](https://github.com/adap/flower/pull/2290), [#2400](https://github.com/adap/flower/pull/2400))

+- FedMLB ([#2340](https://github.com/adap/flower/pull/2340), [#2507](https://github.com/adap/flower/pull/2507))

- TAMUNA ([#2254](https://github.com/adap/flower/pull/2254), [#2508](https://github.com/adap/flower/pull/2508))

- FedMeta [#2438](https://github.com/adap/flower/pull/2438)
