This repository contains the source code for **L-DAWA: Layer-wise Divergence Aware Weight Aggregation in Federated Self-Supervised Visual Representation Learning**, accepted at ICCV 2023.
Paper, Supplementary materials
- Yasar Abbas Ur Rehman, Yan Gao, Pedro Porto Buarque de Gusmão, Mina Alibeigi, Jiajun Shen, and Nicholas D. Lane
## Requirements

- Anaconda environment with Python 3.8
- PyTorch Lightning 1.4.19
- Flower 0.17.0
- For a complete list of the required packages, please see the `requirement.txt` file. You can install all the requirements by running `pip install -r requirement.txt`.
## Useful resources

- Federated Learning with PyTorch Lightning and Flower: https://github.com/adap/flower/tree/main/examples/quickstart-pytorch-lightning
- Building a custom strategy from scratch with Flower: https://flower.dev/docs/framework/tutorial-series-build-a-strategy-from-scratch-pytorch.html
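For orientation, the following is a minimal, hypothetical skeleton of how a custom aggregation rule plugs into a Flower strategy. It assumes the pre-1.0 Flower API (`parameters_to_weights` / `weights_to_parameters`, renamed in later releases) and is *not* the strategy implemented in this repository; exact signatures vary across Flower versions.

```python
# Hypothetical skeleton of a custom Flower strategy (pre-1.0 API, e.g. 0.17.x).
# Not this repository's implementation; signature/return conventions changed
# across Flower releases, so treat this as a sketch.
from typing import Dict, List, Optional, Tuple

import flwr as fl
from flwr.common import (
    FitRes,
    Parameters,
    Scalar,
    parameters_to_weights,
    weights_to_parameters,
)
from flwr.server.client_proxy import ClientProxy


class CustomAggregationStrategy(fl.server.strategy.FedAvg):
    """FedAvg subclass whose aggregate_fit is replaced by a custom rule."""

    def aggregate_fit(
        self,
        rnd: int,
        results: List[Tuple[ClientProxy, FitRes]],
        failures: List[BaseException],
    ) -> Tuple[Optional[Parameters], Dict[str, Scalar]]:
        if not results:
            return None, {}
        # Each client update arrives as a list of NumPy arrays, one per layer.
        client_weights = [
            parameters_to_weights(fit_res.parameters) for _, fit_res in results
        ]
        # Plain per-layer averaging as a placeholder; a divergence-aware rule
        # such as L-DAWA would re-weight each layer here (see the sketch in
        # the Results section below).
        aggregated = [sum(layers) / len(layers) for layers in zip(*client_weights)]
        return weights_to_parameters(aggregated), {}
```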
## Running the code

- To run federated pretraining, go to the `FL_Pretraining` folder and run the `pretrain_FL.py` script.
- To run finetuning, run the `finetune_script.py` script.
## Datasets

- CIFAR-10, CIFAR-100, Tiny-ImageNet
## Results

Although our main motivation for designing L-DAWA was to tackle client bias in Cross-Silo FL scenarios in which each client runs a Self-Supervised Learning (SSL) algorithm, L-DAWA works equally well in Cross-Silo FL scenarios with Supervised Learning (SL) algorithms, as shown in the table below (see the supplementary materials for details).
Aggregation Type | E1 | E5 | E10 |
---|---|---|---|
FedAvg | 77.91 | 83.76 | 81.31 |
FedYogi | 77.49 | 72.50 | 74.85 |
FedProx | 80.55 | 74.87 | 72.24 |
L-DAWA | 81.96 | 84.68 | 82.35 |
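For intuition about what divergence-aware aggregation computes, here is a simplified NumPy sketch in which each client's contribution to a layer is scaled by the cosine (angular) similarity between that client's layer and the current global layer. The helper name and the per-layer normalization are illustrative assumptions, not the reference implementation; see the paper for the exact rule.

```python
# Illustrative sketch of layer-wise divergence-aware aggregation (not the
# reference implementation; see the paper for the exact rule).
import numpy as np


def layerwise_divergence_aware_aggregate(global_layers, client_models):
    """global_layers: list of per-layer arrays; client_models: list of such lists."""
    aggregated = []
    for idx, g_layer in enumerate(global_layers):
        g = g_layer.ravel()
        weighted_sum = np.zeros_like(g_layer)
        coeff_total = 0.0
        for layers in client_models:
            c_layer = layers[idx]
            c = c_layer.ravel()
            # Cosine similarity between client and global layer: close to 1 when
            # the client diverged little from the global model at this layer.
            cos = float(g @ c) / (np.linalg.norm(g) * np.linalg.norm(c) + 1e-12)
            weighted_sum += cos * c_layer
            coeff_total += cos
        # Normalize so the per-layer coefficients sum to 1 across clients
        # (an assumption of this sketch).
        aggregated.append(weighted_sum / (coeff_total + 1e-12))
    return aggregated
```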
The results below are obtained by first pretraining SimCLR in Cross-Silo FL settings and then performing linear finetuning.

Method | Architecture | E1 | E5 | E10 |
---|---|---|---|---|
FedAvg | ResNet34 | 54.76 | 69.81 | 74.21 |
FedU | ResNet34 | 52.85 | 67.84 | 72.21 |
L-DAWA | ResNet34 | 61.92 | 73.33 | 77.30 |
FedAvg | ResNet50 | 63.62 | 75.39 | 79.41 |
FedU | ResNet50 | 57.61 | 71.44 | 76.85 |
L-DAWA | ResNet50 | 63.90 | 75.58 | 79.11 |

Method | Architecture | E1 |
---|---|---|
FedAvg | ResNet34 | 11.93 |
FedU | ResNet34 | 11.75 |
L-DAWA | ResNet34 | 18.64 |
FedAvg | ResNet50 | 13.51 |
FedU | ResNet50 | 13.22 |
L-DAWA | ResNet50 | 19.04 |
## Updates

- Added the source code for L-DAWA.
- Added the inventory folder.
- Added the FL_pretraining script.
- Work on this repository is still in progress.
If you encounter any problems, feel free to open an issue on GitHub.
## Citation

```bibtex
@inproceedings{rehman2023dawa,
  title={L-DAWA: Layer-wise Divergence Aware Weight Aggregation in Federated Self-Supervised Visual Representation Learning},
  author={Rehman, Yasar Abbas Ur and Gao, Yan and de Gusmao, Pedro Porto Buarque and Alibeigi, Mina and Shen, Jiajun and Lane, Nicholas D},
  booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision},
  pages={16464--16473},
  year={2023}
}
```