zahidurtalukder/FairHeteroFL


Hardware-Sensitive Fairness in Heterogeneous Federated Learning

This repository is the official implementation of FairHetero.

📋 We propose a novel hardware-sensitive FL method called FairHeteroFL that promotes fairness among heterogeneous federated clients. Our approach offers tunable fairness within a group of devices sharing the same ML architecture as well as across different groups. Our evaluation on the MNIST, FEMNIST, CIFAR10, and SHAKESPEARE datasets reveals that FairHetero reduces the variance of participating clients’ test loss compared to existing state-of-the-art techniques, resulting in increased overall performance.
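The exact objective is given in the paper; as a loose, hypothetical illustration only (not the paper's formulation), a two-level fairness knob can be pictured as loss-based reweighting, where each group's qm controls how much high-loss clients are emphasized within that group, and q controls the emphasis across groups. The function name and structure below are invented for this sketch; q = 0 and qm = 0 recover uniform, FedAvg-style weighting.

```python
import numpy as np

def fair_aggregation_weights(group_losses, q, qms):
    """Toy two-level loss-based reweighting (illustrative only).

    group_losses: list of per-group lists of client test losses
    q:            cross-group fairness exponent
    qms:          per-group fairness exponents (one per group)
    Returns a list of per-group weight arrays summing to 1 overall.
    """
    # Within each group: client weight proportional to loss ** qm,
    # so higher-loss clients get proportionally more weight.
    group_weights = []
    for losses, qm in zip(group_losses, qms):
        w = np.asarray(losses, dtype=float) ** qm
        group_weights.append(w / w.sum())
    # Across groups: group weight proportional to (mean group loss) ** q.
    means = np.array([np.mean(losses) for losses in group_losses])
    gw = means ** q
    gw = gw / gw.sum()
    # Combine into one flat weight per client, summing to 1.
    return [gw[g] * group_weights[g] for g in range(len(group_losses))]
```

With q = 0 and all qms = 0, every group receives equal weight and every client within a group receives equal weight; raising q or a group's qm shifts weight toward worse-performing groups or clients.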

Requirements

To install requirements:

pip install -r requirements.txt

📋 Getting the Datasets Directly

Keep each dataset in a directory named after the dataset.

Training

To train the model(s) in the paper, navigate to the directory and run this command:

MNIST IID train

python train_mnist_iid.py q qm1 qm2 qm3 qm4 qm5

MNIST Non-IID train

python train_mnist_noniid.py q qm1 qm2 qm3 qm4 qm5

MNIST Non-IID Extreme train

python train_mnist_noniid_extreme.py q qm1 qm2 qm3 qm4 qm5

CIFAR10 IID train

python train_cifar_iid.py q qm1 qm2 qm3 qm4 qm5

CIFAR10 Non-IID train

python train_cifar_noniid.py q qm1 qm2 qm3 qm4 qm5

CIFAR10 Non-IID Extreme train

python train_cifar_noniid_extreme.py q qm1 qm2 qm3 qm4 qm5

FEMNIST train

python train_femnist.py q qm1 qm2 qm3 qm4 qm5

SHAKESPEARE train

python train_shakespeare.py q qm1 qm2 qm3 qm4 qm5

📋 This trains the model with the given values of q and the qms. After training, the train and test losses and accuracies are automatically saved in the data folder for later evaluation. You can tune the q and qm values to reach your desired model performance.
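The commands above pass six positional arguments: a global q followed by five per-group qm values. As a hedged sketch (the helper name and exact argument handling in the actual scripts are assumptions), a training script could read them like this:

```python
import sys

def parse_fairness_args(argv):
    """Parse the six positional fairness parameters passed to a
    training script: a global q followed by per-group qm1..qm5.
    Hypothetical helper; the real scripts may parse differently."""
    if len(argv) != 6:
        raise SystemExit("usage: python train_<dataset>.py q qm1 qm2 qm3 qm4 qm5")
    q, *qms = (float(a) for a in argv)
    return q, qms

if __name__ == "__main__":
    q, qms = parse_fairness_args(sys.argv[1:])
    print(f"q={q}, qms={qms}")
```

For example, `python train_mnist_iid.py 1 0 0 0 0 5` would set q = 1.0 with qm5 = 5.0 and the remaining qms at 0.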

Evaluation

To evaluate the groupwise performance, run evaluate.py, located in the data folder of each dataset:

python evaluate.py "file_name"

📋 The file name should include the ".pkl" extension. This produces the groupwise mean and variance of the test loss for the given values of q and the qms.
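The core of such an evaluation can be sketched as follows. This is a minimal, self-contained example, not the repository's evaluate.py: it assumes (hypothetically) that the pickle file maps group names to lists of per-client test losses, so the keys and layout must be adapted to the actual saved data.

```python
import pickle
import statistics

def groupwise_stats(path):
    """Report the mean and (population) variance of test loss per group.

    Assumes the pickle maps group name -> list of per-client test
    losses; this layout is an assumption, not the repo's actual format.
    """
    with open(path, "rb") as f:
        results = pickle.load(f)
    return {
        group: (statistics.mean(losses), statistics.pvariance(losses))
        for group, losses in results.items()
    }
```

A lower per-group variance here is exactly the fairness signal the paper targets: clients within (and across) hardware groups finishing with similar test loss.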

Pre-trained Models

You can download pre-trained models here:

  • MNIST trained on HeteroFL and FairHetero can be found here.
  • CIFAR10 trained on HeteroFL and FairHetero can be found here.
  • FEMNIST trained on HeteroFL and FairHetero can be found here.
  • SHAKESPEARE trained on HeteroFL and FairHetero can be found here.

📋 The pre-trained models are the ones used to generate the main results in the paper. You can also reproduce them using the q and qm parameters reported in the paper.

Results

Our model achieves the following performance:

📋 This is the main result of our paper. It shows that, with proper tuning of q and the qms, we can achieve more balanced performance across clients from all groups with different hardware capabilities.

Contributing

📋 See the LICENSE file.
