Example for Custom Metrics calculation during Federated Learning #1958
Conversation
Corrects the validation dataset references in the FlowerClient class and min_eval_clients to be in accordance with the review comments.
Makes the example compatible with the sanity check for the return value of the clients' fit method (a tuple of List, int, Dict) introduced in the latest version of Flower.
@danieljanes
Co-authored-by: Daniel J. Beutel <[email protected]>
Hi @gubertoli, thanks for your contribution to this example! I left a few suggested changes and comments.
Co-authored-by: Yan Gao <[email protected]>
Thank you so much, @yan-gao-GY, for the support!
Thanks for the contribution @gubertoli!
Issue
A common question in Flower's Slack questions channel concerns retrieving metrics in addition to those available in the ML frameworks.
Description
This PR provides a clear example of how to retrieve custom and additional metrics, to serve as a baseline for others using Flower for Federated Learning.
Related issues/PRs
N/A
Proposal
A new example showing how to calculate custom metrics during Federated Learning.
Explanation
This simple example demonstrates how to calculate custom metrics over multiple clients, beyond the traditional ones available in ML frameworks. In this case, it demonstrates the use of readily available scikit-learn metrics: accuracy, recall, precision, and F1-score.
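As a sketch of the idea (not the literal code from this PR), the client-side metric calls are standard scikit-learn; the helper name `compute_custom_metrics` and the `average="macro"` choice below are illustrative assumptions:

```python
from sklearn.metrics import (accuracy_score, precision_score,
                             recall_score, f1_score)


def compute_custom_metrics(y_test, y_pred):
    """Compute extra metrics from true labels and predictions.

    Hypothetical helper: in the example these calls live inside the
    client's `evaluate` method in `client.py`. `average="macro"` is
    an assumption here; pick the averaging that fits your task.
    """
    return {
        "accuracy": accuracy_score(y_test, y_pred),
        "precision": precision_score(y_test, y_pred, average="macro"),
        "recall": recall_score(y_test, y_pred, average="macro"),
        "f1": f1_score(y_test, y_pred, average="macro"),
    }


# Toy labels just to show the shape of the returned metrics dict
metrics = compute_custom_metrics([0, 1, 1, 0], [0, 1, 0, 0])
print(metrics["accuracy"])  # prints 0.75
```

Because the metrics are returned as a plain dict of floats, they fit directly into the metrics dictionary that a Flower client returns from `evaluate`.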
Once both the test values (`y_test`) and the predictions (`y_pred`) are available on the client side (`client.py`), other or custom metrics can be calculated.

The main takeaways of this implementation are:
- the use of `output_dict` on the client side, inside the `evaluate` method in `client.py`
- the use of `evaluate_metrics_aggregation_fn` to aggregate the metrics on the server side, as part of the `strategy` in `server.py`
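The server-side aggregation mentioned above can be sketched as follows, assuming the standard shape Flower passes to `evaluate_metrics_aggregation_fn` (a list of `(num_examples, metrics_dict)` tuples per client); the function name `weighted_average` is an illustrative choice, not necessarily the one used in this PR:

```python
def weighted_average(results):
    """Aggregate per-client metrics into example-weighted averages.

    `results` is a list of (num_examples, metrics_dict) tuples, the
    shape Flower hands to `evaluate_metrics_aggregation_fn`. Each
    metric is weighted by the number of evaluation examples the
    client used, so larger clients contribute proportionally more.
    """
    total_examples = sum(num for num, _ in results)
    metric_keys = results[0][1].keys()
    return {
        key: sum(num * m[key] for num, m in results) / total_examples
        for key in metric_keys
    }


# Example: two clients of different sizes reporting accuracy and f1
agg = weighted_average([
    (10, {"accuracy": 0.8, "f1": 0.7}),
    (30, {"accuracy": 0.6, "f1": 0.5}),
])
print(agg["accuracy"])  # prints 0.65
```

On the server, such a function would be passed to the strategy, e.g. `FedAvg(evaluate_metrics_aggregation_fn=weighted_average)`, so the aggregated metrics appear in the server's round results.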
This example is based on the `quickstart_tensorflow` example with CIFAR-10 (source here).

Checklist
(#contributions)

Any other comments?
N/A