docs(framework) Update how to aggregate evaluation results documentation #4529
Conversation
dstripelis commented Nov 18, 2024
- Add how to create a server function with context.
- Add how to create a server app and pass the server function.
- Add float casting on returned losses.
… part of the server app.
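For context, the documentation page this PR updates builds a custom strategy that overrides aggregate_evaluate. A minimal sketch of that pattern, including the float casts discussed in the review below, could look like the following; the weighted-average code is an assumption based on the Flower docs example, not copied verbatim from this diff.

```python
from typing import Dict, List, Optional, Tuple, Union

from flwr.common import EvaluateRes, Scalar
from flwr.server.client_proxy import ClientProxy
from flwr.server.strategy import FedAvg


class AggregateCustomMetricStrategy(FedAvg):
    def aggregate_evaluate(
        self,
        server_round: int,
        results: List[Tuple[ClientProxy, EvaluateRes]],
        failures: List[Union[Tuple[ClientProxy, EvaluateRes], BaseException]],
    ) -> Tuple[Optional[float], Dict[str, Scalar]]:
        """Aggregate evaluation losses and accuracies using a weighted average."""
        if not results:
            return None, {}

        # Let the parent class (FedAvg) aggregate the evaluation losses
        aggregated_loss, _ = super().aggregate_evaluate(server_round, results, failures)

        # Weigh each client's reported accuracy by its number of evaluation examples
        accuracies = [r.metrics["accuracy"] * r.num_examples for _, r in results]
        examples = [r.num_examples for _, r in results]
        aggregated_accuracy = sum(accuracies) / sum(examples)

        # Return aggregated loss and metrics (i.e., aggregated accuracy), cast to float
        return float(aggregated_loss), {"accuracy": float(aggregated_accuracy)}
```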
Thanks @dstripelis! Only two nits.
@@ -69,11 +69,24 @@ dictionaries:
         )

         # Return aggregated loss and metrics (i.e., aggregated accuracy)
-        return aggregated_loss, {"accuracy": aggregated_accuracy}
+        return float(aggregated_loss), {"accuracy": aggregated_accuracy}
-        return float(aggregated_loss), {"accuracy": aggregated_accuracy}
+        return float(aggregated_loss), {"accuracy": float(aggregated_accuracy)}
Maybe keep it consistent?
Fixed.
    config = ServerConfig(num_rounds=num_rounds)

    # Define strategy
    strategy = AggregateCustomMetricStrategy()
-    strategy = AggregateCustomMetricStrategy()
+    strategy = AggregateCustomMetricStrategy(
+        # (same arguments as FedAvg here)
+    )
Let's retain the comments.
Fixed.
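With both nits addressed, the documented server function and app would look roughly like the sketch below. The "num-server-rounds" run-config key and the use of ServerAppComponents are assumptions based on current Flower conventions rather than this diff, and AggregateCustomMetricStrategy refers to the custom strategy sketched earlier in this conversation.

```python
from flwr.common import Context
from flwr.server import ServerApp, ServerAppComponents, ServerConfig


def server_fn(context: Context) -> ServerAppComponents:
    """Construct the components that make up the ServerApp."""
    # Read the number of rounds from the run config
    # (the "num-server-rounds" key is an assumption here)
    num_rounds = context.run_config["num-server-rounds"]
    config = ServerConfig(num_rounds=num_rounds)

    # Define strategy
    strategy = AggregateCustomMetricStrategy(
        # (same arguments as FedAvg here)
    )
    return ServerAppComponents(strategy=strategy, config=config)


# Create the ServerApp and pass it the server function
app = ServerApp(server_fn=server_fn)
```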
@dstripelis @chongshenng one thought: should we rewrite the page and show how to use the evaluate_metrics_aggregation_fn callback instead? We already aggregate metrics this way in quite a few examples: https://github.com/adap/flower/blob/main/examples/quickstart-pytorch/pytorchexample/server_app.py. I'm not the biggest fan of callbacks, but writing a custom strategy for this might be overkill.
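For reference, the callback approach mentioned here would look roughly like the following sketch, modeled on the linked quickstart-pytorch example; the weighted_average function name and its body are illustrative rather than an exact copy of that file.

```python
from typing import List, Tuple

from flwr.common import Metrics
from flwr.server.strategy import FedAvg


def weighted_average(metrics: List[Tuple[int, Metrics]]) -> Metrics:
    """Aggregate client accuracies, weighted by the number of examples."""
    accuracies = [num_examples * m["accuracy"] for num_examples, m in metrics]
    examples = [num_examples for num_examples, _ in metrics]
    return {"accuracy": sum(accuracies) / sum(examples)}


# Pass the callback to a built-in strategy instead of subclassing it
strategy = FedAvg(evaluate_metrics_aggregation_fn=weighted_average)
```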
@jafermarq thank you for your feedback! To keep some isolation between PRs, in the current PR I only improve the custom strategy class and address @chongshenng's comments. However, following your suggestion, I created a new PR, #4544, which introduces the evaluation callback function in the documentation while keeping the custom strategy class. I think we can review and include that PR in the next release.
@dstripelis @jafermarq I agree. Let's focus on updating the docs for the 1.13 release, then introduce the callback in a subsequent PR.
LGTM!