Update README.md (#2771)
jafermarq authored Jan 4, 2024
1 parent 68e6cf8 commit 3e8e60e
Showing 1 changed file with 1 addition and 1 deletion.
2 changes: 1 addition & 1 deletion baselines/hfedxgboost/README.md
@@ -11,7 +11,7 @@ dataset: [a9a, cod-rna, ijcnn1, space_ga, cpusmall, YearPredictionMSD]
**Paper:** [arxiv.org/abs/2304.07537](https://arxiv.org/abs/2304.07537)

-**Authors:** Chenyang Ma, Xinchi Qiu, Daniel J. Beutel, Nicholas D. Laneearly_stop_patience_rounds: 100
+**Authors:** Chenyang Ma, Xinchi Qiu, Daniel J. Beutel, Nicholas D. Lane

**Abstract:** The privacy-sensitive nature of decentralized datasets and the robustness of eXtreme Gradient Boosting (XGBoost) on tabular data raise the need to train XGBoost in the context of federated learning (FL). Existing works on federated XGBoost in the horizontal setting rely on the sharing of gradients, which induces per-node communication frequency and serious privacy concerns. To alleviate these problems, we develop an innovative framework for horizontal federated XGBoost which does not depend on the sharing of gradients and simultaneously boosts privacy and communication efficiency by making the learning rates of the aggregated tree ensembles learnable. We conduct extensive evaluations on various classification and regression datasets, showing that our approach achieves performance comparable to the state-of-the-art method and effectively improves communication efficiency by lowering both communication rounds and communication overhead by factors ranging from 25x to 700x.
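The core idea described in the abstract — the server aggregates whole client ensembles and then fits one learnable learning rate per ensemble, rather than exchanging per-node gradients — can be sketched in plain NumPy. This is a minimal sketch under stated assumptions: the client "ensembles" below are mocked noisy linear predictors standing in for locally trained XGBoost models, and the names (`make_client_ensemble`, `etas`) are illustrative, not from the baseline's code.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data (a stand-in for e.g. the space_ga dataset).
w_true = np.array([1.0, -2.0, 0.5, 0.0, 3.0])
X = rng.normal(size=(200, 5))
y = X @ w_true + 0.1 * rng.normal(size=200)

def make_client_ensemble(seed):
    # Mock a client's locally trained ensemble as a noisy linear model;
    # in the real baseline this would be an XGBoost ensemble.
    w = w_true + 0.3 * np.random.default_rng(seed).normal(size=5)
    return lambda X: X @ w

clients = [make_client_ensemble(s) for s in range(4)]

# Server side: collect each aggregated ensemble's predictions and fit
# one learnable learning rate per ensemble by least squares. No
# gradients are shared, only the ensembles' outputs.
P = np.stack([f(X) for f in clients], axis=1)   # (n_samples, n_clients)
etas, *_ = np.linalg.lstsq(P, y, rcond=None)    # learnable rates

mse = np.mean((y - P @ etas) ** 2)
print(etas.round(3), round(float(mse), 4))
```

Because uniform averaging is one particular choice of rates, the fitted rates can only match or improve on it — which is the intuition behind making the rates learnable instead of fixed.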
