From 3e8e60e75f299314bb175ca405148523d4ba0eed Mon Sep 17 00:00:00 2001
From: Javier
Date: Thu, 4 Jan 2024 22:43:44 +0000
Subject: [PATCH] Update README.md (#2771)

---
 baselines/hfedxgboost/README.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/baselines/hfedxgboost/README.md b/baselines/hfedxgboost/README.md
index 29702496370b..2f31e2c4c584 100644
--- a/baselines/hfedxgboost/README.md
+++ b/baselines/hfedxgboost/README.md
@@ -11,7 +11,7 @@ dataset: [a9a, cod-rna, ijcnn1, space_ga, cpusmall, YearPredictionMSD]
 
 **Paper:** [arxiv.org/abs/2304.07537](https://arxiv.org/abs/2304.07537)
 
-**Authors:** Chenyang Ma, Xinchi Qiu, Daniel J. Beutel, Nicholas D. Laneearly_stop_patience_rounds: 100
+**Authors:** Chenyang Ma, Xinchi Qiu, Daniel J. Beutel, Nicholas D. Lane
 
 **Abstract:** The privacy-sensitive nature of decentralized datasets and the robustness of eXtreme Gradient Boosting (XGBoost) on tabular data raise the need to train XGBoost in the context of federated learning (FL). Existing works on federated XGBoost in the horizontal setting rely on the sharing of gradients, which induce per-node level communication frequency and serious privacy concerns. To alleviate these problems, we develop an innovative framework for horizontal federated XGBoost which does not depend on the sharing of gradients and simultaneously boosts privacy and communication efficiency by making the learning rates of the aggregated tree ensembles learnable. We conduct extensive evaluations on various classification and regression datasets, showing our approach achieves performance comparable to the state-of-the-art method and effectively improves communication efficiency by lowering both communication rounds and communication overhead by factors ranging from 25x to 700x.