From 69bbcbb1efbb364cb7c4152f365fc882418187d3 Mon Sep 17 00:00:00 2001
From: James Lamb
Date: Thu, 18 Feb 2021 23:46:43 -0600
Subject: [PATCH] add anchors for old links

---
 docs/Advanced-Topics.rst         | 2 ++
 docs/Features.rst                | 2 ++
 docs/Parallel-Learning-Guide.rst | 6 ++++++
 3 files changed, 10 insertions(+)

diff --git a/docs/Advanced-Topics.rst b/docs/Advanced-Topics.rst
index 680977eef81f..9137dc3123fc 100644
--- a/docs/Advanced-Topics.rst
+++ b/docs/Advanced-Topics.rst
@@ -59,6 +59,8 @@ Parameters Tuning
 
 - Refer to `Parameters Tuning <./Parameters-Tuning.rst>`__.
 
+.. _Parallel Learning:
+
 Distributed Learning
 --------------------
 
diff --git a/docs/Features.rst b/docs/Features.rst
index 08b7bb2f20df..6566eb628af2 100644
--- a/docs/Features.rst
+++ b/docs/Features.rst
@@ -72,6 +72,8 @@ It only needs to use some collective communication algorithms, like "All reduce"
 LightGBM implements state-of-art algorithms\ `[9] <#references>`__.
 These collective communication algorithms can provide much better performance than point-to-point communication.
 
+.. _Optimization in Parallel Learning:
+
 Optimization in Distributed Learning
 ------------------------------------
 
diff --git a/docs/Parallel-Learning-Guide.rst b/docs/Parallel-Learning-Guide.rst
index 6e3fcb60ef6c..acc42eee43e3 100644
--- a/docs/Parallel-Learning-Guide.rst
+++ b/docs/Parallel-Learning-Guide.rst
@@ -1,6 +1,8 @@
 Distributed Learning Guide
 ==========================
 
+.. _Parallel Learning Guide:
+
 This guide describes distributed learning in LightGBM. Distributed learning allows the use of multiple machines to produce a single model.
 Follow the `Quick Start <./Quick-Start.rst>`__ to know how to use LightGBM first.
 
@@ -65,6 +67,8 @@ Kubeflow users can also use the `Kubeflow XGBoost Operator`_ for machine learnin
 
 Kubeflow integrations for LightGBM are not maintained by LightGBM's maintainers.
 
+.. _Build Parallel Version:
+
 LightGBM CLI
 ^^^^^^^^^^^^
 
@@ -99,6 +103,8 @@ Then write these IP in one file (assume ``mlist.txt``) like following:
 
 **Note**: For Windows users, need to start "smpd" to start MPI service. More details can be found `here`_.
 
+.. _Run Parallel Learning:
+
 Run Distributed Learning
 ''''''''''''''''''''''''
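For reference, a minimal sketch (not part of the patch) of how one of these explicit targets is expected to keep old links working, assuming default Sphinx/docutils ID generation where labels are lowercased and spaces become hyphens; the closing cross-reference line is a hypothetical usage example, not text from the LightGBM docs:

    .. _Parallel Learning:

    Distributed Learning
    --------------------

    .. with the explicit target above, the renamed section stays reachable at the
       old HTML anchor (Advanced-Topics.html#parallel-learning), and an old-style
       cross-reference such as the following should still resolve:

    See :ref:`Parallel Learning` for details.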