
Commit 69bbcbb
add anchors for old links
jameslamb committed Feb 19, 2021
1 parent e76c0a3 commit 69bbcbb
Showing 3 changed files with 10 additions and 0 deletions.
docs/Advanced-Topics.rst: 2 additions, 0 deletions
@@ -59,6 +59,8 @@ Parameters Tuning

 - Refer to `Parameters Tuning <./Parameters-Tuning.rst>`__.
 
+.. _Parallel Learning:
+
 Distributed Learning
 --------------------
 
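Each hunk in this commit follows the standard reStructuredText pattern for keeping old links alive after a heading rename: an explicit target whose name matches the old heading text. Sphinx normalizes the label to the same HTML id the old heading used to produce (lowercased, with spaces turned into hyphens), so pre-rename URL fragments still resolve. A minimal sketch of the pattern, using the label from the hunk above:

    .. _Parallel Learning:

    Distributed Learning
    --------------------

The built page then carries the old id ``parallel-learning`` alongside the new ``distributed-learning``, so an old deep link such as ``Advanced-Topics.html#parallel-learning`` still lands on this section.
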
docs/Features.rst: 2 additions, 0 deletions
@@ -72,6 +72,8 @@ It only needs to use some collective communication algorithms, like "All reduce"
 LightGBM implements state-of-the-art algorithms\ `[9] <#references>`__.
 These collective communication algorithms can provide much better performance than point-to-point communication.
 
+.. _Optimization in Parallel Learning:
+
 Optimization in Distributed Learning
 ------------------------------------
 
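As an illustration of what this anchor preserves, a cross-reference written against the old heading keeps working; a hypothetical referring line (not part of this diff) might read:

    See `Optimization in Parallel Learning <./Features.rst#optimization-in-parallel-learning>`__.

The fragment ``#optimization-in-parallel-learning`` is exactly the id that the explicit target above regenerates.
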
docs/Parallel-Learning-Guide.rst: 6 additions, 0 deletions
@@ -1,6 +1,8 @@
 Distributed Learning Guide
 ==========================
 
+.. _Parallel Learning Guide:
+
 This guide describes distributed learning in LightGBM. Distributed learning allows the use of multiple machines to produce a single model.
 
 Follow the `Quick Start <./Quick-Start.rst>`__ to learn how to use LightGBM first.
@@ -65,6 +67,8 @@ Kubeflow users can also use the `Kubeflow XGBoost Operator`_ for machine learning
 
 Kubeflow integrations for LightGBM are not maintained by LightGBM's maintainers.
 
+.. _Build Parallel Version:
+
 LightGBM CLI
 ^^^^^^^^^^^^
 
@@ -99,6 +103,8 @@ Then write these IP addresses in one file (assume ``mlist.txt``) like the following:
 **Note**: Windows users need to start "smpd" to start the MPI service. More details can be found `here`_.
 
+.. _Run Parallel Learning:
+
 Run Distributed Learning
 ''''''''''''''''''''''''
 
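The collapsed context above includes the machine-list example itself, so it is not visible in this diff. As a purely illustrative sketch (not the repository's exact text), an MPI machine list of this kind holds one host per line:

    machine1_ip
    machine2_ip

The file is then typically handed to the MPI launcher's machinefile option (e.g. ``mpiexec --machinefile mlist.txt``) when starting a distributed training run.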
