Update index.rst
jan-janssen authored Nov 10, 2023
1 parent 771a863 commit bee8c91
Showing 1 changed file with 1 addition and 1 deletion.
docs/source/index.rst: 1 addition & 1 deletion
@@ -82,7 +82,7 @@ uses a different backend, with the :code:`pympipool.flux.PyFluxExecutor` being t

* :code:`pympipool.mpi.PyMpiExecutor`: The simplest executor of the three uses `mpi4py <https://mpi4py.readthedocs.io>`_ as a backend. This simplifies the installation on all operating systems, including Windows, but at the same time it limits the up-scaling to a single compute node and to serial or MPI-parallel Python functions. There is no support for thread-based parallelism or GPU assignment. This interface is primarily used for testing and development, or as a fall-back solution; it is not recommended for production use.
* :code:`pympipool.slurm.PySlurmExecutor`: The `SLURM workload manager <https://www.schedmd.com>`_ is commonly used on HPC systems to schedule and distribute tasks. :code:`pympipool` provides a Python interface for scheduling the execution of Python functions as SLURM job steps, which are typically created using the :code:`srun` command. This executor supports serial Python functions, thread-based parallelism, MPI-based parallelism and the assignment of GPUs to individual Python functions. When the `SLURM workload manager <https://www.schedmd.com>`_ is installed on your HPC cluster, this interface can be a reasonable choice; still, depending on the SLURM configuration, it can be limited in terms of fine-grained scheduling or responsiveness when working with hundreds of compute nodes in an individual allocation.
- * :code:`pympipool.flux.PyFluxExecutor`: The `flux framework <https://flux-framework.org>`_ is the preferred backend for :code:`pympipool`. Just like the :code:`pympipool.slurm.PySlurmExecutor`, it supports serial Python functions, thread-based parallelism, MPI-based parallelism and the assignment of GPUs to individual Python functions. Still, the advantages of using `flux <https://flux-framework.org>`_ as a backend are the easy installation, the faster allocation of resources, as the resources are managed within the allocation and no central database is used, and the superior level of fine-grained resource assignment, which is typically not available on HPC resource schedulers.
+ * :code:`pympipool.flux.PyFluxExecutor`: The `flux framework <https://flux-framework.org>`_ is the preferred backend for :code:`pympipool`. Just like the :code:`pympipool.slurm.PySlurmExecutor`, it supports serial Python functions, thread-based parallelism, MPI-based parallelism and the assignment of GPUs to individual Python functions. Still, the advantages of using the `flux framework <https://flux-framework.org>`_ as a backend are the easy installation, the faster allocation of resources, as the resources are managed within the allocation and no central database is used, and the superior level of fine-grained resource assignment, which is typically not available on HPC resource schedulers. A minimal usage sketch follows this list.
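
As a rough illustration of how these executors are used, the sketch below submits a Python function through the flux backend. The executors follow the :code:`concurrent.futures.Executor` interface, so :code:`submit()` returns a future object; the constructor arguments shown here (:code:`max_workers`, :code:`cores_per_worker`) are assumptions for the sake of the example and may differ from the exact :code:`pympipool` signature.

.. code-block:: python

    # Minimal sketch, not taken from the pympipool documentation: the constructor
    # arguments (max_workers, cores_per_worker) are assumed here and may differ
    # from the API of the installed pympipool version.
    from pympipool.flux import PyFluxExecutor


    def calc(i):
        return i ** 2


    # One worker with two cores; submit() returns a concurrent.futures.Future
    with PyFluxExecutor(max_workers=1, cores_per_worker=2) as exe:
        future = exe.submit(calc, 3)
        print(future.result())  # prints 9

The same pattern applies to :code:`pympipool.mpi.PyMpiExecutor` and :code:`pympipool.slurm.PySlurmExecutor`; only the backend class and the resource-related arguments change.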

Each of these backends consists of two parts: a broker and a worker. When a new task is submitted by the user, it is
received by the broker, and the broker identifies the first available worker. The worker then executes the task and returns
