JobTypes and Types layered relationships. Could it be more flexible? #74
Comments
@victorsndvg it is not clear to me how an MPI type could exist standalone. I mean: how would it be used with an SRUN/SBATCH job, without a container?
Taking as a base point that the workload manager and the software are mandatory: what I mean is not the MPI library, but the MPI process manager. The orchestrator can take advantage of containers to provide portable workflows, but it can also use locally installed software from the HPC modules system. A sequential job does not need to spawn processes (it does not need a process manager), while a parallel MPI job needs to be called by a process manager (e.g. `mpirun`).
In addition, another important point for the future is the support of other container technologies (e.g. docker).
How would MPI flags be attached to an sbatch script?
I am not sure I understand the question. I suppose you mean the blueprint example in the first post. That example should be translated accordingly (e.g. inside an sbatch script).
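A minimal sketch of what such a translation could look like, assuming a hypothetical image `my_app.sif` that ships its own MPI and a hypothetical binary `my_app` (the process manager is taken from inside the container, so it does not have to match an MPI installation on the host):

```bash
#!/bin/bash
#SBATCH --job-name=mpi_container_job   # hypothetical job name
#SBATCH --nodes=1
#SBATCH --ntasks=4

# The container image provides its own MPI, so mpirun is executed inside the
# container rather than taken from the host modules system.
singularity exec my_app.sif mpirun -np "$SLURM_NTASKS" my_app
```

On a single node this works out of the box; multi-node runs usually need the host and container MPI to cooperate, which is exactly the matching problem discussed below.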
This particular example avoids the dependence of the MPI inside the containers on the MPI installed on the host. Instead of adding generic MPI flags at the workload-manager level, the MPI call stays inside the container layer.
JobTypes can currently describe how the applications are called depending on the underlying (software) layers used for running them.
The involved layers we have identified are (based on our experience):
- Workload manager (e.g. `Slurm`)
- MPI process manager (e.g. `Openmpi`)
- Container runtime (e.g. `Singularity`)
- Software call (e.g. `echo hola`)
Currently, workload manager types like `Slurm` or `Torque` describe how a batch script is submitted. In addition, a `SingularityJob` type describes how to call a Singularity container inside the workload manager. In particular, a `SingularityJob` type always uses `mpirun` as a process manager, but I think that is not always mandatory (e.g. a sequential Singularity job). Finally, a particular software call is also expressed through a `command` job option.

In my opinion, the top (workload manager) and bottom ("software call") layers of the hierarchy are mandatory. I think the other layers could be optional and interrelated in order to express (in a more flexible way) the requirements and execution mode of every application. As a suggestion, a `contained_in` relationship could express this kind of hierarchical structure.

As a (simplified) example, the following options could be acceptable (a concrete sketch follows the list):
- `srun command`
- `srun [mpirun] command`
- `srun [singularity] command`
- `srun [mpirun] [singularity] command`
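Purely as an illustration of how those four layerings could materialise inside an sbatch allocation (again with the hypothetical `my_app.sif` image and `my_app` binary; the exact launcher and flags depend on the site setup):

```bash
# workload manager only: sequential call
srun ./my_app

# workload manager + MPI process manager
mpirun ./my_app

# workload manager + container runtime (sequential job in a container)
srun singularity exec my_app.sif my_app

# workload manager + MPI process manager + container runtime
mpirun singularity exec my_app.sif my_app
```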
This flexibility could be expressed in a blueprint through such `contained_in` relationships between the job types.
Even if we thought about other container solutions (docker), or single-node parallel jobs, we can interchange the MPI and container `contained_in` relationships, avoiding the MPI-vendor-and-version matching inside and outside the container in some cases:

`srun [singularity | docker] [mpirun] command`
Something to think about.