@@ -1250,61 +1250,61 @@ Example
lockStop: --lock-stop $VG_NAME
vgCreate: --shared --addtag lvmlockd_even $VG_NAME $DEVICE_LIST
vgRemove: $VG_NAME
+---
+apiVersion: nnf.cray.hpe.com/v1alpha1
+kind: NnfStorageProfile
+metadata:
+ name: lvmlockd_odd
+ namespace: systemstorage
+data:
+ xfsStorage:
+ capacityScalingFactor: "1.0"
+ lustreStorage:
+ capacityScalingFactor: "1.0"
+ gfs2Storage:
+ capacityScalingFactor: "1.0"
+ default: false
+ pinned: true
+ rawStorage:
+ capacityScalingFactor: "1.0"
+ commandlines:
+ pvCreate: $DEVICE
+ pvRemove: $DEVICE
+ sharedVg: true
+ vgChange:
+ lockStart: --lock-start $VG_NAME
+ lockStop: --lock-stop $VG_NAME
+ vgCreate: --shared --addtag lvmlockd_odd $VG_NAME $DEVICE_LIST
+ vgRemove: $VG_NAME
+Note that the NnfStorageProfile resources are marked as default: false and pinned: true. This is required for NnfStorageProfiles that are used for system storage. The commandlines fields for LV commands are left empty so that no LV is created.
apiVersion: nnf.cray.hpe.com/v1alpha1
-kind: NnfStorageProfile
+kind: NnfSystemStorage
metadata:
- name: lvmlockd_odd
+ name: lvmlockd_even
namespace: systemstorage
-data:
- xfsStorage:
- capacityScalingFactor: "1.0"
- lustreStorage:
- capacityScalingFactor: "1.0"
- gfs2Storage:
- capacityScalingFactor: "1.0"
- default: false
- pinned: true
- rawStorage:
- capacityScalingFactor: "1.0"
- commandlines:
- pvCreate: $DEVICE
- pvRemove: $DEVICE
- sharedVg: true
- vgChange:
- lockStart: --lock-start $VG_NAME
- lockStop: --lock-stop $VG_NAME
- vgCreate: --shared --addtag lvmlockd_odd $VG_NAME $DEVICE_LIST
- vgRemove: $VG_NAME
-
-Note that the NnfStorageProfile resources are marked as default: false and pinned: true. This is required for NnfStorageProfiles that are used for system storage. The commandlines fields for LV commands are left empty so that no LV is created.
-apiVersion: nnf.cray.hpe.com/v1alpha1
-kind: NnfSystemStorage
-metadata:
- name: lvmlockd_even
- namespace: systemstorage
-spec:
- type: "raw"
- computesTarget: "even"
- makeClientMounts: false
- storageProfile:
- name: lvmlockd_even
- namespace: systemstorage
- kind: NnfStorageProfile
-
-apiVersion: nnf.cray.hpe.com/v1alpha1
-kind: NnfSystemStorage
-metadata:
- name: lvmlockd_odd
- namespace: systemstorage
-spec:
- type: "raw"
- computesTarget: "odd"
- makeClientMounts: false
- storageProfile:
- name: lvmlockd_odd
- namespace: systemstorage
- kind: NnfStorageProfile
+spec:
+ type: "raw"
+ computesTarget: "even"
+ makeClientMounts: false
+ storageProfile:
+ name: lvmlockd_even
+ namespace: systemstorage
+ kind: NnfStorageProfile
+---
+apiVersion: nnf.cray.hpe.com/v1alpha1
+kind: NnfSystemStorage
+metadata:
+ name: lvmlockd_odd
+ namespace: systemstorage
+spec:
+ type: "raw"
+ computesTarget: "odd"
+ makeClientMounts: false
+ storageProfile:
+ name: lvmlockd_odd
+ namespace: systemstorage
+ kind: NnfStorageProfile
The two NnfSystemStorage resources each target all of the Rabbits but a different set of compute nodes. This will result in each Rabbit having two VGs and each compute node having one VG.
After the NnfSystemStorage resources are created, the Rabbit software will create the storage on the Rabbit nodes and make the LVM VG available to the correct compute nodes. At this point, the status.ready field will be true. If an error occurs, the status.error field will describe the error.
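As a quick check, the readiness of the two resources can be queried with kubectl. This is a minimal sketch; it assumes the resource and namespace names used in the examples above and that the plural resource name is nnfsystemstorages.
# Verify that both NnfSystemStorage resources exist and have completed.
kubectl get nnfsystemstorages -n systemstorage

# Inspect the ready flag and any error message for one of them.
kubectl get nnfsystemstorages lvmlockd_even -n systemstorage -o jsonpath='{.status.ready}{"\n"}{.status.error}{"\n"}'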
diff --git a/dev/search/search_index.json b/dev/search/search_index.json
index 480f020..a193d0d 100644
--- a/dev/search/search_index.json
+++ b/dev/search/search_index.json
@@ -1 +1 @@
-{"config":{"lang":["en"],"separator":"[\\s\\-,:!=\\[\\]()\"/]+|(?!\\b)(?=[A-Z][a-z])|\\.(?!\\d)|&[lg]t;","pipeline":["stopWordFilter"]},"docs":[{"location":"","title":"Near Node Flash","text":"Near Node Flash, also known as Rabbit, provides a disaggregated chassis-local storage solution which utilizes SR-IOV over a PCIe Gen 4.0 switching fabric to provide a set of compute blades with NVMe storage. It also provides a dedicated storage processor to offload tasks such as storage preparation and data movement from the compute nodes.
Here you will find NNF User Guides, Examples, and Request For Comment (RFC) documents.
"},{"location":"guides/","title":"User Guides","text":""},{"location":"guides/#setup","title":"Setup","text":" - Initial Setup
- Compute Daemons
- Firmware Upgrade
- High Availability Cluster
- RBAC for Users
"},{"location":"guides/#provisioning","title":"Provisioning","text":" - Storage Profiles
- Data Movement Configuration
- Copy Offload API
- Lustre External MGT
- Global Lustre
- Directive Breakdown
- User Interactions
- System Storage
"},{"location":"guides/#nnf-user-containers","title":"NNF User Containers","text":""},{"location":"guides/#node-management","title":"Node Management","text":" - Disable or Drain a Node
- Debugging NVMe Namespaces
"},{"location":"guides/compute-daemons/readme/","title":"Compute Daemons","text":"Rabbit software requires two daemons be installed and run on each compute node. Each daemon shares similar build, package, and installation processes described below.
- The Client Mount daemon,
clientmount
, provides the support for mounting Rabbit hosted file systems on compute nodes. - The Data Movement daemon,
nnf-dm
, supports creating, monitoring, and managing data movement (copy-offload) operations
"},{"location":"guides/compute-daemons/readme/#building-from-source","title":"Building from source","text":"Each daemon can be built in their respective repositories using the build-daemon
make target. Go version >= 1.19 must be installed to perform a local build.
"},{"location":"guides/compute-daemons/readme/#rpm-package","title":"RPM Package","text":"Each daemon is packaged as part of the build process in GitHub. Source and Binary RPMs are available.
"},{"location":"guides/compute-daemons/readme/#installation","title":"Installation","text":"For manual install, place the binary in the /usr/bin/
directory.
To install the application as a daemon service, run /usr/bin/[BINARY-NAME] install
"},{"location":"guides/compute-daemons/readme/#authentication","title":"Authentication","text":"NNF software defines a Kubernetes Service Account for granting communication privileges between the daemon and the kubeapi server. The token file and certificate file can be obtained by providing the necessary Service Account and Namespace to the below shell script.
Compute Daemon Service Account Namespace Client Mount nnf-clientmount nnf-system Data Movement nnf-dm-daemon nnf-dm-system #!/bin/bash\n\nSERVICE_ACCOUNT=$1\nNAMESPACE=$2\n\nkubectl get secret ${SERVICE_ACCOUNT} -n ${NAMESPACE} -o json | jq -Mr '.data.token' | base64 --decode > ./service.token\nkubectl get secret ${SERVICE_ACCOUNT} -n ${NAMESPACE} -o json | jq -Mr '.data[\"ca.crt\"]' | base64 --decode > ./service.cert\n
The service.token
and service.cert
files must be copied to each compute node, typically in the /etc/[BINARY-NAME]/
directory
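A minimal sketch of one way to distribute the files, assuming the clientmountd daemon, passwordless ssh/scp from the node where the files were generated, and hypothetical compute node names; the target directory follows the /etc/[BINARY-NAME]/ convention described above.
# Hypothetical compute node names; replace with the real list for the system.
COMPUTES="compute-01 compute-02"
for NODE in $COMPUTES; do
    ssh "$NODE" mkdir -p /etc/clientmountd
    scp ./service.token ./service.cert "$NODE":/etc/clientmountd/
done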
"},{"location":"guides/compute-daemons/readme/#configuration","title":"Configuration","text":"Installing the daemon will create a default configuration located at /etc/systemd/system/[BINARY-NAME].service
The command line arguments can be provided to the service definition or as an override file.
Argument Definition --kubernetes-service-host=[ADDRESS]
The IP address or DNS entry of the kubeapi server --kubernetes-service-port=[PORT]
The listening port of the kubeapi server --service-token-file=[PATH]
Location of the service token file --service-cert-file=[PATH]
Location of the service certificate file --node-name=[COMPUTE-NODE-NAME]
Name of this compute node as described in the System Configuration. Defaults to the host name reported by the OS. --nnf-node-name=[RABBIT-NODE-NAME]
nnf-dm
daemon only. Name of the rabbit node connected to this compute node as described in the System Configuration. If not provided, the --node-name
value is used to find the associated Rabbit node in the System Configuration. --sys-config=[NAME]
nnf-dm
daemon only. The System Configuration resource's name. Defaults to default
An example unit file for nnf-dm:
cat /etc/systemd/system/nnf-dm.service[Unit]\nDescription=Near-Node Flash (NNF) Data Movement Service\n\n[Service]\nPIDFile=/var/run/nnf-dm.pid\nExecStartPre=/bin/rm -f /var/run/nnf-dm.pid\nExecStart=/usr/bin/nnf-dm \\\n --kubernetes-service-host=127.0.0.1 \\\n --kubernetes-service-port=7777 \\\n --service-token-file=/path/to/service.token \\\n --service-cert-file=/path/to/service.cert \\\n --kubernetes-qps=50 \\\n --kubernetes-burst=100\nRestart=on-failure\n\n[Install]\nWantedBy=multi-user.target\n
An example unit file for clientmountd:
cat /etc/systemd/system/clientmountd.service[Unit]\nDescription=Near-Node Flash (NNF) Clientmountd Service\n\n[Service]\nPIDFile=/var/run/clientmountd.pid\nExecStartPre=/bin/rm -f /var/run/clientmountd.pid\nExecStart=/usr/bin/clientmountd \\\n --kubernetes-service-host=127.0.0.1 \\\n --kubernetes-service-port=7777 \\\n --service-token-file=/path/to/service.token \\\n --service-cert-file=/path/to/service.cert\nRestart=on-failure\nEnvironment=GOGC=off\nEnvironment=GOMEMLIMIT=20MiB\nEnvironment=GOMAXPROCS=5\nEnvironment=HTTP2_PING_TIMEOUT_SECONDS=60\n\n[Install]\nWantedBy=multi-user.target\n
"},{"location":"guides/compute-daemons/readme/#nnf-dm-specific-configuration","title":"nnf-dm Specific Configuration","text":"nnf-dm has some additional configuration options that can be used to tweak the kubernetes client:
Argument Definition --kubernetes-qps=[QPS]
The number of Queries Per Second (QPS) before client-side rate-limiting starts. Defaults to 50. --kubernetes-burst=[QPS]
Once QPS is hit, allow this many concurrent calls. Defaults to 100."},{"location":"guides/compute-daemons/readme/#easy-deployment","title":"Easy Deployment","text":"The nnf-deploy tool's install
command can be used to run the daemons on a system's set of compute nodes. This option will compile the latest daemon binaries, retrieve the service token and certificates, and will copy and install the daemons on each of the compute nodes. Refer to the nnf-deploy repository and run nnf-deploy install --help
for details.
"},{"location":"guides/data-movement/readme/","title":"Data Movement Configuration","text":"Data Movement can be configured in multiple ways:
- Server side
- Per Copy Offload API Request arguments
The first method is a \"global\" configuration - it affects all data movement operations. The second is done per the Copy Offload API, which allows for some configuration on a per-case basis, but is limited in scope. Both methods are meant to work in tandem.
"},{"location":"guides/data-movement/readme/#server-side-configmap","title":"Server Side ConfigMap","text":"The server side configuration is done via the nnf-dm-config
config map:
kubectl -n nnf-dm-system get configmap nnf-dm-config\n
The config map allows you to configure the following:
Setting Description slots The number of slots specified in the MPI hostfile. A value less than 1 disables the use of slots in the hostfile. maxSlots The number of max_slots specified in the MPI hostfile. A value less than 1 disables the use of max_slots in the hostfile. command The full command to execute data movement. More detail in the following section. progressIntervalSeconds interval to collect the progress data from the dcp
command."},{"location":"guides/data-movement/readme/#command","title":"command
","text":"The full data movement command
can be set here. By default, Data Movement uses mpirun
to run dcp
to perform the data movement. Changing the command
is useful for tweaking mpirun
or dcp
options or to replace the command with something that can aid in debugging (e.g. hostname
).
mpirun
uses hostfiles to list the hosts to launch dcp
on. This hostfile is created for each Data Movement operation, and it uses the config map to set the slots
and maxSlots
for each host (i.e. NNF node) in the hostfile. The number of slots
/maxSlots
is the same for every host in the hostfile.
Additionally, Data Movement uses substitution to fill in dynamic information for each Data Movement operation. Each of these must be present in the command for Data Movement to work properly when using mpirun
and dcp
:
VAR Description $HOSTFILE
hostfile that is created and used for mpirun. $UID
User ID that is inherited from the Workflow. $GID
Group ID that is inherited from the Workflow. $SRC
source for the data movement. $DEST
destination for the data movement. By default, the command will look something like the following. Please see the config map itself for the most up to date default command:
mpirun --allow-run-as-root --hostfile $HOSTFILE dcp --progress 1 --uid $UID --gid $GID $SRC $DEST\n
"},{"location":"guides/data-movement/readme/#profiles","title":"Profiles","text":"Profiles can be specified in the in the nnf-dm-config
config map. Users are able to select a profile using #DW directives (e.g .copy_in profile=my-dm-profile
) and the Copy Offload API. If no profile is specified, the default
profile is used. This default profile must exist in the config map.
slots
, maxSlots
, and command
can be stored in Data Movement profiles. These profiles are available to quickly switch between different settings for a particular workflow.
Example profiles:
profiles:\n default:\n slots: 8\n maxSlots: 0\n command: mpirun --allow-run-as-root --hostfile $HOSTFILE dcp --progress 1 --uid $UID --gid $GID $SRC $DEST\n no-xattrs:\n slots: 8\n maxSlots: 0\n command: mpirun --allow-run-as-root --hostfile $HOSTFILE dcp --progress 1 --xattrs none --uid $UID --gid $GID $SRC $DEST\n
"},{"location":"guides/data-movement/readme/#copy-offload-api-daemon","title":"Copy Offload API Daemon","text":"The CreateRequest
API call that is used to create Data Movement with the Copy Offload API has some options to allow a user to specify some options for that particular Data Movement. These settings are on a per-request basis.
The Copy Offload API requires the nnf-dm
daemon to be running on the compute node. This daemon may be configured to run full-time, or it may be left in a disabled state if the WLM is expected to run it only when a user requests it. See Compute Daemons for the systemd service configuration of the daemon. See RequiredDaemons
in Directive Breakdown for a description of how the user may request the daemon, in the case where the WLM will run it only on demand.
If the WLM is running the nnf-dm
daemon only on demand, then the user can request that the daemon be running for their job by specifying requires=copy-offload
in their DW
directive. The following is an example:
#DW jobdw type=xfs capacity=1GB name=stg1 requires=copy-offload\n
See the DataMovementCreateRequest API definition for what can be configured.
"},{"location":"guides/data-movement/readme/#selinux-and-data-movement","title":"SELinux and Data Movement","text":"Careful consideration must be taken when enabling SELinux on compute nodes. Doing so will result in SELinux Extended File Attributes (xattrs) being placed on files created by applications running on the compute node, which may not be supported by the destination file system (e.g. Lustre).
Depending on the configuration of dcp
, there may be an attempt to copy these xattrs. You may need to disable this by using dcp --xattrs none
to avoid errors. For example, the command
in the nnf-dm-config
config map or dcpOptions
in the DataMovementCreateRequest API could be used to set this option.
See the dcp
documentation for more information.
"},{"location":"guides/directive-breakdown/readme/","title":"Directive Breakdown","text":""},{"location":"guides/directive-breakdown/readme/#background","title":"Background","text":"The #DW
directives in a job script are not intended to be interpreted by the workload manager. The workload manager passes the #DW
directives to the NNF software through the DWS workflow
resource, and the NNF software determines what resources are needed to satisfy the directives. The NNF software communicates this information back to the workload manager through the DWS DirectiveBreakdown
resource. This document describes how the WLM should interpret the information in the DirectiveBreakdown
.
"},{"location":"guides/directive-breakdown/readme/#directivebreakdown-overview","title":"DirectiveBreakdown Overview","text":"The DWS DirectiveBreakdown
contains all the information necessary to inform the WLM how to pick storage and compute nodes for a job. The DirectiveBreakdown
resource is created by the NNF software during the Proposal
phase of the DWS workflow. The spec
section of the DirectiveBreakdown
is filled in with the #DW
directive by the NNF software, and the status
section contains the information for the WLM. The WLM should wait until the status.ready
field is true before interpreting the rest of the status
fields.
The contents of the DirectiveBreakdown
will look different depending on the file system type and options specified by the user. The status
section contains enough information that the WLM may be able to figure out the underlying file system type requested by the user, but the WLM should not make any decisions based on the file system type. Instead, the WLM should make storage and compute allocation decisions based on the generic information provided in the DirectiveBreakdown
since the storage and compute allocations needed to satisfy a #DW
directive may differ based on options other than the file system type.
"},{"location":"guides/directive-breakdown/readme/#storage-nodes","title":"Storage Nodes","text":"The status.storage
section of the DirectiveBreakdown
describes how the storage allocations should be made and any constraints on the NNF nodes that can be picked. The status.storage
section will exist only for jobdw
and create_persistent
directives. An example of the status.storage
section is included below.
...\nspec:\n directive: '#DW jobdw capacity=1GiB type=xfs name=example'\n userID: 7900\nstatus:\n...\n ready: true\n storage:\n allocationSets:\n - allocationStrategy: AllocatePerCompute\n constraints:\n labels:\n - dataworkflowservices.github.io/storage=Rabbit\n label: xfs\n minimumCapacity: 1073741824\n lifetime: job\n reference:\n kind: Servers\n name: example-0\n namespace: default\n...\n
-
status.storage.allocationSets
is a list of storage allocation sets that are needed for the job. An allocation set is a group of individual storage allocations that all have the same parameters and requirements. Depending on the storage type specified by the user, there may be more than one allocation set. Allocation sets should be handled independently.
-
status.storage.allocationSets.allocationStrategy
specifies how the allocations should be made.
AllocatePerCompute
- One allocation is needed per compute node in the job. The size of an individual allocation is specified in status.storage.allocationSets.minimumCapacity
AllocateAcrossServers
- One or more allocations are needed with an aggregate capacity of status.storage.allocationSets.minimumCapacity
. This allocation strategy does not imply anything about how many allocations to make per NNF node or how many NNF nodes to use. The allocations on each NNF node should be the same size. AllocateSingleServer
- One allocation is needed with a capacity of status.storage.allocationSets.minimumCapacity
-
status.storage.allocationSets.constraints
is a set of requirements for which NNF nodes can be picked. More information about the different constraint types is provided in the Storage Constraints section below.
-
status.storage.allocationSets.label
is an opaque string that the WLM uses when creating the spec.allocationSets entry in the DWS Servers
resource.
-
status.storage.allocationSets.minimumCapacity
is the allocation capacity in bytes. The interpretation of this field depends on the value of status.storage.allocationSets.allocationStrategy
-
status.storage.lifetime
is used to specify how long the storage allocations will last.
job
- The allocation will last for the lifetime of the job persistent
- The allocation will last for longer than the lifetime of the job
-
status.storage.reference
is an object reference to a DWS Servers
resource where the WLM can specify allocations
"},{"location":"guides/directive-breakdown/readme/#storage-constraints","title":"Storage Constraints","text":"Constraints on an allocation set provide additional requirements for how the storage allocations should be made on NNF nodes.
-
labels
specifies a list of labels that must all be on a DWS Storage
resource in order for an allocation to exist on that Storage
.
constraints:\n labels:\n - dataworkflowservices.github.io/storage=Rabbit\n - mysite.org/pool=firmware_test\n
apiVersion: dataworkflowservices.github.io/v1alpha2\nkind: Storage\nmetadata:\n labels:\n dataworkflowservices.github.io/storage: Rabbit\n mysite.org/pool: firmware_test\n mysite.org/drive-speed: fast\n name: rabbit-node-1\n namespace: default\n ...\n
-
colocation
specifies how two or more allocations influence the location of each other. The colocation constraint has two fields, type
and key
. Currently, the only value for type
is exclusive
. key
can be any value. This constraint means that the allocations from an allocation set with the colocation constraint can't be placed on an NNF node with another allocation whose allocation set has a colocation constraint with the same key. Allocations from allocation sets with colocation constraints with different keys or allocation sets without the colocation constraint are okay to put on the same NNF node.
constraints:\n colocation:\n type: exclusive\n key: lustre-mgt\n
-
count
this field specifies the number of allocations to make when status.storage.allocationSets.allocationStrategy
is AllocateAcrossServers
constraints:\n count: 5\n
-
scale
is a unitless value from 1-10 that is meant to guide the WLM on how many allocations to make when status.storage.allocationSets.allocationStrategy
is AllocateAcrossServers
. The actual number of allocations is not meant to correspond to the value of scale. Rather, 1 would indicate the minimum number of allocations to reach status.storage.allocationSets.minimumCapacity
, and 10 would be the maximum number of allocations that make sense given the status.storage.allocationSets.minimumCapacity
and the compute node count. The NNF software does not interpret this value, and it is up to the WLM to define its meaning.
constraints:\n scale: 8\n
"},{"location":"guides/directive-breakdown/readme/#compute-nodes","title":"Compute Nodes","text":"The status.compute
section of the DirectiveBreakdown
describes how the WLM should pick compute nodes for a job. The status.compute
section will exist only for jobdw
and persistentdw
directives. An example of the status.compute
section is included below.
...\nspec:\n directive: '#DW jobdw capacity=1TiB type=lustre name=example'\n userID: 3450\nstatus:\n...\n compute:\n constraints:\n location:\n - access:\n - priority: mandatory\n type: network\n - priority: bestEffort\n type: physical\n reference:\n fieldPath: servers.spec.allocationSets[0]\n kind: Servers\n name: example-0\n namespace: default\n - access:\n - priority: mandatory\n type: network\n reference:\n fieldPath: servers.spec.allocationSets[1]\n kind: Servers\n name: example-0\n namespace: default\n...\n
The status.compute.constraints
section lists any constraints on which compute nodes can be used. Currently the only constraint type is the location
constraint. status.compute.constraints.location
is a list of location constraints that all must be satisfied.
A location constraint consists of an access
list and a reference
.
status.compute.constraints.location.reference
is an object reference with a fieldPath
that points to an allocation set in the Servers
resource. If this is from a #DW jobdw
directive, the Servers
resource won't be filled in until the WLM picks storage nodes for the allocations. status.compute.constraints.location.access
is a list that specifies what type of access the compute nodes need to have to the storage allocations in the allocation set. An allocation set may have multiple access types that are required status.compute.constraints.location.access.type
specifies the connection type for the storage. This can be network
or physical
status.compute.constraints.location.access.priority
specifies how necessary the connection type is. This can be mandatory
or bestEffort
"},{"location":"guides/directive-breakdown/readme/#requireddaemons","title":"RequiredDaemons","text":"The status.requiredDaemons
section of the DirectiveBreakdown
tells the WLM about any driver-specific daemons it must enable for the job; it is assumed that the WLM knows about the driver-specific daemons and that if the users are specifying these then the WLM knows how to start them. The status.requiredDaemons
section will exist only for jobdw
and persistentdw
directives. An example of the status.requiredDaemons
section is included below.
status:\n...\n requiredDaemons:\n - copy-offload\n...\n
The allowed list of required daemons that may be specified is defined in the nnf-ruleset.yaml for DWS, found in the nnf-sos
repository. The ruleDefs.key[requires]
statement is specified in two places in the ruleset, one for jobdw
and the second for persistentdw
. The ruleset allows a list of patterns to be specified, allowing one for each of the allowed daemons.
The DW
directive will include a comma-separated list of daemons after the requires
keyword. The following is an example:
#DW jobdw type=xfs capacity=1GB name=stg1 requires=copy-offload\n
The DWDirectiveRule
resource currently active on the system can be viewed with:
kubectl get -n dws-system dwdirectiverule nnf -o yaml\n
"},{"location":"guides/directive-breakdown/readme/#valid-daemons","title":"Valid Daemons","text":"Each site should define the list of daemons that are valid for that site and recognized by that site's WLM. The initial nnf-ruleset.yaml
defines only one, called copy-offload
. When a user specifies copy-offload
in their DW
directive, they are stating that their compute-node application will use the Copy Offload API Daemon described in the Data Movement Configuration.
"},{"location":"guides/external-mgs/readme/","title":"Lustre External MGT","text":""},{"location":"guides/external-mgs/readme/#background","title":"Background","text":"Lustre has a limitation where only a single MGT can be mounted on a node at a time. In some situations it may be desirable to share an MGT between multiple Lustre file systems to increase the number of Lustre file systems that can be created and to decrease scheduling complexity. This guide provides instructions on how to configure NNF to share MGTs. There are three methods that can be used:
- Use a Lustre MGT from outside the NNF cluster
- Create a persistent Lustre file system through DWS and use the MGT it provides
- Create a pool of standalone persistent Lustre MGTs, and have the NNF software select one of them
These three methods are not mutually exclusive on the system as a whole. Individual file systems can use any of options 1-3 or create their own MGT.
"},{"location":"guides/external-mgs/readme/#configuration-with-an-external-mgt","title":"Configuration with an External MGT","text":""},{"location":"guides/external-mgs/readme/#storage-profile","title":"Storage Profile","text":"An existing MGT external to the NNF cluster can be used to manage the Lustre file systems on the NNF nodes. An advantage to this configuration is that the MGT can be highly available through multiple MGSs. A disadvantage is that there is only a single MGT. An MGT shared between more than a handful of Lustre file systems is not a common use case, so the Lustre code may prove less stable.
The following yaml provides an example of what the NnfStorageProfile
should contain to use an MGT on an external server.
apiVersion: nnf.cray.hpe.com/v1alpha1\nkind: NnfStorageProfile\nmetadata:\n name: external-mgt\n namespace: nnf-system\ndata:\n[...]\n lustreStorage:\n externalMgs: 1.2.3.4@eth0:1.2.3.5@eth0\n combinedMgtMdt: false\n standaloneMgtPoolName: \"\"\n[...]\n
"},{"location":"guides/external-mgs/readme/#nnflustremgt","title":"NnfLustreMGT","text":"A NnfLustreMGT
resource tracks which fsnames have been used on the MGT to prevent fsname re-use. Any Lustre file systems that are created through the NNF software will request an fsname to use from a NnfLustreMGT
resource. Every MGT must have a corresponding NnfLustreMGT
resource. For MGTs that are hosted on NNF hardware, the NnfLustreMGT
resources are created automatically. The NNF software also erases any unused fsnames from the MGT disk for any internally hosted MGTs.
For a MGT hosted on an external node, an admin must create an NnfLustreMGT
resource. This resource ensures that fsnames will be created in a sequential order without any fsname re-use. However, after an fsname is no longer in use by a file system, it will not be erased from the MGT disk. An admin may decide to periodically run the lctl erase_lcfg [fsname]
command to remove fsnames that are no longer in use.
Below is an example NnfLustreMGT
resource. The NnfLustreMGT
resource for external MGSs must be created in the nnf-system
namespace.
apiVersion: nnf.cray.hpe.com/v1alpha1\nkind: NnfLustreMGT\nmetadata:\n name: external-mgt\n namespace: nnf-system\nspec:\n addresses:\n - \"1.2.3.4@eth0:1.2.3.5@eth0\"\n fsNameStart: \"aaaaaaaa\"\n fsNameBlackList:\n - \"mylustre\"\n fsNameStartReference:\n name: external-mgt\n namespace: default\n kind: ConfigMap\n
addresses
- This is a list of LNet addresses that could be used for this MGT. This should match any values that are used in the externalMgs
field in the NnfStorageProfiles
. fsNameStart
 - The first fsname to use. Subsequent fsnames will be incremented based on this starting fsname (e.g., aaaaaaaa
, aaaaaaab
, aaaaaaac
). fsnames use lowercase letters 'a'
-'z'
. fsNameStart
should be exactly 8 characters long. fsNameBlackList
- This is a list of fsnames that should not be given to any NNF Lustre file systems. If the MGT is hosting any non-NNF Lustre file systems, their fsnames should be included in this blacklist. fsNameStartReference
- This is an optional ObjectReference
to a ConfigMap
that holds a starting fsname. If this field is specified, it takes precedence over the fsNameStart
field in the spec. The ConfigMap
will be updated to the next available fsname every time an fsname is assigned to a new Lustre file system.
"},{"location":"guides/external-mgs/readme/#configmap","title":"ConfigMap","text":"For external MGTs, the fsNameStartReference
should be used to point to a ConfigMap
in the default
namespace. The ConfigMap
should be left empty initially. The ConfigMap
is used to hold the value of the next available fsname, and it should not be deleted or modified while a NnfLustreMGT
resource is referencing it. Removing the ConfigMap
 will cause the Rabbit software to lose track of which fsnames have already been used on the MGT. This is undesirable unless the external MGT is no longer being used by Rabbit software or if an admin has erased all previously used fsnames with the lctl erase_lcfg [fsname]
command.
When using the ConfigMap
, the nnf-sos software may be undeployed and redeployed without losing track of the next fsname value. During an undeploy, the NnfLustreMGT
resource will be removed. During a deploy, the NnfLustreMGT
resource will read the fsname value from the ConfigMap
if it is present. The value in the ConfigMap
will override the fsname in the fsNameStart
field.
"},{"location":"guides/external-mgs/readme/#configuration-with-persistent-lustre","title":"Configuration with Persistent Lustre","text":"The MGT from a persistent Lustre file system hosted on the NNF nodes can also be used as the MGT for other NNF Lustre file systems. This configuration has the advantage of not relying on any hardware outside of the cluster. However, there is no high availability, and a single MGT is still shared between all Lustre file systems created on the cluster.
To configure a persistent Lustre file system that can share its MGT, a NnfStorageProfile
should be used that does not specify externalMgs
. The MGT can either share a volume with the MDT or not (combinedMgtMdt
).
apiVersion: nnf.cray.hpe.com/v1alpha1\nkind: NnfStorageProfile\nmetadata:\n name: persistent-lustre-shared-mgt\n namespace: nnf-system\ndata:\n[...]\n lustreStorage:\n externalMgs: \"\"\n combinedMgtMdt: false\n standaloneMgtPoolName: \"\"\n[...]\n
The persistent storage is created with the following DW directive:
#DW create_persistent name=shared-lustre capacity=100GiB type=lustre profile=persistent-lustre-shared-mgt\n
After the persistent Lustre file system is created, an admin can discover the MGS address by looking at the NnfStorage
resource with the same name as the persistent storage that was created (shared-lustre
in the above example).
apiVersion: nnf.cray.hpe.com/v1alpha1\nkind: NnfStorage\nmetadata:\n name: shared-lustre\n namespace: default\n[...]\nstatus:\n mgsNode: 5.6.7.8@eth1\n[...]\n
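For example, the MGS address could be read directly from the resource with kubectl; this sketch assumes the persistent storage was named shared-lustre as above and that the plural resource name is nnfstorages.
# Read the MGS LNet address from the NnfStorage resource.
kubectl get nnfstorages shared-lustre -n default -o jsonpath='{.status.mgsNode}{"\n"}'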
A separate NnfStorageProfile
can be created that specifies the MGS address.
apiVersion: nnf.cray.hpe.com/v1alpha1\nkind: NnfStorageProfile\nmetadata:\n name: internal-mgt\n namespace: nnf-system\ndata:\n[...]\n lustreStorage:\n externalMgs: 5.6.7.8@eth1\n combinedMgtMdt: false\n standaloneMgtPoolName: \"\"\n[...]\n
With this configuration, an admin must determine that no file systems are using the shared MGT before destroying the persistent Lustre instance.
"},{"location":"guides/external-mgs/readme/#configuration-with-an-internal-mgt-pool","title":"Configuration with an Internal MGT Pool","text":"Another method NNF supports is to create a number of persistent Lustre MGTs on NNF nodes. These MGTs are not part of a full file system, but are instead added to a pool of MGTs available for other Lustre file systems to use. Lustre file systems that are created will choose one of the MGTs at random to use and add a reference to make sure it isn't destroyed. This configuration has the advantage of spreading the Lustre management load across multiple servers. The disadvantage of this configuration is that it does not provide high availability.
To configure the system this way, the first step is to make a pool of Lustre MGTs. This is done by creating a persistent instance from a storage profile that specifies the standaloneMgtPoolName
option. This option tells NNF software to only create an MGT, and to add it to a named pool. The following NnfStorageProfile
provides an example where the MGT is added to the example-pool
pool:
apiVersion: nnf.cray.hpe.com/v1alpha1\nkind: NnfStorageProfile\nmetadata:\n name: mgt-pool-member\n namespace: nnf-system\ndata:\n[...]\n lustreStorage:\n externalMgs: \"\"\n combinedMgtMdt: false\n standaloneMgtPoolName: \"example-pool\"\n[...]\n
A persistent storage MGTs can be created with the following DW directive:
#DW create_persistent name=mgt-pool-member-1 capacity=1GiB type=lustre profile=mgt-pool-member\n
Multiple persistent instances with different names can be created using the mgt-pool-member
profile to add more than one MGT to the pool.
To create a Lustre file system that uses one of the MGTs from the pool, an NnfStorageProfile
should be created that uses the special notation pool:[pool-name]
in the externalMgs
field.
apiVersion: nnf.cray.hpe.com/v1alpha1\nkind: NnfStorageProfile\nmetadata:\n name: mgt-pool-consumer\n namespace: nnf-system\ndata:\n[...]\n lustreStorage:\n externalMgs: \"pool:example-pool\"\n combinedMgtMdt: false\n standaloneMgtPoolName: \"\"\n[...]\n
The following provides an example DW directive that uses an MGT from the MGT pool:
#DW jobdw name=example-lustre capacity=100GiB type=lustre profile=mgt-pool-consumer\n
MGT pools are named, so there can be separate pools with collections of different MGTs in them. A storage profile targeting each pool would be needed.
"},{"location":"guides/firmware-upgrade/readme/","title":"Firmware Upgrade Procedures","text":"This guide presents the firmware upgrade procedures to upgrade firmware from the Rabbit using tools present in the operating system.
"},{"location":"guides/firmware-upgrade/readme/#pcie-switch-firmware-upgrade","title":"PCIe Switch Firmware Upgrade","text":"In order to upgrade the firmware on the PCIe switch, the switchtec
kernel driver and utility of the same name must be installed. Rabbit hardware consists of two PCIe switches, which can be managed by devices typically located at /dev/switchtec0
and /dev/switchtec1
.
Danger
Upgrading the switch firmware will cause the switch to reset. Prototype Rabbit units not supporting hotplug should undergo a power-cycle to ensure switch initialization following firmware uprade. Similarily, compute nodes not supporting hotplug may lose connectivity after firmware upgrade and should also be power-cycled.
IMAGE=$1 # Provide the path to the firmware image file\nSWITCHES=(\"/dev/switchtec0\" \"/dev/switchtec1\")\nfor SWITCH in \"${SWITCHES[@]}\"; do switchtec fw-update \"$SWITCH\" \"$IMAGE\" --yes; done\n
"},{"location":"guides/firmware-upgrade/readme/#nvme-drive-firmware-upgrade","title":"NVMe Drive Firmware Upgrade","text":"In order to upgrade the firmware on NVMe drives attached to Rabbit, the switchtec
and switchtec-nvme
executables must be installed. All firmware downloads to drives are sent to the physical function of the drive which is accessible only using the switchtec-nvme
executable.
"},{"location":"guides/firmware-upgrade/readme/#batch-method","title":"Batch Method","text":""},{"location":"guides/firmware-upgrade/readme/#download-and-commit-new-firmware","title":"Download and Commit New Firmware","text":"The nvme.sh helper script applies the same command to each physical device fabric ID in the system. It provides a convenient way to upgrade the firmware on all drives in the system. Please see fw-download and fw-commit for details about the individual commands.
# Download firmware to all drives\n./nvme.sh cmd fw-download --fw=</path/to/nvme.fw>\n\n# Commit the new firmware\n# action=3: The image is requested to be activated immediately\n./nvme.sh cmd fw-commit --action=3\n
"},{"location":"guides/firmware-upgrade/readme/#rebind-the-pcie-connections","title":"Rebind the PCIe Connections","text":"In order to use the drives at this point, they must be unbound and bound to the PCIe fabric to reset device connections. The bind.sh helper script performs these two actions. Its use is illustrated below.
# Unbind all drives from the Rabbit to disconnect the PCIe connection to the drives\n./bind.sh unbind\n\n# Bind all drives to the Rabbit to reconnect the PCIe bus\n./bind.sh bind\n\n# At this point, your drives should be running the new firmware.\n# Verify the firmware...\n./nvme.sh cmd id-ctrl | grep -E \"^fr \"\n
"},{"location":"guides/firmware-upgrade/readme/#individual-drive-method","title":"Individual Drive Method","text":""},{"location":"guides/firmware-upgrade/readme/#determine-physical-device-fabric-id","title":"Determine Physical Device Fabric ID","text":"The first step is to determine a drive's unique Physical Device Fabric Identifier (PDFID). The following code fragment demonstrates one way to list the physcial device fabric ids of all the NVMe drives in the system.
#!/bin/bash\n\nSWITCHES=(\"/dev/switchtec0\" \"/dev/switchtec1\")\nfor SWITCH in \"${SWITCHES[@]}\";\ndo\n mapfile -t PDFIDS < <(sudo switchtec fabric gfms-dump \"${SWITCH}\" | grep \"Function 0 \" -A1 | grep PDFID | awk '{print $2}')\n for INDEX in \"${!PDFIDS[@]}\";\n do\n echo \"${PDFIDS[$INDEX]}@$SWITCH\"\n done\ndone\n
# Produces a list like this:\n0x1300@/dev/switchtec0\n0x1600@/dev/switchtec0\n0x1700@/dev/switchtec0\n0x1400@/dev/switchtec0\n0x1800@/dev/switchtec0\n0x1900@/dev/switchtec0\n0x1500@/dev/switchtec0\n0x1a00@/dev/switchtec0\n0x4100@/dev/switchtec1\n0x3c00@/dev/switchtec1\n0x4000@/dev/switchtec1\n0x3e00@/dev/switchtec1\n0x4200@/dev/switchtec1\n0x3b00@/dev/switchtec1\n0x3d00@/dev/switchtec1\n0x3f00@/dev/switchtec1\n
"},{"location":"guides/firmware-upgrade/readme/#download-firmware","title":"Download Firmware","text":"Using the physical device fabric identifier, the following commands update the firmware for specified drive.
# Download firmware to the drive\nsudo switchtec-nvme fw-download <PhysicalDeviceFabricID> --fw=</path/to/nvme.fw>\n\n# Activate the new firmware\n# action=3: The image is requested to be activated immediately without reset.\nsudo switchtec-nvme fw-commit --action=3\n
"},{"location":"guides/firmware-upgrade/readme/#rebind-pcie-connection","title":"Rebind PCIe Connection","text":"Once the firmware has been downloaded and committed, the PCIe connection from the Rabbit to the drive must be unbound and rebound. Please see bind.sh for details.
"},{"location":"guides/global-lustre/readme/","title":"Global Lustre","text":""},{"location":"guides/global-lustre/readme/#background","title":"Background","text":"Adding global lustre to rabbit systems allows access to external file systems. This is primarily used for Data Movement, where a user can perform copy_in
and copy_out
directives with global lustre being the source and destination, respectively.
Global lustre fileystems are represented by the lustrefilesystems
resource in Kubernetes:
$ kubectl get lustrefilesystems -A\nNAMESPACE NAME FSNAME MGSNIDS AGE\ndefault mylustre mylustre 10.1.1.113@tcp 20d\n
An example resource is as follows:
apiVersion: lus.cray.hpe.com/v1beta1\nkind: LustreFileSystem\nmetadata:\n name: mylustre\n namespace: default\nspec:\n mgsNids: 10.1.1.100@tcp\n mountRoot: /p/mylustre\n name: mylustre\n namespaces:\n default:\n modes:\n - ReadWriteMany\n
"},{"location":"guides/global-lustre/readme/#namespaces","title":"Namespaces","text":"Note the spec.namespaces
field. For each namespace listed, the lustre-fs-operator
creates a PV/PVC pair in that namespace. This allows pods in that namespace to access global lustre. The default
namespace should appear in this list. This makes the lustrefilesystem
resource available to the default
namespace, which makes it available to containers (e.g. container workflows) running in the default
namespace.
The nnf-dm-system
namespace is added automatically - no need to specify that manually here. The NNF Data Movement Manager is responsible for ensuring that the nnf-dm-system
is in spec.namespaces
. This is to ensure that the NNF DM Worker pods have global lustre mounted as long as nnf-dm
is deployed. To unmount global lustre from the NNF DM Worker pods, the lustrefilesystem
resource must be deleted.
The lustrefilesystem
resource itself should be created in the default
namespace (i.e. metadata.namespace
).
"},{"location":"guides/global-lustre/readme/#nnf-data-movement-manager","title":"NNF Data Movement Manager","text":"The NNF Data Movement Manager is responsible for monitoring lustrefilesystem
resources to mount (or umount) the global lustre filesystem in each of the NNF DM Worker pods. These pods run on each of the NNF nodes. This means with each addition or removal of lustrefilesystems
resources, the DM worker pods restart to adjust their mount points.
The NNF Data Movement Manager also places a finalizer on the lustrefilesystem
resource to indicate that the resource is in use by Data Movement. This is to prevent the PV/PVC being deleted while they are being used by pods.
"},{"location":"guides/global-lustre/readme/#adding-global-lustre","title":"Adding Global Lustre","text":"As mentioned previously, the NNF Data Movement Manager monitors these resources and automatically adds the nnf-dm-system
namespace to all lustrefilesystem
resources. Once this happens, a PV/PVC is created for the nnf-dm-system
namespace to access global lustre. The Manager updates the NNF DM Worker pods, which are then restarted to mount the global lustre file system.
"},{"location":"guides/global-lustre/readme/#removing-global-lustre","title":"Removing Global Lustre","text":"When a lustrefilesystem
is deleted, the NNF DM Manager takes notice and starts to unmount the file system from the DM Worker pods - causing another restart of the DM Worker pods. Once this is finished, the DM finalizer is removed from the lustrefilesystem
resource to signal that it is no longer in use by Data Movement.
If a lustrefilesystem
does not delete, check the finalizers to see what might still be using it. It is possible to get into a situation where nnf-dm
has been undeployed, so there is nothing to remove the DM finalizer from the lustrefilesystem
resource. If that is the case, then manually remove the DM finalizer so the deletion of the lustrefilesystem
resource can continue.
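A hedged sketch of inspecting the finalizers and, if nnf-dm is no longer deployed, removing the stale one so deletion can finish; the resource name mylustre matches the earlier example, and the finalizer index must be matched to the actual output.
# List the finalizers still present on the resource.
kubectl get lustrefilesystems mylustre -n default -o jsonpath='{.metadata.finalizers}{"\n"}'

# Remove the leftover Data Movement finalizer (index 0 is illustrative).
kubectl patch lustrefilesystems mylustre -n default --type=json -p '[{"op":"remove", "path":"/metadata/finalizers/0"}]'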
"},{"location":"guides/ha-cluster/notes/","title":"Notes","text":"pcs stonith create stonith-rabbit-node-1 fence_nnf pcmk_host_list=rabbit-node-1 kubernetes-service-host=10.30.107.247 kubernetes-service-port=6443 service-token-file=/etc/nnf/service.token service-cert-file=/etc/nnf/service.cert nnf-node-name=rabbit-node-1 verbose=1
pcs stonith create stonith-rabbit-compute-2 fence_redfish pcmk_host_list=\"rabbit-compute-2\" ip=10.30.105.237 port=80 systems-uri=/redfish/v1/Systems/1 username=root password=REDACTED ssl_insecure=true verbose=1
pcs stonith create stonith-rabbit-compute-3 fence_redfish pcmk_host_list=\"rabbit-compute-3\" ip=10.30.105.253 port=80 systems-uri=/redfish/v1/Systems/1 username=root password=REDACTED ssl_insecure=true verbose=1
"},{"location":"guides/ha-cluster/readme/","title":"High Availability Cluster","text":"NNF software supports provisioning of Red Hat GFS2 (Global File System 2) storage. Per RedHat:
GFS2 allows multiple nodes to share storage at a block level as if the storage were connected locally to each cluster node. GFS2 cluster file system requires a cluster infrastructure.
Therefore, in order to use GFS2, the NNF node and its associated compute nodes must form a high availability cluster.
"},{"location":"guides/ha-cluster/readme/#cluster-setup","title":"Cluster Setup","text":"Red Hat provides instructions for creating a high availability cluster with Pacemaker, including instructions for installing cluster software and creating a high availability cluster. When following these instructions, each of the high availability clusters that are created should be named after the hostname of the NNF node. In the Red Hat examples the cluster name is my_cluster
.
"},{"location":"guides/ha-cluster/readme/#fencing-agents","title":"Fencing Agents","text":"Fencing is the process of restricting and releasing access to resources that a failed cluster node may have access to. Since a failed node may be unresponsive, an external device must exist that can restrict access to shared resources of that node, or to issue a hard reboot of the node. More information can be found form Red Hat: 1.2.1 Fencing.
HPE hardware implements software known as the Hardware System Supervisor (HSS), which itself conforms to the SNIA Redfish/Swordfish standard. This provides the means to manage hardware outside the host OS.
"},{"location":"guides/ha-cluster/readme/#nnf-fencing","title":"NNF Fencing","text":""},{"location":"guides/ha-cluster/readme/#source","title":"Source","text":"The NNF Fencing agent is available at https://github.com/NearNodeFlash/fence-agents under the nnf
branch.
git clone https://github.com/NearNodeFlash/fence-agents --branch nnf\n
"},{"location":"guides/ha-cluster/readme/#build","title":"Build","text":"Refer to the NNF.md file
at the root directory of the fence-agents repository. The fencing agents must be installed on every node in the cluster.
"},{"location":"guides/ha-cluster/readme/#setup","title":"Setup","text":"Configure the NNF agent with the following parameters:
Argument Definition kubernetes-service-host=[ADDRESS]
The IP address of the kubeapi server kubernetes-service-port=[PORT]
The listening port of the kubeapi server service-token-file=[PATH]
The location of the service token file. The file must be present on all nodes within the cluster service-cert-file=[PATH]
The location of the service certificate file. The file must be present on all nodes within the cluster nnf-node-name=[NNF-NODE-NAME]
Name of the NNF node as it is appears in the System Configuration api-version=[VERSION]
The API Version of the NNF Node resource. Defaults to \"v1alpha1\" The token and certificate can be found in the Kubernetes Secrets resource for the nnf-system/nnf-fencing-agent ServiceAccount. This provides RBAC rules to limit the fencing agent to only the Kubernetes resources it needs access to.
For example, setting up the NNF fencing agent on rabbit-node-1
with a kubernetes service API running at 192.168.0.1:6443
and the service token and certificate copied to /etc/nnf/fence/
. This needs to be run on one node in the cluster.
pcs stonith create rabbit-node-1 fence_nnf pcmk_host_list=rabbit-node-1 kubernetes-service-host=192.168.0.1 kubernetes-service-port=6443 service-token-file=/etc/nnf/fence/service.token service-cert-file=/etc/nnf/fence/service.cert nnf-node-name=rabbit-node-1\n
"},{"location":"guides/ha-cluster/readme/#recovery","title":"Recovery","text":"Since the NNF node is connected to 16 compute blades, careful coordination around fencing of a NNF node is required to minimize the impact of the outage. When a Rabbit node is fenced, the corresponding DWS Storage resource (storages.dws.cray.hpe.com
) status changes. The workload manager must observe this change and follow the procedure below to recover from the fencing status.
- Observed the
storage.Status
changed and that storage.Status.RequiresReboot == True
- Set the
storage.Spec.State := Disabled
- Wait for a change to the Storage status
storage.Status.State == Disabled
- Reboot the NNF node
- Set the
storage.Spec.State := Enabled
- Wait for
storage.Status.State == Enabled
"},{"location":"guides/ha-cluster/readme/#compute-fencing","title":"Compute Fencing","text":"The Redfish fencing agent from ClusterLabs should be used for Compute nodes in the cluster. It is also included at https://github.com/NearNodeFlash/fence-agents, and can be built at the same time as the NNF fencing agent. Configure the agent with the following parameters:
Argument Definition ip=[ADDRESS]
The IP address or hostname of the HSS controller port=80
The Port of the HSS controller. Must be 80
systems-uri=/redfish/v1/Systems/1
The URI of the Systems object. Must be /redfish/v1/Systems/1
ssl-insecure=true
Instructs the use of an insecure SSL exchange. Must be true
username=[USER]
The user name for connecting to the HSS controller password=[PASSWORD]
the password for connecting to the HSS controller For example, setting up the Redfish fencing agent on rabbit-compute-2
with the redfish service at 192.168.0.1
. This needs to be run on one node in the cluster.
pcs stonith create rabbit-compute-2 fence_redfish pcmk_host_list=rabbit-compute-2 ip=192.168.0.1 systems-uri=/redfish/v1/Systems/1 username=root password=password ssl_insecure=true\n
"},{"location":"guides/ha-cluster/readme/#dummy-fencing","title":"Dummy Fencing","text":"The dummy fencing agent from ClusterLabs can be used for nodes in the cluster for an early access development system.
"},{"location":"guides/ha-cluster/readme/#configuring-a-gfs2-file-system-in-a-cluster","title":"Configuring a GFS2 file system in a cluster","text":"Follow steps 1-8 of the procedure from Red Hat: Configuring a GFS2 file system in a cluster.
"},{"location":"guides/initial-setup/readme/","title":"Initial Setup Instructions","text":"Instructions for the initial setup of a Rabbit are included in this document.
"},{"location":"guides/initial-setup/readme/#lvm-configuration-on-rabbit","title":"LVM Configuration on Rabbit","text":"LVM Details Running LVM commands (lvcreate/lvremove) on a Rabbit to create logical volumes is problematic if those commands run within a container. Rabbit Storage Orchestration code contained in the nnf-node-manager
Kubernetes pod executes LVM commands from within the container. The problem is that the LVM create/remove commands wait for a UDEV confirmation cookie that is set when UDEV rules run within the host OS. These cookies are not synchronized with the containers where the LVM commands execute.
3 options to solve this problem are:
- Disable UDEV sync at the host operating system level
- Disable UDEV sync using the
\u2013noudevsync
command option for each LVM command - Clear the UDEV cookie using the
dmsetup udevcomplete_all
command after the lvcreate/lvremove command.
Taking these in reverse order using option 3 above which allows UDEV settings within the host OS to remain unchanged from the default, one would need to start the dmsetup
command on a separate thread because the LVM create/remove command waits for the UDEV cookie. This opens too many error paths, so it was rejected.
Option 2 allows UDEV settings within the host OS to remain unchanged from the default, but the use of UDEV within production Rabbit systems is viewed as unnecessary because the host OS is PXE-booted onto the node vs loaded from an device that is discovered by UDEV.
Option 1 above is what we chose to implement because it is the simplest. The following sections discuss this setting.
In order for LVM commands to run within the container environment on a Rabbit, the following change is required to the /etc/lvm/lvm.conf
file on Rabbit.
sed -i 's/udev_sync = 1/udev_sync = 0/g' /etc/lvm/lvm.conf\n
"},{"location":"guides/initial-setup/readme/#zfs","title":"ZFS","text":"ZFS kernel module must be enabled to run on boot. This can be done by creating a file, zfs.conf
, containing the string \"zfs\" in your systems modules-load.d directory.
echo \"zfs\" > /etc/modules-load.d/zfs.conf\n
"},{"location":"guides/initial-setup/readme/#kubernetes-initial-setup","title":"Kubernetes Initial Setup","text":"Installation of Kubernetes (k8s) nodes proceeds by installing k8s components onto the master node(s) of the cluster, then installing k8s components onto the worker nodes and joining those workers to the cluster. The k8s cluster setup for Rabbit requires 3 distinct k8s node types for operation:
- Master: 1 or more master nodes which serve as the Kubernetes API server and control access to the system. For HA, at least 3 nodes should be dedicated to this role.
- Worker: 1 or more worker nodes which run the system level controller manager (SLCM) and Data Workflow Services (DWS) pods. In production, at least 3 nodes should be dedicated to this role.
- Rabbit: 1 or more Rabbit nodes which run the node level controller manager (NLCM) code. The NLCM daemonset pods are exclusively scheduled on Rabbit nodes. All Rabbit nodes are joined to the cluster as k8s workers, and they are tainted to restrict the type of work that may be scheduled on them. The NLCM pod has a toleration that allows it to run on the tainted (i.e. Rabbit) nodes.
"},{"location":"guides/initial-setup/readme/#kubernetes-node-labels","title":"Kubernetes Node Labels","text":"Node Type Node Label Generic Kubernetes Worker Node cray.nnf.manager=true Rabbit Node cray.nnf.node=true"},{"location":"guides/initial-setup/readme/#kubernetes-node-taints","title":"Kubernetes Node Taints","text":"Node Type Node Label Rabbit Node cray.nnf.node=true:NoSchedule See Taints and Tolerations. The SystemConfiguration controller will handle node taints and labels for the rabbit nodes based on the contents of the SystemConfiguration resource described below.
"},{"location":"guides/initial-setup/readme/#rabbit-system-configuration","title":"Rabbit System Configuration","text":"The SystemConfiguration Custom Resource Definition (CRD) is a DWS resource that describes the hardware layout of the whole system. It is expected that an administrator creates a single SystemConfiguration resource when the system is being set up. There is no need to update the SystemConfiguration resource unless hardware is added to or removed from the system.
System Configuration Details Rabbit software looks for a SystemConfiguration named default
in the default
namespace. This resource contains a list of compute nodes and storage nodes, and it describes the mapping between them. There are two different consumers of the SystemConfiguration resource in the NNF software:
NnfNodeReconciler
- The reconciler for the NnfNode resource running on the Rabbit nodes reads the SystemConfiguration resource. It uses the Storage to compute mapping information to fill in the HostName section of the NnfNode resource. This information is then used to populate the DWS Storage resource.
NnfSystemConfigurationReconciler
- This reconciler runs in the nnf-controller-manager
. It creates a Namespace for each compute node listed in the SystemConfiguration. These namespaces are used by the client mount code.
Here is an example SystemConfiguration
:
Spec Section Notes computeNodes List of names of compute nodes in the system storageNodes List of Rabbits and the compute nodes attached storageNodes[].type Must be \"Rabbit\" storageNodes[].computeAccess List of {slot, compute name} elements that indicate physical slot index that the named compute node is attached to apiVersion: dataworkflowservices.github.io/v1alpha2\nkind: SystemConfiguration\nmetadata:\n name: default\n namespace: default\nspec:\n computeNodes:\n - name: compute-01\n - name: compute-02\n - name: compute-03\n - name: compute-04\n ports:\n - 5000-5999\n portsCooldownInSeconds: 0\n storageNodes:\n - computesAccess:\n - index: 0\n name: compute-01\n - index: 1\n name: compute-02\n - index: 6\n name: compute-03\n name: rabbit-name-01\n type: Rabbit\n - computesAccess:\n - index: 4\n name: compute-04\n name: rabbit-name-02\n type: Rabbit\n
"},{"location":"guides/node-management/drain/","title":"Disable Or Drain A Node","text":""},{"location":"guides/node-management/drain/#disabling-a-node","title":"Disabling a node","text":"A Rabbit node can be manually disabled, indicating to the WLM that it should not schedule more jobs on the node. Jobs currently on the node will be allowed to complete at the discretion of the WLM.
Disable a node by setting its Storage state to Disabled
.
kubectl patch storage $NODE --type=json -p '[{\"op\":\"replace\", \"path\":\"/spec/state\", \"value\": \"Disabled\"}]'\n
When the Storage is queried by the WLM, it will show the disabled status.
$ kubectl get storages\nNAME STATE STATUS MODE AGE\nkind-worker2 Enabled Ready Live 10m\nkind-worker3 Disabled Disabled Live 10m\n
To re-enable a node, set its Storage state to Enabled
.
kubectl patch storage $NODE --type=json -p '[{\"op\":\"replace\", \"path\":\"/spec/state\", \"value\": \"Enabled\"}]'\n
The Storage state will show that it is enabled.
kubectl get storages\nNAME STATE STATUS MODE AGE\nkind-worker2 Enabled Ready Live 10m\nkind-worker3 Enabled Ready Live 10m\n
"},{"location":"guides/node-management/drain/#draining-a-node","title":"Draining a node","text":"The NNF software consists of a collection of DaemonSets and Deployments. The pods on the Rabbit nodes are usually from DaemonSets. Because of this, the kubectl drain
command is not able to remove the NNF software from a node. See Safely Drain a Node for details about the limitations posed by DaemonSet pods.
Given the limitations of DaemonSets, the NNF software will be drained by using taints, as described in Taints and Tolerations.
Drain a node this way only after the WLM jobs have been removed from that Rabbit (preferably) and there is some reason to also remove the NNF software from it. For example, applying the taint before a Rabbit is powered off and pulled out of the cabinet avoids leaving pods in \"Terminating\" state (harmless, but noisy).
After a new (or the same) Rabbit is put back in its place, the NNF software will not return to it while the taint is present. The taint can be removed at any time, from immediately after the node is powered off up to some time after the replacement Rabbit is powered back on.
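To see which pods are still running on, or stuck in \"Terminating\" on, a given Rabbit before or after the power-off, a standard field selector can be used:
kubectl get pods -A -o wide --field-selector spec.nodeName=$NODE\n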
"},{"location":"guides/node-management/drain/#drain-nnf-pods-from-a-rabbit-node","title":"Drain NNF pods from a rabbit node","text":"Drain the NNF software from a node by applying the cray.nnf.node.drain
taint. The CSI driver pods will remain on the node to satisfy any unmount requests from k8s as it cleans up the NNF pods.
kubectl taint node $NODE cray.nnf.node.drain=true:NoSchedule cray.nnf.node.drain=true:NoExecute\n
This will cause the node's Storage
resource to be drained:
$ kubectl get storages\nNAME STATE STATUS MODE AGE\nkind-worker2 Enabled Drained Live 5m44s\nkind-worker3 Enabled Ready Live 5m45s\n
The Storage
resource will contain the following message indicating the reason it has been drained:
$ kubectl get storages rabbit1 -o json | jq -rM .status.message\nKubernetes node is tainted with cray.nnf.node.drain\n
To restore the node to service, remove the cray.nnf.node.drain
taint.
kubectl taint node $NODE cray.nnf.node.drain-\n
The Storage
resource will revert to a Ready
status.
"},{"location":"guides/node-management/drain/#the-csi-driver","title":"The CSI driver","text":"While the CSI driver pods may be drained from a Rabbit node, it is inadvisable to do so.
Warning K8s relies on the CSI driver to unmount any filesystems that may have been mounted into a pod's namespace. If it is not present when k8s is attempting to remove a pod then the pod may be left in \"Terminating\" state. This is most obvious when draining the nnf-dm-worker
pods which usually have filesystems mounted in them.
Drain the CSI driver pod from a node by applying the cray.nnf.node.drain.csi
taint.
kubectl taint node $NODE cray.nnf.node.drain.csi=true:NoSchedule cray.nnf.node.drain.csi=true:NoExecute\n
To restore the CSI driver pods to that node, remove the cray.nnf.node.drain.csi
taint.
kubectl taint node $NODE cray.nnf.node.drain.csi-\n
This taint will also drain the remaining NNF software if it has not already been drained by the cray.nnf.node.drain
taint.
"},{"location":"guides/node-management/nvme-namespaces/","title":"Debugging NVMe Namespaces","text":""},{"location":"guides/node-management/nvme-namespaces/#total-space-available-or-used","title":"Total Space Available or Used","text":"Find the total space available, and the total space used, on a Rabbit node using the Redfish API. One way to access the API is to use the nnf-node-manager
pod on that node.
To view the space on node ee50, find its nnf-node-manager
pod and then exec into it to query the Redfish API:
[richerso@ee1:~]$ kubectl get pods -A -o wide | grep ee50 | grep node-manager\nnnf-system nnf-node-manager-jhglm 1/1 Running 0 61m 10.85.71.11 ee50 <none> <none>\n
Then query the Redfish API to view the AllocatedBytes
and GuaranteedBytes
:
[richerso@ee1:~]$ kubectl exec --stdin --tty -n nnf-system nnf-node-manager-jhglm -- curl -S localhost:50057/redfish/v1/StorageServices/NNF/CapacitySource | jq\n{\n \"@odata.id\": \"/redfish/v1/StorageServices/NNF/CapacitySource\",\n \"@odata.type\": \"#CapacitySource.v1_0_0.CapacitySource\",\n \"Id\": \"0\",\n \"Name\": \"Capacity Source\",\n \"ProvidedCapacity\": {\n \"Data\": {\n \"AllocatedBytes\": 128849888,\n \"ConsumedBytes\": 128849888,\n \"GuaranteedBytes\": 307132496928,\n \"ProvisionedBytes\": 307261342816\n },\n \"Metadata\": {},\n \"Snapshot\": {}\n },\n \"ProvidedClassOfService\": {},\n \"ProvidingDrives\": {},\n \"ProvidingPools\": {},\n \"ProvidingVolumes\": {},\n \"Actions\": {},\n \"ProvidingMemory\": {},\n \"ProvidingMemoryChunks\": {}\n}\n
"},{"location":"guides/node-management/nvme-namespaces/#total-orphaned-or-leaked-space","title":"Total Orphaned or Leaked Space","text":"To determine the amount of orphaned space, look at the Rabbit node when there are no allocations on it. If there are no allocations then there should be no NnfNodeBlockStorages
in the k8s namespace with the Rabbit's name:
[richerso@ee1:~]$ kubectl get nnfnodeblockstorage -n ee50\nNo resources found in ee50 namespace.\n
To check that there are no orphaned namespaces, you can use the nvme command while logged into that Rabbit node:
[root@ee50:~]# nvme list\nNode SN Model Namespace Usage Format FW Rev\n--------------------- -------------------- ---------------------------------------- --------- -------------------------- ---------------- --------\n/dev/nvme0n1 S666NN0TB11877 SAMSUNG MZ1L21T9HCLS-00A07 1 8.57 GB / 1.92 TB 512 B + 0 B GDC7302Q\n
There should be no namespaces on the kioxia drives:
[root@ee50:~]# nvme list | grep -i kioxia\n[root@ee50:~]#\n
If there are namespaces listed, and there weren't any NnfNodeBlockStorages
on the node, then they need to be deleted through the Rabbit software. The NnfNodeECData
resource is a persistent data store for the allocations that should exist on the Rabbit. Deleting it, and then deleting the nnf-node-manager pod, causes nnf-node-manager to delete the orphaned namespaces. The cleanup can take a few minutes after the pod is deleted:
kubectl delete nnfnodeecdata ec-data -n ee50\nkubectl delete pod -n nnf-system nnf-node-manager-jhglm\n
"},{"location":"guides/rbac-for-users/readme/","title":"RBAC: Role-Based Access Control","text":"RBAC (Role Based Access Control) determines the operations a user or service can perform on a list of Kubernetes resources. RBAC affects everything that interacts with the kube-apiserver (both users and services internal or external to the cluster). More information about RBAC can be found in the Kubernetes documentation.
"},{"location":"guides/rbac-for-users/readme/#rbac-for-users","title":"RBAC for Users","text":"This section shows how to create a kubeconfig file with RBAC set up to restrict access to view only for resources.
"},{"location":"guides/rbac-for-users/readme/#overview","title":"Overview","text":"User access to a Kubernetes cluster is defined through a kubeconfig file. This file contains the address of the kube-apiserver as well as the key and certificate for the user. Typically this file is located in ~/.kube/config
. When a kubernetes cluster is created, a config file is generated for the admin that allows unrestricted access to all resources in the cluster. This is the equivalent of root
on a Linux system.
The goal of this document is to create a new kubeconfig file that allows view-only access to Kubernetes resources. This kubeconfig file can be shared among HPE employees to investigate issues on the system. This involves:
- Generating a new key/cert pair for an \"hpe\" user
- Creating a new kubeconfig file
- Adding RBAC rules for the \"hpe\" user to allow read access
"},{"location":"guides/rbac-for-users/readme/#generate-a-key-and-certificate","title":"Generate a Key and Certificate","text":"The first step is to create a new key and certificate so that HPE employees can authenticate as the \"hpe\" user. This will likely be done on one of the master nodes. The openssl
command needs access to the certificate authority file. This is typically located in /etc/kubernetes/pki
.
# make a temporary work space\nmkdir /tmp/rabbit\ncd /tmp/rabbit\n\n# Create this user\nexport USERNAME=hpe\n\n# generate a new key\nopenssl genrsa -out rabbit.key 2048\n\n# create a certificate signing request for this user\nopenssl req -new -key rabbit.key -out rabbit.csr -subj \"/CN=$USERNAME\"\n\n# generate a certificate using the certificate authority on the k8s cluster. This certificate lasts 500 days\nopenssl x509 -req -in rabbit.csr -CA /etc/kubernetes/pki/ca.crt -CAkey /etc/kubernetes/pki/ca.key -CAcreateserial -out rabbit.crt -days 500\n
"},{"location":"guides/rbac-for-users/readme/#create-a-kubeconfig","title":"Create a kubeconfig","text":"After the keys have been generated, a new kubeconfig file can be created for this user. The admin kubeconfig /etc/kubernetes/admin.conf
can be used to determine the cluster name kube-apiserver address.
# create a new kubeconfig with the server information\nkubectl config set-cluster $CLUSTER_NAME --kubeconfig=/tmp/rabbit/rabbit.conf --server=$SERVER_ADDRESS --certificate-authority=/etc/kubernetes/pki/ca.crt --embed-certs=true\n\n# add the key and cert for this user to the config\nkubectl config set-credentials $USERNAME --kubeconfig=/tmp/rabbit/rabbit.conf --client-certificate=/tmp/rabbit/rabbit.crt --client-key=/tmp/rabbit/rabbit.key --embed-certs=true\n\n# add a context\nkubectl config set-context $USERNAME --kubeconfig=/tmp/rabbit/rabbit.conf --cluster=$CLUSTER_NAME --user=$USERNAME\n
The kubeconfig file should be placed in a location where HPE employees have read access to it.
"},{"location":"guides/rbac-for-users/readme/#create-clusterrole-and-clusterrolebinding","title":"Create ClusterRole and ClusterRoleBinding","text":"The next step is to create ClusterRole and ClusterRoleBinding resources. The ClusterRole provided allows viewing all cluster and namespace scoped resources, but disallows creating, deleting, or modifying any resources.
ClusterRole
apiVersion: rbac.authorization.k8s.io/v1\nkind: ClusterRole\nmetadata:\n name: hpe-viewer\nrules:\n - apiGroups: [ \"*\" ]\n resources: [ \"*\" ]\n verbs: [ get, list ]\n
ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1\nkind: ClusterRoleBinding\nmetadata:\n name: hpe-viewer\nsubjects:\n- kind: User\n name: hpe\n apiGroup: rbac.authorization.k8s.io\nroleRef:\n kind: ClusterRole\n name: hpe-viewer\n apiGroup: rbac.authorization.k8s.io\n
Both of these resources can be created using the kubectl apply
command.
"},{"location":"guides/rbac-for-users/readme/#testing","title":"Testing","text":"Get, List, Create, Delete, and Modify operations can be tested as the \"hpe\" user by setting the KUBECONFIG environment variable to use the new kubeconfig file. Get and List should be the only allowed operations. Other operations should fail with a \"forbidden\" error.
export KUBECONFIG=/tmp/rabbit/rabbit.conf\n
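For example, a get should succeed while a delete is rejected (any resource type will do; $NODE is a placeholder):
kubectl get nodes # allowed\nkubectl delete node $NODE # should fail with a \"forbidden\" error\n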
"},{"location":"guides/rbac-for-users/readme/#rbac-for-workload-manager-wlm","title":"RBAC for Workload Manager (WLM)","text":"Note This section assumes the reader has read and understood the steps described above for setting up RBAC for Users
.
A workload manager (WLM) such as Flux or Slurm will interact with DataWorkflowServices as a privileged user. RBAC is used to limit the operations that a WLM can perform on a Rabbit system.
The following steps are required to create a user and a role for the WLM. In this case, we're creating a user to be used with the Flux WLM:
- Generating a new key/cert pair for a \"flux\" user
- Creating a new kubeconfig file
- Adding RBAC rules for the \"flux\" user to allow appropriate access to the DataWorkflowServices API.
"},{"location":"guides/rbac-for-users/readme/#generate-a-key-and-certificate_1","title":"Generate a Key and Certificate","text":"Generate a key and certificate for our \"flux\" user, similar to the way we created one for the \"hpe\" user above. Substitute \"flux\" in place of \"hpe\".
"},{"location":"guides/rbac-for-users/readme/#create-a-kubeconfig_1","title":"Create a kubeconfig","text":"After the keys have been generated, a new kubeconfig file can be created for the \"flux\" user, similar to the one for the \"hpe\" user above. Again, substitute \"flux\" in place of \"hpe\".
"},{"location":"guides/rbac-for-users/readme/#use-the-provided-clusterrole-and-create-a-clusterrolebinding","title":"Use the provided ClusterRole and create a ClusterRoleBinding","text":"DataWorkflowServices has already defined the role to be used with WLMs, named dws-workload-manager
:
kubectl get clusterrole dws-workload-manager\n
If the \"flux\" user requires only the normal WLM permissions, then create and apply a ClusterRoleBinding to associate the \"flux\" user with the dws-workload-manager
ClusterRole.
The dws-workload-manager role is defined in workload_manager_role.yaml.
ClusterRoleBinding for WLM permissions only:
apiVersion: rbac.authorization.k8s.io/v1\nkind: ClusterRoleBinding\nmetadata:\n name: flux\nsubjects:\n- kind: User\n name: flux\n apiGroup: rbac.authorization.k8s.io\nroleRef:\n kind: ClusterRole\n name: dws-workload-manager\n apiGroup: rbac.authorization.k8s.io\n
If the \"flux\" user requires the normal WLM permissions as well as some of the NNF permissions, perhaps to collect some NNF resources for debugging, then create and apply a ClusterRoleBinding to associate the \"flux\" user with the nnf-workload-manager
ClusterRole.
The nnf-workload-manager
role is defined in workload_manager_nnf_role.yaml.
ClusterRoleBinding for WLM and NNF permissions:
apiVersion: rbac.authorization.k8s.io/v1\nkind: ClusterRoleBinding\nmetadata:\n name: flux\nsubjects:\n- kind: User\n name: flux\n apiGroup: rbac.authorization.k8s.io\nroleRef:\n kind: ClusterRole\n name: nnf-workload-manager\n apiGroup: rbac.authorization.k8s.io\n
The WLM should then use the kubeconfig file associated with this \"flux\" user to access the DataWorkflowServices API and the Rabbit system.
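One way to sanity check the binding is to impersonate the \"flux\" user with kubectl (a sketch; it assumes the ClusterRoleBinding above has been applied and that the DWS Workflow and Storage resources are served from the dataworkflowservices.github.io API group):
kubectl auth can-i get workflows.dataworkflowservices.github.io --as=flux\nkubectl auth can-i list storages.dataworkflowservices.github.io --as=flux\n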
"},{"location":"guides/storage-profiles/readme/","title":"Storage Profile Overview","text":"Storage Profiles allow for customization of the Rabbit storage provisioning process. Examples of content that can be customized via storage profiles is
- The RAID type used for storage
- Any mkfs or LVM args used
- An external MGS NID for Lustre
- A boolean value indicating whether the Lustre MGT and MDT should be combined on the same target device
DW directives that allocate storage on Rabbit nodes allow a profile
parameter to be specified to control how the storage is configured. NNF software provides a set of canned profiles to choose from, and the administrator may create more profiles.
The administrator shall choose one profile to be the default profile that is used when a profile parameter is not specified.
"},{"location":"guides/storage-profiles/readme/#specifying-a-profile","title":"Specifying a Profile","text":"To specify a profile name on a #DW directive, use the profile
option
#DW jobdw type=lustre profile=durable capacity=5GB name=example\n
"},{"location":"guides/storage-profiles/readme/#setting-a-default-profile","title":"Setting A Default Profile","text":"A default profile must be defined at all times. Any #DW line that does not specify a profile will use the default profile. If a default profile is not defined, then any new workflows will be rejected. If more than one profile is marked as default then any new workflows will be rejected.
To query existing profiles
$ kubectl get nnfstorageprofiles -A\nNAMESPACE NAME DEFAULT AGE\nnnf-system durable true 14s\nnnf-system performance false 6s\n
To set the default flag on a profile
$ kubectl patch nnfstorageprofile performance -n nnf-system --type merge -p '{\"data\":{\"default\":true}}'\n
To clear the default flag on a profile
$ kubectl patch nnfstorageprofile durable -n nnf-system --type merge -p '{\"data\":{\"default\":false}}'\n
"},{"location":"guides/storage-profiles/readme/#creating-the-initial-default-profile","title":"Creating The Initial Default Profile","text":"Create the initial default profile from scratch or by using the NnfStorageProfile/template resource as a template. If nnf-deploy
was used to install nnf-sos then the default profile described below will have been created automatically.
To use the template
resource begin by obtaining a copy of it either from the nnf-sos repo or from a live system. To get it from a live system use the following command:
kubectl get nnfstorageprofile -n nnf-system template -o yaml > profile.yaml\n
Edit the profile.yaml
file to trim the metadata section to contain only a name and namespace. The namespace must be left as nnf-system, but the name should be set to signify that this is the new default profile. In this example we will name it default
. The metadata section will look like the following, and will contain no other fields:
metadata:\n name: default\n namespace: nnf-system\n
Mark this new profile as the default profile by setting default: true
in the data section of the resource:
data:\n default: true\n
Apply this resource to the system and verify that it is the only one marked as the default resource:
kubectl get nnfstorageprofile -A\n
The output will appear similar to the following:
NAMESPACE NAME DEFAULT AGE\nnnf-system default true 9s\nnnf-system template false 11s\n
The administrator should edit the default
profile to record any cluster-specific settings. Maintain a copy of this resource YAML in a safe place so it isn't lost across upgrades.
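For example, the current default profile can be saved off with:
kubectl get nnfstorageprofile -n nnf-system default -o yaml > default-profile-backup.yaml\n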
"},{"location":"guides/storage-profiles/readme/#keeping-the-default-profile-updated","title":"Keeping The Default Profile Updated","text":"An upgrade of nnf-sos may include updates to the template
profile. It may be necessary to manually copy these updates into the default
profile.
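One way to spot such changes after an upgrade (filenames here are illustrative) is to dump both profiles and compare them:
kubectl get nnfstorageprofile -n nnf-system template -o yaml > template.yaml\nkubectl get nnfstorageprofile -n nnf-system default -o yaml > default.yaml\ndiff template.yaml default.yaml\n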
"},{"location":"guides/storage-profiles/readme/#profile-parameters","title":"Profile Parameters","text":""},{"location":"guides/storage-profiles/readme/#xfs","title":"XFS","text":"The following shows how to specify command line options for pvcreate, vgcreate, lvcreate, and mkfs for XFS storage. Optional mount options are specified one per line
apiVersion: nnf.cray.hpe.com/v1alpha1\nkind: NnfStorageProfile\nmetadata:\n name: xfs-stripe-example\n namespace: nnf-system\ndata:\n[...]\n xfsStorage:\n commandlines:\n pvCreate: $DEVICE\n vgCreate: $VG_NAME $DEVICE_LIST\n lvCreate: -l 100%VG --stripes $DEVICE_NUM --stripesize=32KiB --name $LV_NAME $VG_NAME\n mkfs: $DEVICE\n options:\n mountRabbit:\n - noatime\n - nodiratime\n[...]\n
"},{"location":"guides/storage-profiles/readme/#gfs2","title":"GFS2","text":"The following shows how to specify command line options for pvcreate, lvcreate, and mkfs for GFS2.
apiVersion: nnf.cray.hpe.com/v1alpha1\nkind: NnfStorageProfile\nmetadata:\n name: gfs2-stripe-example\n namespace: nnf-system\ndata:\n[...]\n gfs2Storage:\n commandlines:\n pvCreate: $DEVICE\n vgCreate: $VG_NAME $DEVICE_LIST\n lvCreate: -l 100%VG --stripes $DEVICE_NUM --stripesize=32KiB --name $LV_NAME $VG_NAME\n mkfs: -j2 -p $PROTOCOL -t $CLUSTER_NAME:$LOCK_SPACE $DEVICE\n[...]\n
"},{"location":"guides/storage-profiles/readme/#lustre-zfs","title":"Lustre / ZFS","text":"The following shows how to specify a zpool virtual device (vdev). In this case the default vdev is a stripe. See zpoolconcepts(7) for virtual device descriptions.
apiVersion: nnf.cray.hpe.com/v1alpha1\nkind: NnfStorageProfile\nmetadata:\n name: zpool-stripe-example\n namespace: nnf-system\ndata:\n[...]\n lustreStorage:\n mgtCommandlines:\n zpoolCreate: -O canmount=off -o cachefile=none $POOL_NAME $DEVICE_LIST\n mkfs: --mgs $VOL_NAME\n mdtCommandlines:\n zpoolCreate: -O canmount=off -o cachefile=none $POOL_NAME $DEVICE_LIST\n mkfs: --mdt --fsname=$FS_NAME --mgsnode=$MGS_NID --index=$INDEX $VOL_NAME\n mgtMdtCommandlines:\n zpoolCreate: -O canmount=off -o cachefile=none $POOL_NAME $DEVICE_LIST\n mkfs: --mgs --mdt --fsname=$FS_NAME --index=$INDEX $VOL_NAME\n ostCommandlines:\n zpoolCreate: -O canmount=off -o cachefile=none $POOL_NAME $DEVICE_LIST\n mkfs: --ost --fsname=$FS_NAME --mgsnode=$MGS_NID --index=$INDEX $VOL_NAME\n[...]\n
"},{"location":"guides/storage-profiles/readme/#zfs-dataset-properties","title":"ZFS dataset properties","text":"The following shows how to specify ZFS dataset properties in the --mkfsoptions
arg for mkfs.lustre. See zfsprops(7).
apiVersion: nnf.cray.hpe.com/v1alpha1\nkind: NnfStorageProfile\nmetadata:\n name: zpool-stripe-example\n namespace: nnf-system\ndata:\n[...]\n lustreStorage:\n[...]\n ostCommandlines:\n zpoolCreate: -O canmount=off -o cachefile=none $POOL_NAME $DEVICE_LIST\n mkfs: --ost --mkfsoptions=\"recordsize=1024K -o compression=lz4\" --fsname=$FS_NAME --mgsnode=$MGS_NID --index=$INDEX $VOL_NAME\n[...]\n
"},{"location":"guides/storage-profiles/readme/#mount-options-for-targets","title":"Mount Options for Targets","text":""},{"location":"guides/storage-profiles/readme/#persistent-mount-options","title":"Persistent Mount Options","text":"Use the mkfs.lustre --mountfsoptions
parameter to set persistent mount options for Lustre targets.
apiVersion: nnf.cray.hpe.com/v1alpha1\nkind: NnfStorageProfile\nmetadata:\n name: target-mount-option-example\n namespace: nnf-system\ndata:\n[...]\n lustreStorage:\n[...]\n ostCommandlines:\n zpoolCreate: -O canmount=off -o cachefile=none $POOL_NAME $DEVICE_LIST\n mkfs: --ost --mountfsoptions=\"errors=remount-ro,mballoc\" --mkfsoptions=\"recordsize=1024K -o compression=lz4\" --fsname=$FS_NAME --mgsnode=$MGS_NID --index=$INDEX $VOL_NAME\n[...]\n
"},{"location":"guides/storage-profiles/readme/#non-persistent-mount-options","title":"Non-Persistent Mount Options","text":"Non-persistent mount options can be specified with the ostOptions.mountTarget parameter to the NnfStorageProfile:
apiVersion: nnf.cray.hpe.com/v1alpha1\nkind: NnfStorageProfile\nmetadata:\n name: target-mount-option-example\n namespace: nnf-system\ndata:\n[...]\n lustreStorage:\n[...]\n ostCommandlines:\n zpoolCreate: -O canmount=off -o cachefile=none $POOL_NAME $DEVICE_LIST\n mkfs: --ost --mountfsoptions=\"errors=remount-ro\" --mkfsoptions=\"recordsize=1024K -o compression=lz4\" --fsname=$FS_NAME --mgsnode=$MGS_NID --index=$INDEX $VOL_NAME\n ostOptions:\n mountTarget:\n - mballoc\n[...]\n
"},{"location":"guides/storage-profiles/readme/#target-layout","title":"Target Layout","text":"Users may want Lustre file systems with different performance characteristics. For example, a user job with a single compute node accessing the Lustre file system would see acceptable performance from a single OSS. An FPP workload might want as many OSSs as posible to avoid contention.
The NnfStorageProfile
allows admins to specify where and how many Lustre targets are allocated by the WLM. During the proposal phase of the workflow, the NNF software uses the information in the NnfStorageProfile
to add extra constraints in the DirectiveBreakdown
. The WLM uses these constraints when picking storage.
The NnfStorageProfile
has three fields in the mgtOptions
, mdtOptions
, and ostOptions
to specify target layout. The fields are:
count
- A static value for how many Lustre targets to create. scale
- A value from 1-10 that the WLM can use to determine how many Lustre targets to allocate. This is up to the WLM and the admins to agree on how to interpret this field. A value of 1 might indicate the minimum number of NNF nodes needed to reach the minimum capacity, while 10 might result in a Lustre target on every Rabbit attached to the computes in the job. Scale takes into account allocation size, compute node count, and Rabbit count. colocateComputes
- true/false value. When \"true\", this adds a location constraint in the DirectiveBreakdown
that limits the WLM to picking storage with a physical connection to the compute resources. In practice this means that Rabbit storage is restricted to the chassis used by the job. This can be set individually for each of the Lustre target types. When this is \"false\", any Rabbit storage can be picked, even if the Rabbit doesn't share a chassis with any of the compute nodes in the job.
Only one of scale
and count
can be set for a particular target type.
The DirectiveBreakdown
for create_persistent
#DWs won't include the constraint from colocateCompute=true
since there may not be any compute nodes associated with the job.
apiVersion: nnf.cray.hpe.com/v1alpha1\nkind: NnfStorageProfile\nmetadata:\n name: high-metadata\n namespace: default\ndata:\n default: false\n...\n lustreStorage:\n combinedMgtMdt: false\n capacityMdt: 500GiB\n capacityMgt: 1GiB\n[...]\n ostOptions:\n scale: 5\n colocateComputes: true\n mdtOptions:\n count: 10\n
"},{"location":"guides/storage-profiles/readme/#example-layouts","title":"Example Layouts","text":"scale
with colocateComputes=true
will likely be the most common layout type to use for jobdw
directives. This will result in a Lustre file system whose performance scales with the number of compute nodes in the job.
count
may be used when a specific performance characteristic is desired such as a single shared file workload that has low metadata requirements and only needs a single MDT. It may also be useful when a consistently performing file system is required across different jobs.
colocatedComputes=false
may be useful for placing MDTs on NNF nodes without an OST (within the same file system).
The count
field may be useful when creating a persistent file system since the job with the create_persistent
directive may only have a single compute node.
In general, scale
gives a simple way for users to get a filesystem that has performance consistent with their job size. count
is useful for times when a user wants full control of the file system layout.
"},{"location":"guides/storage-profiles/readme/#command-line-variables","title":"Command Line Variables","text":""},{"location":"guides/storage-profiles/readme/#pvcreate","title":"pvcreate","text":" $DEVICE
- expands to the /dev/<path>
value for one device that has been allocated
"},{"location":"guides/storage-profiles/readme/#vgcreate","title":"vgcreate","text":" $VG_NAME
- expands to a volume group name that is controlled by Rabbit software. $DEVICE_LIST
- expands to a list of space-separated /dev/<path>
devices. This list will contain the devices that were iterated over for the pvcreate step.
"},{"location":"guides/storage-profiles/readme/#lvcreate","title":"lvcreate","text":" $VG_NAME
- see vgcreate above. $LV_NAME
- expands to a logical volume name that is controlled by Rabbit software. $DEVICE_NUM
- expands to a number indicating the number of devices allocated for the volume group. $DEVICE1, $DEVICE2, ..., $DEVICEn
- each expands to one of the devices from the $DEVICE_LIST
above.
"},{"location":"guides/storage-profiles/readme/#xfs-mkfs","title":"XFS mkfs","text":" $DEVICE
- expands to the /dev/<path>
value for the logical volume that was created by the lvcreate step above.
"},{"location":"guides/storage-profiles/readme/#gfs2-mkfs","title":"GFS2 mkfs","text":" $DEVICE
- expands to the /dev/<path>
value for the logical volume that was created by the lvcreate step above. $CLUSTER_NAME
- expands to a cluster name that is controlled by Rabbit Software $LOCK_SPACE
- expands to a lock space key that is controlled by Rabbit Software. $PROTOCOL
- expands to a locking protocol that is controlled by Rabbit Software.
"},{"location":"guides/storage-profiles/readme/#zpool-create","title":"zpool create","text":" $DEVICE_LIST
- expands to a list of space-separated /dev/<path>
devices. This list will contain the devices that were allocated for this storage request. $POOL_NAME
- expands to a pool name that is controlled by Rabbit software. $DEVICE_NUM
- expands to a number indicating the number of devices allocated for this storage request. $DEVICE1, $DEVICE2, ..., $DEVICEn
- each expands to one of the devices from the $DEVICE_LIST
above.
"},{"location":"guides/storage-profiles/readme/#lustre-mkfs","title":"lustre mkfs","text":" $FS_NAME
- expands to the filesystem name that was passed to Rabbit software from the workflow's #DW line. $MGS_NID
- expands to the NID of the MGS. If the MGS was orchestrated by nnf-sos then an appropriate internal value will be used. $POOL_NAME
- see zpool create above. $VOL_NAME
- expands to the volume name that will be created. This value will be <pool_name>/<dataset>
, and is controlled by Rabbit software. $INDEX
- expands to the index value of the target and is controlled by Rabbit software.
"},{"location":"guides/system-storage/readme/","title":"System Storage","text":""},{"location":"guides/system-storage/readme/#background","title":"Background","text":"System storage allows an admin to configure Rabbit storage without a DWS workflow. This is useful for making storage that is outside the scope of any job. One use case for system storage is to create a pair of LVM VGs on the Rabbit nodes that can be used to work around an lvmlockd
bug. The lockspace for the VGs can be started on the compute nodes, holding the lvm_global
lock open while other Rabbit VG lockspaces are started and stopped.
"},{"location":"guides/system-storage/readme/#nnfsystemstorage-resource","title":"NnfSystemStorage Resource","text":"System storage is created through the NnfSystemStorage
resource. By default, system storage creates an allocation on all Rabbits in the system and exposes the storage to all compute. This behavior can be modified through different fields in the NnfSystemStorage
resource. A NnfSystemStorage
storage resource has the following fields in its Spec
section:
Field Required Default Value Notes SystemConfiguration
No Empty ObjectReference
to the SystemConfiguration
to use By default, the default
/default
SystemConfiguration
is used IncludeRabbits
No Empty A list of Rabbit node names Rather than use all the Rabbits in the SystemConfiguration
, only use the Rabbits contained in this list ExcludeRabbits
No Empty A list of Rabbit node names Use all the Rabbits in the SystemConfiguration
except those contained in this list. IncludeComputes
No Empty A list of compute node names Rather than use the SystemConfiguration
to determine which computes are attached to the Rabbit nodes being used, only use the compute nodes contained in this list ExcludeComputes
No Empty A list of compute node names Use the SystemConfiguration
to determine which computes are attached to the Rabbits being used, but omit the computes contained in this list ComputesTarget
Yes all
all
,even
,odd
,pattern
Only use certain compute nodes based on their index as determined from the SystemConfiguration
. all
uses all computes. even
uses computes with an even index. odd
uses computes with an odd index. pattern
uses computes with the indexes specified in Spec.ComputesPattern
ComputesPattern
No Empty A list of integers [0-15] If ComputesTarget
is pattern
, then the storage is made available on compute nodes with the indexes specified in this list. Capacity
Yes 1073741824
Integer Number of bytes to allocate per Rabbit Type
Yes raw
raw
, xfs
, gfs2
Type of file system to create on the Rabbit storage StorageProfile
Yes None ObjectReference
to an NnfStorageProfile
. This storage profile must be marked as pinned
MakeClientMounts
Yes false
Create ClientMount
resources to mount the storage on the compute nodes. If this is false
, then the devices are made available to the compute nodes without mounting the file system ClientMountPath
No None Path to mount the file system on the compute nodes NnfSystemResources
can be created in any namespace.
"},{"location":"guides/system-storage/readme/#example","title":"Example","text":"apiVersion: nnf.cray.hpe.com/v1alpha1\nkind: NnfSystemStorage\nmetadata:\n name: gfs2-systemstorage\n namespace: systemstorage\nspec:\n excludeRabbits:\n - \"rabbit-1\"\n - \"rabbit-9\"\n - \"rabbit-14\"\n excludeComputes:\n - \"compute-32\"\n - \"compute-49\"\n type: \"gfs2\"\n capacity: 10000000000\n computesTarget: \"pattern\"\n computesPattern:\n - 0\n - 1\n - 2\n - 3\n - 4\n - 5\n - 6\n - 7\n makeClientMounts: true\n clientMountPath: \"/mnt/nnf/gfs2\"\n storageProfile:\n name: gfs2-systemstorage\n namespace: systemstorage\n kind: NnfStorageProfile\n
"},{"location":"guides/system-storage/readme/#lvmlockd-workaround","title":"lvmlockd Workaround","text":"System storage can be used to workaround an lvmlockd
bug that occurs when trying to start the lvm_global
lockspace. The lvm_global
lockspace is started only when there is a volume group lockspace that is started. After the last volume group lockspace is stopped, then the lvm_global
lockspace is stopped as well. To prevent the lvm_global
lockspace from being started and stopped so often, a volume group is created on the Rabbits and shared with the computes. The compute nodes can start the volume group lockspace and leave it open.
The system storage can also be used to check whether the PCIe cables are attached correctly between the Rabbit and compute nodes. If the cables are incorrect, then the PCIe switch will make NVMe namespaces available to the wrong compute node. An incorrect cable can only result in compute nodes that have PCIe connections switched with the other compute node in its pair. By creating two system storages, one for compute nodes with an even index, and one for compute nodes with an odd index, the PCIe connection can be verified by checking that the correct system storage is visible on a compute node.
"},{"location":"guides/system-storage/readme/#example_1","title":"Example","text":"The following example resources show how to create two system storages to use for the lvmlockd
workaround. Each system storage creates a raw
allocation with a volume group but no logical volume. This is the minimum LVM set up needed to start a lockspace on the compute nodes. A NnfStorageProfile
is created for each of the system storages. The NnfStorageProfile
specifies a tag during the vgcreate
that is used to differentiate between the two VGs. These resources are created in the systemstorage
namespace, but they could be created in any namespace.
apiVersion: nnf.cray.hpe.com/v1alpha1\nkind: NnfStorageProfile\nmetadata:\n name: lvmlockd_even\n namespace: systemstorage\ndata:\n xfsStorage:\n capacityScalingFactor: \"1.0\"\n lustreStorage:\n capacityScalingFactor: \"1.0\"\n gfs2Storage:\n capacityScalingFactor: \"1.0\"\n default: false\n pinned: true\n rawStorage:\n capacityScalingFactor: \"1.0\"\n commandlines:\n pvCreate: $DEVICE\n pvRemove: $DEVICE\n sharedVg: true\n vgChange:\n lockStart: --lock-start $VG_NAME\n lockStop: --lock-stop $VG_NAME\n vgCreate: --shared --addtag lvmlockd_even $VG_NAME $DEVICE_LIST\n vgRemove: $VG_NAME\n
apiVersion: nnf.cray.hpe.com/v1alpha1\nkind: NnfStorageProfile\nmetadata:\n name: lvmlockd_odd\n namespace: systemstorage\ndata:\n xfsStorage:\n capacityScalingFactor: \"1.0\"\n lustreStorage:\n capacityScalingFactor: \"1.0\"\n gfs2Storage:\n capacityScalingFactor: \"1.0\"\n default: false\n pinned: true\n rawStorage:\n capacityScalingFactor: \"1.0\"\n commandlines:\n pvCreate: $DEVICE\n pvRemove: $DEVICE\n sharedVg: true\n vgChange:\n lockStart: --lock-start $VG_NAME\n lockStop: --lock-stop $VG_NAME\n vgCreate: --shared --addtag lvmlockd_odd $VG_NAME $DEVICE_LIST\n vgRemove: $VG_NAME\n
Note that the NnfStorageProfile
resources are marked as default: false
and pinned: true
. This is required for NnfStorageProfiles
that are used for system storage. The commandLine
fields for LV commands are left empty so that no LV is created.
apiVersion: nnf.cray.hpe.com/v1alpha1\nkind: NnfSystemStorage\nmetadata:\n name: lvmlockd_even\n namespace: systemstorage\nspec:\n type: \"raw\"\n computesTarget: \"even\"\n makeClientMounts: false\n storageProfile:\n name: lvmlockd_even\n namespace: systemstorage\n kind: NnfStorageProfile\n
apiVersion: nnf.cray.hpe.com/v1alpha1\nkind: NnfSystemStorage\nmetadata:\n name: lvmlockd_odd\n namespace: systemstorage\nspec:\n type: \"raw\"\n computesTarget: \"odd\"\n makeClientMounts: false\n storageProfile:\n name: lvmlockd_odd\n namespace: systemstorage\n kind: NnfStorageProfile\n
The two NnfSystemStorage
resources each target all of the Rabbits but a different set of compute nodes. This will result in each Rabbit having two VGs and each compute node having one VG.
After the NnfSystemStorage
resources are created, the Rabbit software will create the storage on the Rabbit nodes and make the LVM VG available to the correct compute nodes. At this point, the status.ready
field will be true
. If an error occurs, the .status.error
field will describe the error.
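Once created, the resources can be checked for readiness, and the VG tag visible on a compute node shows which system storage reached it (a sketch; it assumes the resources are queryable under the nnfsystemstorages name and that LVM tools are installed on the compute node):
kubectl get nnfsystemstorages -n systemstorage lvmlockd_even -o jsonpath='{.status.ready}'\n# on a compute node with an even index, expect to see the lvmlockd_even tag\nvgs -o vg_name,vg_tags\n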
"},{"location":"guides/user-containers/readme/","title":"NNF User Containers","text":"NNF User Containers are a mechanism to allow user-defined containerized applications to be run on Rabbit nodes with access to NNF ephemeral and persistent storage.
"},{"location":"guides/user-containers/readme/#overview","title":"Overview","text":"Container workflows are orchestrated through the use of two components: Container Profiles and Container Directives. A Container Profile defines the container to be executed. Most importantly, it allows you to specify which NNF storages are accessible within the container and which container image to run. The containers are executed on the NNF nodes that are allocated to your container workflow. These containers can be executed in either of two modes: Non-MPI and MPI.
For Non-MPI applications, the image and command are launched across all the targeted NNF Nodes in a uniform manner. This is useful in simple applications, where non-distributed behavior is desired.
For MPI applications, a single launcher container serves as the point of contact, responsible for distributing tasks to various worker containers. Each of the NNF nodes targeted by the workflow receives its corresponding worker container. The focus of this documentation will be on MPI applications.
To see a full working example before diving into these docs, see Putting It All Together.
"},{"location":"guides/user-containers/readme/#before-creating-a-container-workflow","title":"Before Creating a Container Workflow","text":"Before creating a workflow, a working NnfContainerProfile
must exist. This profile is referenced in the container directive supplied with the workflow.
"},{"location":"guides/user-containers/readme/#container-profiles","title":"Container Profiles","text":"The author of a containerized application will work with the administrator to define a pod specification template for the container and to create an appropriate NnfContainerProfile
resource for the container. The image and tag for the user's container will be specified in the profile.
The image must be available in a registry that is available to your system. This could be docker.io, ghcr.io, etc., or a private registry. Note that for a private registry, some additional setup is required. See here for more info.
The image itself has a few requirements. See here for more info on building images.
New NnfContainerProfile
resources may be created by copying one of the provided example profiles from the nnf-system
namespace . The examples may be found by listing them with kubectl
:
kubectl get nnfcontainerprofiles -n nnf-system\n
The next few subsections provide an overview of the primary components comprising an NnfContainerProfile
. However, it's important to note that while these sections cover the key aspects, they don't encompass every single detail. For an in-depth understanding of the capabilities offered by container profiles, we recommend referring to the following resources:
- Type definition for
NnfContainerProfile
- Sample for
NnfContainerProfile
- Online Examples for
NnfContainerProfile
(same as kubectl get
above)
"},{"location":"guides/user-containers/readme/#container-storages","title":"Container Storages","text":"The Storages
defined in the profile allow NNF filesystems to be made available inside of the container. These storages need to be referenced in the container workflow unless they are marked as optional.
There are three types of storages available to containers:
- local non-persistent storage (created via
#DW jobdw
directives) - persistent storage (created via
#DW create_persistent
directives) - global lustre storage (defined by
LustreFilesystems
)
For local and persistent storage, only GFS2 and Lustre filesystems are supported. Raw and XFS filesystems cannot be mounted more than once, so they cannot be mounted inside of a container while also being mounted on the NNF node itself.
For each storage in the profile, the name must follow these patterns (depending on the storage type):
DW_JOB_<storage_name>
DW_PERSISTENT_<storage_name>
DW_GLOBAL_<storage_name>
<storage_name>
is provided by the user and needs to be a name compatible with Linux environment variables (so underscores must be used, not dashes), since the storage mount directories are provided to the container via environment variables.
This storage name is used in container workflow directives to reference the NNF storage name that defines the filesystem. Find more info on that in Creating a Container Workflow.
Storages may be deemed as optional
in a profile. If a storage is not optional, the storage name must be set to the name of an NNF filesystem name in the container workflow.
For global lustre, there is an additional field for pvcMode
, which must match the mode that is configured in the LustreFilesystem
resource that represents the global lustre filesystem. This defaults to ReadWriteMany
.
Example:
storages:\n - name: DW_JOB_foo_local_storage\n optional: false\n - name: DW_PERSISTENT_foo_persistent_storage\n optional: true\n - name: DW_GLOBAL_foo_global_lustre\n optional: true\n pvcMode: ReadWriteMany\n
"},{"location":"guides/user-containers/readme/#container-spec","title":"Container Spec","text":"As mentioned earlier, container workflows can be categorized into two types: MPI and Non-MPI. It's essential to choose and define only one of these types within the container profile. Regardless of the type chosen, the data structure that implements the specification is equipped with two \"standard\" resources that are distinct from NNF custom resources.
For Non-MPI containers, the specification utilizes the spec
resource. This is the standard Kubernetes PodSpec
that outlines the desired configuration for the pod.
For MPI containers, mpiSpec
is used. This custom resource, available through MPIJobSpec
from mpi-operator
, serves as a facilitator for executing MPI applications across worker containers. This resource can be likened to a wrapper around a PodSpec
, but users need to define a PodSpec
for both Launcher and Worker containers.
See the MPIJobSpec
definition for more details on what can be configured for an MPI application.
It's important to bear in mind that the NNF Software is designed to override specific values within the MPIJobSpec
for ensuring the desired behavior in line with NNF software requirements. To prevent complications, it's advisable not to delve too deeply into the specification. A few illustrative examples of fields that are overridden by the NNF Software include:
- Replicas
- RunPolicy.BackoffLimit
- Worker/Launcher.RestartPolicy
- SSHAuthMountPath
By keeping these considerations in mind and refraining from extensive alterations to the specification, you can ensure a smoother integration with the NNF Software and mitigate any potential issues that may arise.
Please see the Sample and Examples listed above for more detail on container Specs.
"},{"location":"guides/user-containers/readme/#container-ports","title":"Container Ports","text":"Container Profiles allow for ports to be reserved for a container workflow. numPorts
can be used to specify the number of ports needed for a container workflow. The ports are opened on each targeted NNF node and are accessible outside of the cluster. Users must know how to contact the specific NNF node. It is recommend that DNS entries are made for this purpose.
In the workflow, the allocated port numbers are made available via the NNF_CONTAINER_PORTS
environment variable.
The workflow requests this number of ports from the NnfPortManager
, which is responsible for managing the ports allocated to container workflows. This resource can be inspected to see which ports are allocated.
Once a port is assigned to a workflow, that port number becomes unavailable for use by any other workflow until it is released.
Note
The SystemConfiguration
must be configured to allow for a range of ports, otherwise container workflows will fail in the Setup
state due to insufficient resources. See SystemConfiguration Setup.
"},{"location":"guides/user-containers/readme/#systemconfiguration-setup","title":"SystemConfiguration Setup","text":"In order for container workflows to request ports from the NnfPortManager
, the SystemConfiguration
must be configured for a range of ports:
kind: SystemConfiguration\nmetadata:\n name: default\n namespace: default\nspec:\n # Ports is the list of ports available for communication between nodes in the\n # system. Valid values are single integers, or a range of values of the form\n # \"START-END\" where START is an integer value that represents the start of a\n # port range and END is an integer value that represents the end of the port\n # range (inclusive).\n ports:\n - 4000-4999\n # PortsCooldownInSeconds is the number of seconds to wait before a port can be\n # reused. Defaults to 60 seconds (to match the typical value for the kernel's\n # TIME_WAIT). A value of 0 means the ports can be reused immediately.\n # Defaults to 60s if not set.\n portsCooldownInSeconds: 60\n
ports
is empty by default, and must be set by an administrator.
Multiple port ranges can be specified in this list, as well as single integers. This must be a safe port range that does not interfere with the ephemeral port range of the Linux kernel. The range should also account for the estimated number of simultaneous users that are running container workflows.
Once a container workflow is done, the port is released and the NnfPortManager
will not allow reuse of the port until the amount of time specified by portsCooldownInSeconds
has elapsed. Then the port can be reused by another container workflow.
"},{"location":"guides/user-containers/readme/#restricting-to-user-id-or-group-id","title":"Restricting To User ID or Group ID","text":"New NnfContainerProfile resources may be restricted to a specific user ID or group ID . When a data.userID
or data.groupID
is specified in the profile, only those Workflow resources having a matching user ID or group ID will be allowed to use that profile . If the profile specifies both of these IDs, then the Workflow resource must match both of them.
"},{"location":"guides/user-containers/readme/#creating-a-container-workflow","title":"Creating a Container Workflow","text":"The user's workflow will specify the name of the NnfContainerProfile
in a DW directive. If the custom profile is named red-rock-slushy
then it will be specified in the #DW container
directive with the profile
parameter.
#DW container profile=red-rock-slushy [...]\n
Furthermore, to set the container storages for the workflow, storage parameters must also be supplied in the workflow. This is done using the <storage_name>
(see Container Storages) and setting it to the name of a storage directive that defines an NNF filesystem. That storage directive must already exist as part of another workflow (e.g. persistent storage) or it can be supplied in the same workflow as the container. For global lustre, the LustreFilesystem
must exist that represents the global lustre filesystem.
In this example, we're creating a GFS2 filesystem to accompany the container directive. We're using the red-rock-slushy
profile which contains a non-optional storage called DW_JOB_local_storage
:
kind: NnfContainerProfile\nmetadata:\n name: red-rock-slushy\ndata:\n storages:\n - name: DW_JOB_local_storage\n optional: false\n template:\n mpiSpec:\n ...\n
The resulting container directive looks like this:
#DW jobdw name=my-gfs2 type=gfs2 capacity=100GB\"\n#DW container name=my-container profile=red-rock-slushy DW_JOB_local_storage=my-gfs2\n
Once the workflow progresses, this will create a 100GB GFS2 filesystem that is then mounted into the container upon creation. An environment variable called DW_JOB_local_storage
is made available inside of the container and provides the path to the mounted NNF GFS2 filesystem. An application running inside of the container can then use this variable to get to the filesystem mount directory. See here.
Multiple storages can be defined in the container directives. Only one container directive is allowed per workflow.
Note
GFS2 filesystems have special considerations since the mount directory contains directories for every compute node. See GFS2 Index Mounts for more info.
"},{"location":"guides/user-containers/readme/#targeting-nodes","title":"Targeting Nodes","text":"For container directives, compute nodes must be assigned to the workflow. The NNF software will trace the compute nodes back to their local NNF nodes and the containers will be executed on those NNF nodes. The act of assigning compute nodes to your container workflow instructs the NNF software to select the NNF nodes that run the containers.
For the jobdw
directive that is included above, the servers (i.e. NNF nodes) must also be assigned along with the computes.
"},{"location":"guides/user-containers/readme/#running-a-container-workflow","title":"Running a Container Workflow","text":"Once the workflow is created, the WLM progresses it through the following states. This is a quick overview of the container-related behavior that occurs:
- Proposal: Verify storages are provided according to the container profile.
- Setup: If applicable, request ports from NnfPortManager.
- DataIn: No container related activity.
- PreRun: Appropriate
MPIJob
or Job(s)
are created for the workflow. In turn, user containers are created and launched by Kubernetes. Containers are expected to start in this state. - PostRun: Once in PostRun, user containers are expected to complete (non-zero exit) successfully.
- DataOut: No container related activity.
- Teardown: Ports are released;
MPIJob
or Job(s)
are deleted, which in turn deletes the user containers.
The two main states of a container workflow (i.e. PreRun, PostRun) are discussed further in the following sections.
"},{"location":"guides/user-containers/readme/#prerun","title":"PreRun","text":"In PreRun, the containers are created and expected to start. Once the containers reach a non-initialization state (i.e. Running), the containers are considered to be started and the workflow can advance.
By default, containers are expected to start within 60 seconds. If not, the workflow reports an Error that the containers cannot be started. This value is configurable via the preRunTimeoutSeconds
field in the container profile.
To summarize the PreRun behavior:
- If the container starts successfully (running), transition to
Completed
status. - If the container fails to start, transition to the
Error
status. - If the container is initializing and has not started after
preRunTimeoutSeconds
seconds, terminate the container and transition to the Error
status.
"},{"location":"guides/user-containers/readme/#init-containers","title":"Init Containers","text":"The NNF Software injects Init Containers into the container specification to perform initialization tasks. These containers must run to completion before the main container can start.
These initialization tasks include:
- Ensuring the proper permissions (i.e. UID/GID) are available in the main container
- For MPI jobs, ensuring the launcher pod can contact each worker pod via DNS
"},{"location":"guides/user-containers/readme/#prerun-completed","title":"PreRun Completed","text":"Once PreRun has transitioned to Completed
status, the user container is now running and the WLM should initiate applications on the compute nodes. Utilizing container ports, the applications on the compute nodes can establish communication with the user containers, which are running on the local NNF node attached to the computes.
This communication allows for the compute node applications to drive certain behavior inside of the user container. For example, once the compute node application is complete, it can signal to the user container that it is time to perform cleanup or data migration action.
"},{"location":"guides/user-containers/readme/#postrun","title":"PostRun","text":"In PostRun, the containers are expected to exit cleanly with a zero exit code. If a container fails to exit cleanly, the Kubernetes software attempts a number of retries based on the configuration of the container profile. It continues to do this until the container exits successfully, or until the retryLimit
is hit - whichever occurs first. In the latter case, the workflow reports an Error.
Read up on the Failure Retries for more information on retries.
Furthermore, the container profile features a postRunTimeoutSeconds
field. If this timeout is reached before the container successfully exits, it triggers an Error
status. The timer for this timeout begins upon entry into the PostRun phase, allowing the containers the specified period to execute before the workflow enters an Error
status.
To recap the PostRun behavior:
- If the container exits successfully, transition to
Completed
status. - If the container exits unsuccessfully after
retryLimit
number of retries, transition to the Error
status. - If the container is running and has not exited after
postRunTimeoutSeconds
seconds, terminate the container and transition to the Error
status.
"},{"location":"guides/user-containers/readme/#failure-retries","title":"Failure Retries","text":"If a container fails (non-zero exit code), the Kubernetes software implements retries. The number of retries can be set via the retryLimit
field in the container profile. If a non-zero exit code is detected, the Kubernetes software creates a new instance of the pod and retries. The default number of retries for retryLimit
is set to 6, which is the default value for Kubernetes Jobs. This means that if the pods fails every single time, there will be 7 failed pods in total since it attempted 6 retries after the first failure.
To understand this behavior more, see Pod backoff failure policy in the Kubernetes documentation. This explains the retry (i.e. backoff) behavior in more detail.
It is important to note that due to the configuration of the MPIJob
and/or Job
that is created for User Containers, the container retries are immediate - there is no backoff timeout between retires. This is due to the NNF Software setting the RestartPolicy
to Never
, which causes a new pod to spin up after every failure rather than re-use (i.e. restart) the previously failed pod. This allows a user to see a complete history of the failed pod(s) and the logs can easily be obtained. See more on this at Handling Pod and container failures in the Kubernetes documentation.
"},{"location":"guides/user-containers/readme/#putting-it-all-together","title":"Putting it All Together","text":"See the NNF Container Example for a working example of how to run a simple MPI application inside of an NNF User Container and run it through a Container Workflow.
"},{"location":"guides/user-containers/readme/#reference","title":"Reference","text":""},{"location":"guides/user-containers/readme/#environment-variables","title":"Environment Variables","text":"Two sets of environment variables are available with container workflows: Container and Compute Node. The former are the variables that are available inside the user containers. The latter are the variables that are provided back to the DWS workflow, which in turn are collected by the WLM and provided to compute nodes. See the WLM documentation for more details.
"},{"location":"guides/user-containers/readme/#container-environment-variables","title":"Container Environment Variables","text":"These variables are provided for use inside the container. They can be used as part of the container command in the NNF Container Profile or within the container itself.
"},{"location":"guides/user-containers/readme/#storages","title":"Storages","text":"Each storage defined by a container profile and used in a container workflow results in a corresponding environment variable. This variable is used to hold the mount directory of the filesystem.
"},{"location":"guides/user-containers/readme/#gfs2-index-mounts","title":"GFS2 Index Mounts","text":"When using a GFS2 file system, each compute is allocated its own NNF volume. The NNF software mounts a collection of directories that are indexed (e.g. 0/
, 1/
, etc) to the compute nodes.
Application authors must be aware that their desired GFS2 mount-point is really a collection of directories, one for each compute node. It is the responsibility of the author to understand the underlying filesystem mounted at the storage environment variable (e.g. $DW_JOB_my_gfs2_storage
).
Each compute node's application can leave breadcrumbs (e.g. hostnames) somewhere on the GFS2 filesystem mounted on the compute node. This can be used to identify the index mount directory to a compute node from the application running inside of the user container.
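A minimal sketch of that idea is shown below; the breadcrumb file name whoami and the storage name are illustrative. The compute-side application writes its hostname into its own mount, and the user container scans the index directories to map each one to a compute node.
# On each compute node, from the job application:
hostname > "$DW_JOB_my_gfs2_storage/whoami"

# Inside the user container on the NNF node:
for dir in "$DW_JOB_my_gfs2_storage"/*; do
    echo "index directory $(basename "$dir") belongs to $(cat "$dir/whoami")"
done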
Here is an example of 3 compute nodes on an NNF node targeted in a GFS2 workflow:
$ ls $DW_JOB_my_gfs2_storage/*\n/mnt/nnf/3e92c060-ca0e-4ddb-905b-3d24137cbff4-0/0\n/mnt/nnf/3e92c060-ca0e-4ddb-905b-3d24137cbff4-0/1\n/mnt/nnf/3e92c060-ca0e-4ddb-905b-3d24137cbff4-0/2\n
Node positions are not absolute locations. The WLM could, in theory, select 6 physical compute nodes at physical locations 1, 2, 3, 5, 8, and 13, which would appear as directories /0
through /5
in the container mount path.
Additionally, not all container instances will see the same number of compute nodes in an indexed-mount scenario. If 17 compute nodes are required for the job, the WLM may assign 16 compute nodes to one NNF node and 1 compute node to another. The first NNF node would have 16 index directories, whereas the second would contain only 1.
"},{"location":"guides/user-containers/readme/#hostnames-and-domains","title":"Hostnames and Domains","text":"Containers can contact one another via Kubernetes cluster networking. This functionality is provided by DNS. Environment variables are provided that allow a user to be able to piece together the FQDN so that the other containers can be contacted.
This example demonstrates an MPI container workflow, with two worker pods. Two worker pods means two pods/containers running on two NNF nodes.
"},{"location":"guides/user-containers/readme/#ports","title":"Ports","text":"See the NNF_CONTAINER_PORTS
section under Compute Node Environment Variables.
mpiuser@my-container-workflow-launcher:~$ env | grep NNF\nNNF_CONTAINER_HOSTNAMES=my-container-workflow-launcher my-container-workflow-worker-0 my-container-workflow-worker-1\nNNF_CONTAINER_DOMAIN=default.svc.cluster.local\nNNF_CONTAINER_SUBDOMAIN=my-container-workflow-worker\n
The container FQDN consists of the following: <HOSTNAME>.<SUBDOMAIN>.<DOMAIN>
. To contact the other worker container from worker 0, my-container-workflow-worker-1.my-container-workflow-worker.default.svc.cluster.local
would be used.
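As a sketch, the FQDN can be assembled from the variables above and verified from inside a container (the worker name is taken from the example output; nslookup is one of the suggested image components):
TARGET="my-container-workflow-worker-1.${NNF_CONTAINER_SUBDOMAIN}.${NNF_CONTAINER_DOMAIN}"
nslookup "$TARGET"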
For MPI-based containers, an alternate way to retrieve this information is to look at the default hostfile
, provided by mpi-operator
. This file lists out all the worker nodes' FQDNs:
mpiuser@my-container-workflow-launcher:~$ cat /etc/mpi/hostfile\nmy-container-workflow-worker-0.my-container-workflow-worker.default.svc slots=1\nmy-container-workflow-worker-1.my-container-workflow-worker.default.svc slots=1\n
"},{"location":"guides/user-containers/readme/#compute-node-environment-variables","title":"Compute Node Environment Variables","text":"These environment variables are provided to the compute node via the WLM by way of the DWS Workflow. Note that these environment variables are consistent across all the compute nodes for a given workflow.
Note
It's important to note that the variables presented here pertain exclusively to User Container-related variables. This list does not encompass the entirety of NNF environment variables accessible to the compute node through the Workload Manager (WLM)
"},{"location":"guides/user-containers/readme/#nnf_container_ports","title":"NNF_CONTAINER_PORTS
","text":"If the NNF Container Profile requests container ports, then this environment variable provides the allocated ports for the container. This is a comma separated list of ports if multiple ports are requested.
This allows an application on the compute node to contact the user container running on its local NNF node via these port numbers. The compute node must have proper routing to the NNF node and needs a generic way of contacting it. It is suggested that a DNS entry be provided via /etc/hosts
, or similar.
For cases where one port is requested, the following can be used to contact the user container running on the NNF node (assuming a DNS entry for local-rabbit
is provided via /etc/hosts
).
local-rabbit:$(NNF_CONTAINER_PORTS)\n
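If more than one port was requested, the comma separated list can be split on the compute node before contacting the container. A sketch, reusing the assumed local-rabbit entry and further assuming the container speaks HTTP on its first port:
IFS=',' read -r -a ports <<< "$NNF_CONTAINER_PORTS"
curl "http://local-rabbit:${ports[0]}/"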
"},{"location":"guides/user-containers/readme/#creating-images","title":"Creating Images","text":"For details, refer to the NNF Container Example Readme. However, in broad terms, an image that is capable of supporting MPI necessitates the following components:
- User Application: Your specific application
- Open MPI: Incorporate Open MPI to facilitate MPI operations
- SSH Server: Including an SSH server to enable communication
- nslookup: To validate Launcher/Worker container communication over the network
By ensuring the presence of these components, users can create an image that supports MPI operations on the NNF platform.
The nnf-mfu image serves as a suitable base image, encompassing all the essential components required for this purpose.
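A minimal sketch of building such an image on top of nnf-mfu follows; the application name and tag are illustrative.
cat > Dockerfile <<'EOF'
# nnf-mfu already provides Open MPI, an SSH server, and nslookup;
# only the user application needs to be added.
FROM ghcr.io/nearnodeflash/nnf-mfu:latest
COPY my-mpi-app /usr/bin/my-mpi-app
EOF
docker build -t my-mpi-app:v1.0 .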
"},{"location":"guides/user-containers/readme/#using-a-private-container-repository","title":"Using a Private Container Repository","text":"The user's containerized application may be placed in a private repository . In this case, the user must define an access token to be used with that repository, and that token must be made available to the Rabbit's Kubernetes environment so that it can pull that container from the private repository.
See Pull an Image from a Private Registry in the Kubernetes documentation for more information.
"},{"location":"guides/user-containers/readme/#about-the-example","title":"About the Example","text":"Each container registry will have its own way of letting its users create tokens to be used with their repositories . Docker Hub will be used for the private repository in this example, and the user's account on Docker Hub will be \"dean\".
"},{"location":"guides/user-containers/readme/#preparing-the-private-repository","title":"Preparing the Private Repository","text":"The user's application container is named \"red-rock-slushy\" . To store this container on Docker Hub the user must log into docker.com with their browser and click the \"Create repository\" button to create a repository named \"red-rock-slushy\", and the user must check the box that marks the repository as private . The repository's name will be displayed as \"dean/red-rock-slushy\" with a lock icon to show that it is private.
"},{"location":"guides/user-containers/readme/#create-and-push-a-container","title":"Create and Push a Container","text":"The user will create their container image in the usual ways, naming it for their private repository and tagging it according to its release.
Prior to pushing images to the repository, the user must complete a one-time login to the Docker registry using the docker command-line tool.
docker login -u dean\n
After completing the login, the user may then push their images to the repository.
docker push dean/red-rock-slushy:v1.0\n
"},{"location":"guides/user-containers/readme/#generate-a-read-only-token","title":"Generate a Read-Only Token","text":"A read-only token must be generated to allow Kubernetes to pull that container image from the private repository, because Kubernetes will not be running as that user . This token must be given to the administrator, who will use it to create a Kubernetes secret.
To log in and generate a read-only token to share with the administrator, the user must follow these steps:
- Visit docker.com and log in using their browser.
- Click on the username in the upper right corner.
- Select \"Account Settings\" and navigate to \"Security\".
- Click the \"New Access Token\" button to create a read-only token.
- Keep a copy of the generated token to share with the administrator.
"},{"location":"guides/user-containers/readme/#store-the-read-only-token-as-a-kubernetes-secret","title":"Store the Read-Only Token as a Kubernetes Secret","text":"The administrator must store the user's read-only token as a kubernetes secret . The secret must be placed in the default
namespace, which is the same namespace where the user containers will be run. The secret must include the user's Docker Hub username and the email address they have associated with that username. In this case, the secret will be named readonly-red-rock-slushy
.
USER_TOKEN=users-token-text\nUSER_NAME=dean\nUSER_EMAIL=dean@myco.com\nSECRET_NAME=readonly-red-rock-slushy\nkubectl create secret docker-registry $SECRET_NAME -n default --docker-server=\"https://index.docker.io/v1/\" --docker-username=$USER_NAME --docker-password=$USER_TOKEN --docker-email=$USER_EMAIL\n
"},{"location":"guides/user-containers/readme/#add-the-secret-to-the-nnfcontainerprofile","title":"Add the Secret to the NnfContainerProfile","text":"The administrator must add an imagePullSecrets
list to the NnfContainerProfile resource that was created for this user's containerized application.
The following profile shows the placement of the readonly-red-rock-slushy
secret which was created in the previous step, and points to the user's dean/red-rock-slushy:v1.0
container.
apiVersion: nnf.cray.hpe.com/v1alpha1\nkind: NnfContainerProfile\nmetadata:\n name: red-rock-slushy\n namespace: nnf-system\ndata:\n pinned: false\n retryLimit: 6\n spec:\n imagePullSecrets:\n - name: readonly-red-rock-slushy\n containers:\n - command:\n - /users-application\n image: dean/red-rock-slushy:v1.0\n name: red-rock-app\n storages:\n - name: DW_JOB_foo_local_storage\n optional: false\n - name: DW_PERSISTENT_foo_persistent_storage\n optional: true\n
Now any user can select this profile in their Workflow by specifying it in a #DW container
directive.
#DW container profile=red-rock-slushy [...]\n
"},{"location":"guides/user-containers/readme/#using-a-private-container-repository-for-mpi-application-containers","title":"Using a Private Container Repository for MPI Application Containers","text":"If our user's containerized application instead contains an MPI application, because perhaps it's a private copy of nnf-mfu, then the administrator would insert two imagePullSecrets
lists into the mpiSpec
of the NnfContainerProfile for the MPI launcher and the MPI worker.
apiVersion: nnf.cray.hpe.com/v1alpha1\nkind: NnfContainerProfile\nmetadata:\n name: mpi-red-rock-slushy\n namespace: nnf-system\ndata:\n mpiSpec:\n mpiImplementation: OpenMPI\n mpiReplicaSpecs:\n Launcher:\n template:\n spec:\n imagePullSecrets:\n - name: readonly-red-rock-slushy\n containers:\n - command:\n - mpirun\n - dcmp\n - $(DW_JOB_foo_local_storage)/0\n - $(DW_JOB_foo_local_storage)/1\n image: dean/red-rock-slushy:v2.0\n name: red-rock-launcher\n Worker:\n template:\n spec:\n imagePullSecrets:\n - name: readonly-red-rock-slushy\n containers:\n - image: dean/red-rock-slushy:v2.0\n name: red-rock-worker\n runPolicy:\n cleanPodPolicy: Running\n suspend: false\n slotsPerWorker: 1\n sshAuthMountPath: /root/.ssh\n pinned: false\n retryLimit: 6\n storages:\n - name: DW_JOB_foo_local_storage\n optional: false\n - name: DW_PERSISTENT_foo_persistent_storage\n optional: true\n
Now any user can select this profile in their Workflow by specifying it in a #DW container
directive.
#DW container profile=mpi-red-rock-slushy [...]\n
"},{"location":"guides/user-interactions/readme/","title":"Rabbit User Interactions","text":""},{"location":"guides/user-interactions/readme/#overview","title":"Overview","text":"A user may include one or more Data Workflow directives in their job script to request Rabbit services. Directives take the form #DW [command] [command args]
, and are passed from the workload manager to the Rabbit software for processing. The directives can be used to allocate Rabbit file systems, copy files, and run user containers on the Rabbit nodes.
Once the job is running on compute nodes, the application can find access to Rabbit specific resources through a set of environment variables that provide mount and network access information.
"},{"location":"guides/user-interactions/readme/#commands","title":"Commands","text":""},{"location":"guides/user-interactions/readme/#jobdw","title":"jobdw","text":"The jobdw
directive command tells the Rabbit software to create a file system on the Rabbit hardware for the lifetime of the user's job. At the end of the job, any data that is not moved off of the file system either by the application or through a copy_out
directive will be lost. Multiple jobdw
directives can be listed in the same job script.
"},{"location":"guides/user-interactions/readme/#command-arguments","title":"Command Arguments","text":"Argument Required Value Notes type
Yes raw
, xfs
, gfs2
, lustre
Type defines how the storage should be formatted. For Lustre file systems, a single file system is created that is mounted by all computes in the job. For raw, xfs, and GFS2 storage, a separate file system is allocated for each compute node. capacity
Yes Allocation size with units. 1TiB
, 100GB
, etc. Capacity interpretation varies by storage type. For Lustre file systems, capacity is the aggregate OST capacity. For raw, xfs, and GFS2 storage, capacity is the capacity of the file system for a single compute node. Capacity suffixes are: KB
, KiB
, MB
, MiB
, GB
, GiB
, TB
, TiB
name
Yes String including numbers and '-' This is a name for the storage allocation that is unique within a job profile
No Profile name This specifies which profile to use when allocating storage. Profiles include mkfs
and mount
arguments, file system layout, and many other options. Profiles are created by admins. When no profile is specified, the default profile is used. More information about storage profiles can be found in the Storage Profiles guide. requires
No copy-offload
Using this option results in the copy offload daemon running on the compute nodes. This is for users that want to initiate data movement to or from the Rabbit storage from within their application. See the Required Daemons section of the Directive Breakdown guide for a description of how the user may request the daemon, in the case where the WLM will run it only on demand."},{"location":"guides/user-interactions/readme/#examples","title":"Examples","text":"#DW jobdw type=xfs capacity=10GiB name=scratch\n
This directive results in a 10GiB xfs file system created for each compute node in the job using the default storage profile.
#DW jobdw type=lustre capacity=1TB name=dw-temp profile=high-metadata\n
This directive results in a single 1TB Lustre file system being created that can be accessed from all the compute nodes in the job. It is using a storage profile that an admin created to give high Lustre metadata performance.
#DW jobdw type=gfs2 capacity=50GB name=checkpoint requires=copy-offload\n
This directive results in a 50GB GFS2 file system created for each compute node in the job using the default storage profile. The copy-offload daemon is started on the compute node to allow the application to request the Rabbit to move data from the GFS2 file system to another file system while the application is running using the Copy Offload API.
"},{"location":"guides/user-interactions/readme/#create_persistent","title":"create_persistent","text":"The create_persistent
command results in a storage allocation on the Rabbit nodes that lasts beyond the lifetime of the job. This is useful for creating a file system that can share data between jobs. Only a single create_persistent
directive is allowed in a job, and it cannot be in the same job as a destroy_persistent
directive. See persistentdw to utilize the storage in a job.
"},{"location":"guides/user-interactions/readme/#command-arguments_1","title":"Command Arguments","text":"Argument Required Value Notes type
Yes raw
, xfs
, gfs2
, lustre
Type defines how the storage should be formatted. For Lustre file systems, a single file system is created. For raw, xfs, and GFS2 storage, a separate file system is allocated for each compute node in the job. capacity
Yes Allocation size with units. 1TiB
, 100GB
, etc. Capacity interpretation varies by storage type. For Lustre file systems, capacity is the aggregate OST capacity. For raw, xfs, and GFS2 storage, capacity is the capacity of the file system for a single compute node. Capacity suffixes are: KB
, KiB
, MB
, MiB
, GB
, GiB
, TB
, TiB
name
Yes Lowercase string including numbers and '-' This is a name for the storage allocation that is unique within the system profile
No Profile name This specifies which profile to use when allocating storage. Profiles include mkfs
and mount
arguments, file system layout, and many other options. Profiles are created by admins. When no profile is specified, the default profile is used. The profile used when creating the persistent storage allocation is the same profile used by jobs that use the persistent storage. More information about storage profiles can be found in the Storage Profiles guide."},{"location":"guides/user-interactions/readme/#examples_1","title":"Examples","text":"#DW create_persistent type=xfs capacity=100GiB name=scratch\n
This directive results in a 100GiB xfs file system created for each compute node in the job using the default storage profile. Since xfs file systems are not network accessible, subsequent jobs that want to use the file system must have the same number of compute nodes, and be scheduled on compute nodes with access to the correct Rabbit nodes. This means the job with the create_persistent
directive must schedule the desired number of compute nodes even if no application is run on the compute nodes as part of the job.
#DW create_persistent type=lustre capacity=10TiB name=shared-data profile=read-only\n
This directive results in a single 10TiB Lustre file system being created that can be accessed later by any compute nodes in the system. Multiple jobs can access a Rabbit Lustre file system at the same time. This job can be scheduled with a single compute node (or zero compute nodes if the WLM allows), without any limitations on compute node counts for subsequent jobs using the persistent Lustre file system.
"},{"location":"guides/user-interactions/readme/#destroy_persistent","title":"destroy_persistent","text":"The destroy_persistent
command will delete persistent storage that was allocated by a corresponding create_persistent
. If the persistent storage is currently in use by a job, then the job containing the destroy_persistent
command will fail. Only a single destroy_persistent
directive is allowed in a job, and it cannot be in the same job as a create_persistent
directive.
"},{"location":"guides/user-interactions/readme/#command-arguments_2","title":"Command Arguments","text":"Argument Required Value Notes name
Yes Lowercase string including numbers and '-' This is a name for the persistent storage allocation that will be destroyed"},{"location":"guides/user-interactions/readme/#examples_2","title":"Examples","text":"#DW destroy_persistent name=shared-data\n
This directive will delete the persistent storage allocation with the name shared-data
"},{"location":"guides/user-interactions/readme/#persistentdw","title":"persistentdw","text":"The persistentdw
command makes an existing persistent storage allocation available to a job. The persistent storage must already be created from a create_persistent
command in a different job script. Multiple persistentdw
commands can be used in the same job script to request access to multiple persistent allocations.
Persistent Lustre file systems can be accessed from any compute nodes in the system, and the compute node count for the job can vary as needed. Multiple jobs can access a persistent Lustre file system concurrently if desired. Raw, xfs, and GFS2 file systems can only be accessed by compute nodes that have a physical connection to the Rabbits hosting the storage, and jobs accessing these storage types must have the same compute node count as the job that made the persistent storage.
"},{"location":"guides/user-interactions/readme/#command-arguments_3","title":"Command Arguments","text":"Argument Required Value Notes name
Yes Lowercase string including numbers and '-' This is a name for the persistent storage that will be accessed requires
No copy-offload
Using this option results in the copy offload daemon running on the compute nodes. This is for users that want to initiate data movement to or from the Rabbit storage from within their application. See the Required Daemons section of the Directive Breakdown guide for a description of how the user may request the daemon, in the case where the WLM will run it only on demand."},{"location":"guides/user-interactions/readme/#examples_3","title":"Examples","text":"#DW persistentdw name=shared-data requires=copy-offload\n
This directive will cause the shared-data
persistent storage allocation to be mounted onto the compute nodes for the job application to use. The copy-offload daemon will be started on the compute nodes so the application can request data movement during the application run.
"},{"location":"guides/user-interactions/readme/#copy_incopy_out","title":"copy_in/copy_out","text":"The copy_in
and copy_out
directives are used to move data to and from the storage allocations on Rabbit nodes. The copy_in
directive requests that data be moved into the Rabbit file system before application launch, and the copy_out
directive requests data to be moved off of the Rabbit file system after application exit. This is different from data-movement that is requested through the copy-offload API, which occurs during application runtime. Multiple copy_in
and copy_out
directives can be included in the same job script. More information about data movement can be found in the Data Movement documentation.
"},{"location":"guides/user-interactions/readme/#command-arguments_4","title":"Command Arguments","text":"Argument Required Value Notes source
Yes [path]
, $DW_JOB_[name]/[path]
, $DW_PERSISTENT_[name]/[path]
[name]
is the name of the Rabbit persistent or job storage as specified in the name
argument of the jobdw
or persistentdw
directive. Any '-'
in the name from the jobdw
or persistentdw
directive should be changed to a '_'
in the copy_in
and copy_out
directive. destination
Yes [path]
, $DW_JOB_[name]/[path]
, $DW_PERSISTENT_[name]/[path]
[name]
is the name of the Rabbit persistent or job storage as specified in the name
argument of the jobdw
or persistentdw
directive. Any '-'
in the name from the jobdw
or persistentdw
directive should be changed to a '_'
in the copy_in
and copy_out
directive. profile
No Profile name This specifies which profile to use when copying data. Profiles specify the copy command to use, MPI arguments, and how output gets logged. If no profile is specified then the default profile is used. Profiles are created by an admin."},{"location":"guides/user-interactions/readme/#examples_4","title":"Examples","text":"#DW jobdw type=xfs capacity=10GiB name=fast-storage\n#DW copy_in source=/lus/backup/johndoe/important_data destination=$DW_JOB_fast_storage/data\n
This set of directives creates an xfs file system on the Rabbits for each compute node in the job, and then moves data from /lus/backup/johndoe/important_data
to each of the xfs file systems. /lus/backup
must be set up in the Rabbit software as a Global Lustre file system by an admin. The copy takes place before the application is launched on the compute nodes.
#DW persistentdw name=shared-data1\n#DW persistentdw name=shared-data2\n\n#DW copy_out source=$DW_PERSISTENT_shared_data1/a destination=$DW_PERSISTENT_shared_data2/a profile=no-xattr\n#DW copy_out source=$DW_PERSISTENT_shared_data1/b destination=$DW_PERSISTENT_shared_data2/b profile=no-xattr\n
This set of directives copies two directories from one persistent storage allocation to another persistent storage allocation using the no-xattr
profile to avoid copying xattrs. This data movement occurs after the job application exits on the compute nodes, and the two copies do not occur in a deterministic order.
#DW persistentdw name=shared-data\n#DW jobdw type=lustre capacity=1TiB name=fast-storage profile=high-metadata\n\n#DW copy_in source=/lus/shared/johndoe/shared-libraries destination=$DW_JOB_fast_storage/libraries\n#DW copy_in source=$DW_PERSISTENT_shared_data/ destination=$DW_JOB_fast_storage/data\n\n#DW copy_out source=$DW_JOB_fast_storage/data destination=/lus/backup/johndoe/very_important_data profile=no-xattr\n
This set of directives makes use of a persistent storage allocation and a job storage allocation. There are two copy_in
directives, one that copies data from the global lustre file system to the job allocation, and another that copies data from the persistent allocation to the job allocation. These copies do not occur in a deterministic order. The copy_out
directive occurs after the application has exited, and copies data from the Rabbit job storage to a global lustre file system.
"},{"location":"guides/user-interactions/readme/#container","title":"container","text":"The container
directive is used to launch user containers on the Rabbit nodes. The containers have access to jobdw
, persistentdw
, or global Lustre storage as specified in the container
directive. More documentation for user containers can be found in the User Containers guide. Only a single container
directive is allowed in a job.
"},{"location":"guides/user-interactions/readme/#command-arguments_5","title":"Command Arguments","text":"Argument Required Value Notes name
Yes Lowercase string including numbers and '-' This is a name for the container instance that is unique within a job profile
Yes Profile name This specifies which container profile to use. The container profile contains information about which container to run, which file system types to expect, which network ports are needed, and many other options. An admin is responsible for creating the container profiles. DW_JOB_[expected]
No jobdw
storage allocation name
The container profile will list jobdw
file systems that the container requires. [expected]
is the name as specified in the container profile DW_PERSISTENT_[expected]
No persistentdw
storage allocation name
The container profile will list persistentdw
file systems that the container requires. [expected]
is the name as specified in the container profile DW_GLOBAL_[expected]
No Global lustre path The container profile will list global Lustre file systems that the container requires. [expected]
is the name as specified in the container profile"},{"location":"guides/user-interactions/readme/#examples_5","title":"Examples","text":"#DW jobdw type=xfs capacity=10GiB name=fast-storage\n#DW container name=backup profile=automatic-backup DW_JOB_source=fast-storage DW_GLOBAL_destination=/lus/backup/johndoe\n
These directives create an xfs Rabbit job allocation and specify a container that should run on the Rabbit nodes. The container profile specified two file systems that the container needs, DW_JOB_source
and DW_GLOBAL_destination
. DW_JOB_source
requires a jobdw
file system and DW_GLOBAL_destination
requires a global Lustre file system.
"},{"location":"guides/user-interactions/readme/#environment-variables","title":"Environment Variables","text":"The WLM makes a set of environment variables available to the job application running on the compute nodes that provide Rabbit specific information. These environment variables are used to find the mount location of Rabbit file systems and port numbers for user containers.
Environment Variable Value Notes DW_JOB_[name]
Mount path of a jobdw
file system [name]
is from the name
argument in the jobdw
directive. Any '-'
characters in the name
will be converted to '_'
in the environment variable. There will be one of these environment variables per jobdw
directive in the job. DW_PERSISTENT_[name]
Mount path of a persistentdw
file system [name]
is from the name
argument in the persistentdw
directive. Any '-'
characters in the name
will be converted to '_'
in the environment variable. There will be one of these environment variables per persistentdw
directive in the job. NNF_CONTAINER_PORTS
Comma separated list of ports These ports are used together with the IP address of the local Rabbit to communicate with a user container specified by a container
directive. More information can be found in the User Containers guide."},{"location":"repo-guides/readme/","title":"Repo Guides","text":""},{"location":"repo-guides/readme/#management","title":"Management","text":""},{"location":"repo-guides/release-nnf-sw/readme/","title":"Releasing NNF Software","text":""},{"location":"repo-guides/release-nnf-sw/readme/#nnf-software-overview","title":"NNF Software Overview","text":"The following repositories comprise the NNF Software and each have their own versions. There is a hierarchy, since nnf-deploy
packages the individual components together using submodules.
Each component under nnf-deploy
needs to be released first, then nnf-deploy
can be updated to point to those release versions, then nnf-deploy
itself can be updated and released.
The documentation repo (NearNodeFlash/NearNodeFlash.github.io) is released separately and is not part of nnf-deploy
, but it should match the version number of nnf-deploy
. Release this like the other components.
nnf-ec is vendored in as part of nnf-sos
and does not need to be released separately.
"},{"location":"repo-guides/release-nnf-sw/readme/#primer","title":"Primer","text":"This document is based on the process set forth by the DataWorkflowServices Release Process. Please read that as a background for this document before going any further.
"},{"location":"repo-guides/release-nnf-sw/readme/#requirements","title":"Requirements","text":"To create tags and releases, you will need maintainer or admin rights on the repos.
"},{"location":"repo-guides/release-nnf-sw/readme/#release-each-component-in-nnf-deploy","title":"Release Each Component In nnf-deploy
","text":"You'll first need to create releases for each component contained in nnf-deploy
. This section describes that process.
Each release branch needs to be updated with what is on master. To do that, we'll need the latest copy of master, and it will ultimately be merged to the releases/v0
branch via a Pull Request. Once merged, an annotated tag is created and then a release.
Each component has its own version number that needs to be incremented. Make sure you change the version numbers in the commands below to match the new version for the component. The v0.0.3
is just an example.
-
Ensure your branches are up to date:
git checkout master\ngit pull\ngit checkout releases/v0\ngit pull\n
-
Create a branch to merge into the release branch:
git checkout -b release-v0.0.3\n
-
Merge in the updates from the master
branch. There should not be any conflicts, but it's not unheard of. Tread carefully if there are conflicts.
git merge master\n
-
Verify that there are no differences between your branch and the master branch:
git diff master\n
If there are any differences, they must be trivial. Some READMEs may have extra lines at the end.
-
Perform repo-specific updates:
- For
lustre-csi-driver
, lustre-fs-operator
, dws
, nnf-sos
, and nnf-dm
there are additional files that need to track the version number as well, which allow them to be installed with kubectl apply -k
.
Repo Update nnf-mfu
The new version of nnf-mfu
is referenced by the NNFMFU
variable in several places:nnf-sos
1. Makefile
replace NNFMFU
with nnf-mfu's
tag.nnf-dm
1. In Dockerfile
and Makefile
, replace NNFMFU_VERSION
with the new version.2. In config/manager/kustomization.yaml
, replace nnf-mfu
's newTag: <X.Y.Z>.
nnf-deploy
1. In config/repositories.yaml
replace NNFMFU_VERSION
with the new version. lustre-fs-operator
update config/manager/kustomization.yaml
with the correct version.nnf-deploy
1. In config/repositories.yaml
replace the lustre-fs-operator version. dws
update config/manager/kustomization.yaml
with the correct version. nnf-sos
update config/manager/kustomization.yaml
with the correct version. nnf-dm
update config/manager/kustomization.yaml
with the correct version. lustre-csi-driver
update deploy/kubernetes/base/kustomization.yaml
and charts/lustre-csi-driver/values.yaml
with the correct version.nnf-deploy
1. In config/repositories.yaml
replace the lustre-csi-driver version. -
Target the releases/v0
branch with a Pull Request from your branch. When merging the Pull Request, you must use a Merge Commit.
Note
Do not Rebase or Squash! Those actions remove the records that Git uses to determine which commits have been merged, and then when the next release is created Git will treat everything like a conflict. Additionally, this will cause auto-generated release notes to include the previous release.
-
Once merged, update the release branch locally and create an annotated tag. Each repo has a workflow job named create_release
that will create a release automatically when the new tag is pushed.
git checkout releases/v0\ngit pull\ngit tag -a v0.0.3 -m \"Release v0.0.3\"\ngit push origin --tags\n
-
GOTO Step 1 and repeat this process for each remaining component.
"},{"location":"repo-guides/release-nnf-sw/readme/#release-nnf-deploy","title":"Release nnf-deploy
","text":"Once the individual components are released, we need to update the submodules in nnf-deploy's
master
branch before we create the release branch. This ensures that everything is current on master
for nnf-deploy
.
-
Update the submodules for nnf-deploy
on master:
cd nnf-deploy\ngit checkout master\ngit pull\ngit submodule foreach git checkout master\ngit submodule foreach git pull\n
-
Create a branch to capture the submodule changes for the PR to master
git checkout -b update-submodules\n
-
Commit the changes and open a Pull Request against the master
branch.
-
Once merged, follow steps 1-3 from the previous section to create a release branch off of releases/v0
and update it with changes from master
.
-
There will be conflicts for the submodules after step 3. This is expected. Update the submodules to the new tags and then commit the changes. If each tag was committed properly, the following command can do this for you:
git submodule foreach 'git checkout `git describe --match=\"v*\" HEAD`'\n
-
Add each submodule to the commit with git add
.
-
Verify that each submodule is now at the proper tagged version.
git submodule\n
-
Update config/repositories.yaml
with the referenced versions for:
lustre-csi-driver
lustre-fs-operator
nnf-mfu
(Search for NNFMFU_VERSION)
-
Tidy and make nnf-deploy
to avoid embarrassment.
go mod tidy\nmake\n
-
Do another git add
for any changes, particularly go.mod
and/or go.sum
.
-
Verify that git status
is happy with nnf-deploy
and then finalize the merge from master by with a git commit
.
-
Follow steps 6-7 from the previous section to finalize the release of nnf-deploy
.
"},{"location":"repo-guides/release-nnf-sw/readme/#release-nearnodeflashgithubio","title":"Release NearNodeFlash.github.io
","text":"Please review and update the documentation for changes you may have made.
After nnf-deploy has a release tag, you may release the documentation. Use the same steps found above in \"Release Each Component\". Note that the default branch for this repo is \"main\" instead of \"master\".
Give this release a tag that matches the nnf-deploy release, to show that they go together. Create the release by using the \"Create release\" or \"Draft a new release\" button in the GUI, or by using the gh release create
CLI command. Whether using the GUI or the CLI, mark the release as \"latest\" and select the appropriate option to generate release notes.
Wait for the mike
tool in .github/workflow/release.yaml
to finish building the new doc. You can check its status by going to the gh-pages
branch in the repo. When you visit the release at https://nearnodeflash.github.io, you should see the new release in the drop-down menu and the new release should be the default display.
The software is now released!
"},{"location":"repo-guides/release-nnf-sw/readme/#clone-a-release","title":"Clone a release","text":"The follow commands clone release v0.0.7
into nnf-deploy-v0.0.7
export NNF_VERSION=v0.0.7\n\ngit clone --recurse-submodules git@github.com:NearNodeFlash/nnf-deploy nnf-deploy-$NNF_VERSION\ncd nnf-deploy-$NNF_VERSION\ngit -c advice.detachedHead=false checkout $NNF_VERSION --recurse-submodules\n\ngit submodule status\n
"},{"location":"rfcs/","title":"Request for Comment","text":" -
Rabbit Request For Comment Process - Published
-
Rabbit Storage For Containerized Applications - Published
"},{"location":"rfcs/0001/readme/","title":"Rabbit Request For Comment Process","text":"Rabbit software must be designed in close collaboration with our end-users. Part of this process involves open discussion in the form of Request For Comment (RFC) documents. The remainder of this document presents the RFC process for Rabbit.
"},{"location":"rfcs/0001/readme/#history-philosophy","title":"History & Philosophy","text":"NNF RFC documents are modeled after the long history of IETF RFC documents that describe the internet. The philosophy is captured best in RFC 3
The content of a [...] note may be any thought, suggestion, etc. related to the HOST software or other aspect of the network. Notes are encouraged to be timely rather than polished. Philosophical positions without examples or other specifics, specific suggestions or implementation techniques without introductory or background explication, and explicit questions without any attempted answers are all acceptable. The minimum length for a [...] note is one sentence.
These standards (or lack of them) are stated explicitly for two reasons. First, there is a tendency to view a written statement as ipso facto authoritative, and we hope to promote the exchange and discussion of considerably less than authoritative ideas. Second, there is a natural hesitancy to publish something unpolished, and we hope to ease this inhibition.
"},{"location":"rfcs/0001/readme/#when-to-create-an-rfc","title":"When to Create an RFC","text":"New features, improvements, and other tasks that need to source feedback from multiple sources are to be written as Request For Comment (RFC) documents.
"},{"location":"rfcs/0001/readme/#metadata","title":"Metadata","text":"At the start of each RFC, there must include a short metadata block that contains information useful for filtering and sorting existing documents. This markdown is not visible inside the document.
---\nauthors: John Doe <john.doe@company.com>, Jane Doe <jane.doe@company.com>\nstate: prediscussion|ideation|discussion|published|committed|abandoned\ndiscussion: (link to PR, if available)\n----\n
"},{"location":"rfcs/0001/readme/#creation","title":"Creation","text":"An RFC should be created at the next freely available 4-digit index the GitHub RFC folder. Create a folder for your RFC and write your RFC document as readme.md
using standard Markdown. Include additional documents or images in the folder if needed.
Add an entry to /docs/rfcs/index.md
Add an entry to /mkdocs.yml
in the nav[RFCs]
section
"},{"location":"rfcs/0001/readme/#push","title":"Push","text":"Push your changes to your RFC branch
git add --all\ngit commit -s -m \"[####]: Your Request For Comment Document\"\ngit push origin ####\n
"},{"location":"rfcs/0001/readme/#pull-request","title":"Pull Request","text":"Submit a PR for your branch. This will open your RFC to comments. Add those individuals who are interested in your RFC as reviewers.
"},{"location":"rfcs/0001/readme/#merge","title":"Merge","text":"Once consensus has been reached on your RFC, merge to main origin.
"},{"location":"rfcs/0002/readme/","title":"Rabbit storage for containerized applications","text":"Note
This RFC contains outdated information. For the most up-to-date details, please refer to the User Containers documentation.
For Rabbit to provide storage to a containerized application there needs to be some mechanism. The remainder of this RFC proposes that mechanism.
"},{"location":"rfcs/0002/readme/#actors","title":"Actors","text":"There are several actors involved:
- The AUTHOR of the containerized application
- The ADMINISTRATOR who works with the author to determine the application requirements for execution
- The USER who intends to use the application using the 'container' directive in their job specification
- The RABBIT software that interprets the #DWs and starts the container during execution of the job
There are multiple relationships between the actors:
- AUTHOR to ADMINISTRATOR: The author tells the administrator how their application is executed and the NNF storage requirements.
- Between the AUTHOR and USER: The application expects certain storage, and the #DW must meet those expectations.
- ADMINISTRATOR to RABBIT: Admin tells Rabbit how to run the containerized application with the required storage.
- Between USER and RABBIT: User provides the #DW container directive in the job specification. Rabbit validates and interprets the directive.
"},{"location":"rfcs/0002/readme/#proposal","title":"Proposal","text":"The proposal below outlines the high level behavior of running containers in a workflow:
- The AUTHOR writes their application expecting NNF Storage at specific locations. For each storage requirement, they define:
- a unique name for the storage which can be referenced in the 'container' directive
- the required mount path or mount path prefix
- other constraints or storage requirements (e.g. minimum capacity)
- The AUTHOR works with the ADMINISTRATOR to define:
- a unique name for the program to be referred by USER
- the pod template or MPI Job specification for executing their program
- the NNF storage requirements described above.
- The ADMINISTRATOR creates a corresponding NNF Container Profile Kubernetes custom resource with the necessary NNF storage requirements and pod specification as described by the AUTHOR
- The USER who desires to use the application works with the AUTHOR and the related NNF Container Profile to understand the storage requirements
- The USER submits a WLM job with the #DW container directive variables populated
- WLM runs the workflow and drives it through the following stages...
Proposal
: RABBIT validates the #DW container directive by comparing the supplied values to those listed in the NNF Container Profile. If the workflow fails to meet the requirements, the job fails PreRun
: RABBIT software: - duplicates the pod template specification from the Container Profile and patches the necessary Volumes and the config map. The spec is used as the basis for starting the necessary pods and containers
- creates a config map reflecting the storage requirements and any runtime parameters; this is provided to the container at the volume mount named
nnf-config
, if specified
- The containerized application(s) executes. The expected mounts are available per the requirements and celebration occurs. The pods continue to run until:
- a pod completes successfully (any failed pods will be retried)
- the max number of pod retries is hit (indicating failure on all retry attempts)
- Note: retry limit is non-optional per Kubernetes configuration
- If retries are not desired, this number could be set to 0 to disable any retry attempts
PostRun
: RABBIT software: - marks the stage as
Ready
if the pods have all completed successfully. This includes a successful retry after preceding failures - starts a timer for any running pods. Once the timeout is hit, the pods will be killed and the workflow will indicate failure
- leaves all pods around for log inspection
"},{"location":"rfcs/0002/readme/#container-assignment-to-rabbit-nodes","title":"Container Assignment to Rabbit Nodes","text":"During Proposal
, the USER must assign compute nodes for the container workflow. The assigned compute nodes determine which Rabbit nodes run the containers.
"},{"location":"rfcs/0002/readme/#container-definition","title":"Container Definition","text":"Containers can be launched in two ways:
- MPI Jobs
- Non-MPI Jobs
MPI Jobs are launched using mpi-operator
. This uses a launcher/worker model. The launcher pod is responsible for running the mpirun
command that will target the worker pods to run the MPI application. The launcher will run on the first targeted NNF node and the workers will run on each of the targeted NNF nodes.
For Non-MPI jobs, mpi-operator
is not used. This model runs the same application on each of the targeted NNF nodes.
The NNF Container Profile allows a user to pick one of these methods. Each method is defined in similar, but different fashions. Since MPI Jobs use mpi-operator
, the MPIJobSpec
is used to define the container(s). For Non-MPI Jobs a PodSpec
is used to define the container(s).
An example of an MPI Job is below. The data.mpiSpec
field is defined:
kind: NnfContainerProfile\napiVersion: nnf.cray.hpe.com/v1alpha1\ndata:\n mpiSpec:\n mpiReplicaSpecs:\n Launcher:\n template:\n spec:\n containers:\n - command:\n - mpirun\n - dcmp\n - $(DW_JOB_foo_local_storage)/0\n - $(DW_JOB_foo_local_storage)/1\n image: ghcr.io/nearnodeflash/nnf-mfu:latest\n name: example-mpi\n Worker:\n template:\n spec:\n containers:\n - image: ghcr.io/nearnodeflash/nnf-mfu:latest\n name: example-mpi\n slotsPerWorker: 1\n...\n
An example of a Non-MPI Job is below. The data.spec
field is defined:
kind: NnfContainerProfile\napiVersion: nnf.cray.hpe.com/v1alpha1\ndata:\n spec:\n containers:\n - command:\n - /bin/sh\n - -c\n - while true; do date && sleep 5; done\n image: alpine:latest\n name: example-forever\n...\n
In both cases, the spec
is used as a starting point to define the containers. NNF software supplements the specification to add functionality (e.g. mounting #DW storages). In other words, what you see here will not be the final spec for the container that ends up running as part of the container workflow.
"},{"location":"rfcs/0002/readme/#security","title":"Security","text":"The workflow's UID and GID are used to run the container application and for mounting the specified fileystems in the container. Kubernetes allows for a way to define permissions for a container using a Security Context.
mpirun
uses ssh
to communicate with the worker nodes. ssh
requires that UID is assigned to a username. Since the UID/GID are dynamic values from the workflow, work must be done to the container's /etc/passwd
to map the UID/GID to a username. An InitContainer
is used to modify /etc/passwd
and mount it into the container.
"},{"location":"rfcs/0002/readme/#communication-details","title":"Communication Details","text":"The following subsections outline the proposed communication between the Rabbit nodes themselves and the Compute nodes.
"},{"location":"rfcs/0002/readme/#rabbit-to-rabbit-communication","title":"Rabbit-to-Rabbit Communication","text":""},{"location":"rfcs/0002/readme/#non-mpi-jobs","title":"Non-MPI Jobs","text":"Each rabbit node can be reached via <hostname>.<subdomain>
using DNS. The hostname is the Rabbit node name and the workflow name is used for the subdomain.
For example, a workflow name of foo
that targets rabbit-node2
would be rabbit-node2.foo
.
Environment variables are provided to the container and ConfigMap for each rabbit that is targeted by the container workflow:
NNF_CONTAINER_NODES=rabbit-node2 rabbit-node3\nNNF_CONTAINER_SUBDOMAIN=foo\nNNF_CONTAINER_DOMAIN=default.svc.cluster.local\n
kind: ConfigMap\napiVersion: v1\ndata:\n nnfContainerNodes:\n - rabbit-node2\n - rabbit-node3\n nnfContainerSubdomain: foo\n nnfContainerDomain: default.svc.cluster.local\n
DNS can then be used to communicate with other Rabbit containers. The FQDN for the container running on rabbit-node2 is rabbit-node2.foo.default.svc.cluster.local
.
"},{"location":"rfcs/0002/readme/#mpi-jobs","title":"MPI Jobs","text":"For MPI Jobs, these hostnames and subdomains will be slightly different due to the implementation of mpi-operator
. However, the variables will remain the same and provide a consistent way to retrieve the values.
"},{"location":"rfcs/0002/readme/#compute-to-rabbit-communication","title":"Compute-to-Rabbit Communication","text":"For Compute to Rabbit communication, the proposal is to use an open port between the nodes, so the applications could communicate using IP protocol. The port number would be assigned by the Rabbit software and included in the workflow resource's environmental variables after the Setup state (similar to workflow name & namespace). Flux should provide the port number to the compute application via an environmental variable or command line argument. The containerized application would always see the same port number using the hostPort
/containerPort
mapping functionality included in Kubernetes. To clarify, the Rabbit software is picking and managing the ports picked for hostPort
.
This requires a range of ports to be open in the firewall configuration and specified in the rabbit system configuration. The fewer the number of ports available increases the chances of a port reservation conflict that would fail a workflow.
Example port range definition in the SystemConfiguration:
apiVersion: v1\nitems:\n - apiVersion: dws.cray.hpe.com/v1alpha1\n kind: SystemConfiguration\n name: default\n namespace: default\n spec:\n containerHostPortRangeMin: 30000\n containerHostPortRangeMax: 40000\n ...\n
"},{"location":"rfcs/0002/readme/#example","title":"Example","text":"For this example, let's assume I've authored an application called foo
. This application requires Rabbit local GFS2 storage and a persistent Lustre storage volume.
Working with an administrator, my application's storage requirements and pod specification are placed in an NNF Container Profile foo
:
kind: NnfContainerProfile\napiVersion: v1alpha1\nmetadata:\n name: foo\n namespace: default\nspec:\n postRunTimeout: 300\n maxRetries: 6\n storages:\n - name: DW_JOB_foo-local-storage\n optional: false\n - name: DW_PERSISTENT_foo-persistent-storage\n optional: false\n spec:\n containers:\n - name: foo\n image: foo:latest\n command:\n - /foo\n ports:\n - name: compute\n containerPort: 80\n
Say Peter wants to use foo
as part of his job specification. Peter would submit the job with the directives below:
#DW jobdw name=my-gfs2 type=gfs2 capacity=1TB\n\n#DW persistentdw name=some-lustre\n\n#DW container name=my-foo profile=foo \\\n DW_JOB_foo-local-storage=my-gfs2 \\\n DW_PERSISTENT_foo-persistent-storage=some-lustre\n
Since the NNF Container Profile has specified that both storages are not optional (i.e. optional: false
), they must both be present in the #DW directives along with the container
directive. Alternatively, if either was marked as optional (i.e. optional: true
), it would not be required to be present in the #DW directives and therefore would not be mounted into the container.
Peter submits the job to the WLM. WLM guides the job through the workflow states:
- Proposal: Rabbit software verifies the #DW directives. For the container directive
my-foo
with profile foo
, the storage requirements listed in the NNF Container Profile are foo-local-storage
and foo-persistent-storage
. These values are correctly represented by the directive so it is valid. - Setup: Since there is a jobdw,
my-gfs2
, Rabbit software provisions this storage. -
Pre-Run:
-
Rabbit software generates a config map that corresponds to the storage requirements and runtime parameters.
kind: ConfigMap\n apiVersion: v1\n metadata:\n name: my-job-container-my-foo\n data:\n DW_JOB_foo_local_storage: mount-type=indexed-mount\n DW_PERSISTENT_foo_persistent_storage: mount-type=mount-point\n ...\n
-
Rabbit software creates a pod and duplicates the foo
pod spec in the NNF Container Profile and fills in the necessary volumes and config map.
kind: Pod\n apiVersion: v1\n metadata:\n name: my-job-container-my-foo\n template:\n metadata:\n name: foo\n namespace: default\n spec:\n containers:\n # This section unchanged from Container Profile\n - name: foo\n image: foo:latest\n command:\n - /foo\n volumeMounts:\n - name: foo-local-storage\n mountPath: <MOUNT_PATH>\n - name: foo-persistent-storage\n mountPath: <MOUNT_PATH>\n - name: nnf-config\n mountPath: /nnf/config\n ports:\n - name: compute\n hostPort: 9376 # hostport selected by Rabbit software\n containerPort: 80\n\n # volumes added by Rabbit software\n volumes:\n - name: foo-local-storage\n hostPath:\n path: /nnf/job/my-job/my-gfs2\n - name: foo-persistent-storage\n hostPath:\n path: /nnf/persistent/some-lustre\n - name: nnf-config\n configMap:\n name: my-job-container-my-foo\n\n # securityContext added by Rabbit software - values will be inherited from the workflow\n securityContext:\n runAsUser: 1000\n runAsGroup: 2000\n fsGroup: 2000\n
-
Rabbit software starts the pods on Rabbit nodes
- Post-Run
- Rabbit waits for all pods to finish (or until timeout is hit)
- If all pods are successful, Post-Run is marked as
Ready
- If any pod is not successful, Post-Run is not marked as
Ready
"},{"location":"rfcs/0002/readme/#special-note-indexed-mount-type-for-gfs2-file-systems","title":"Special Note: Indexed-Mount Type for GFS2 File Systems","text":"When using a GFS2 file system, each compute is allocated its own Rabbit volume. The Rabbit software mounts a collection of mount paths with a common prefix and an ending indexed value.
Application AUTHORS must be aware that their desired mount-point really contains a collection of directories, one for each compute node. The mount point type can be known by consulting the config map values.
If we continue the example from above, the foo
application expects the foo-local-storage path of /foo/local
to contain several directories
$ ls /foo/local/*\n\nnode-0\nnode-1\nnode-2\n...\nnode-N\n
Node positions are not absolute locations. WLM could, in theory, select 6 physical compute nodes at physical location 1, 2, 3, 5, 8, 13, which would appear as directories /node-0
through /node-5
in the container path.
Symlinks will be added to support the physical compute node names. Assuming a compute node hostname of compute-node-1
from the example above, it would link to node-0
, compute-node-2
would link to node-1
, etc.
Additionally, not all container instances could see the same number of compute nodes in an indexed-mount scenario. If 17 compute nodes are required for the job, WLM may assign 16 nodes to run one Rabbit, and 1 node to another Rabbit.
"}]}
\ No newline at end of file
+{"config":{"lang":["en"],"separator":"[\\s\\-,:!=\\[\\]()\"/]+|(?!\\b)(?=[A-Z][a-z])|\\.(?!\\d)|&[lg]t;","pipeline":["stopWordFilter"]},"docs":[{"location":"","title":"Near Node Flash","text":"Near Node Flash, also known as Rabbit, provides a disaggregated chassis-local storage solution which utilizes SR-IOV over a PCIe Gen 4.0 switching fabric to provide a set of compute blades with NVMe storage. It also provides a dedicated storage processor to offload tasks such as storage preparation and data movement from the compute nodes.
Here you will find NNF User Guides, Examples, and Request For Comment (RFC) documents.
"},{"location":"guides/","title":"User Guides","text":""},{"location":"guides/#setup","title":"Setup","text":" - Initial Setup
- Compute Daemons
- Firmware Upgrade
- High Availability Cluster
- RBAC for Users
"},{"location":"guides/#provisioning","title":"Provisioning","text":" - Storage Profiles
- Data Movement Configuration
- Copy Offload API
- Lustre External MGT
- Global Lustre
- Directive Breakdown
- User Interactions
- System Storage
"},{"location":"guides/#nnf-user-containers","title":"NNF User Containers","text":""},{"location":"guides/#node-management","title":"Node Management","text":" - Disable or Drain a Node
- Debugging NVMe Namespaces
"},{"location":"guides/compute-daemons/readme/","title":"Compute Daemons","text":"Rabbit software requires two daemons be installed and run on each compute node. Each daemon shares similar build, package, and installation processes described below.
- The Client Mount daemon,
clientmount
, provides the support for mounting Rabbit hosted file systems on compute nodes. - The Data Movement daemon,
nnf-dm
, supports creating, monitoring, and managing data movement (copy-offload) operations
"},{"location":"guides/compute-daemons/readme/#building-from-source","title":"Building from source","text":"Each daemon can be built in their respective repositories using the build-daemon
make target. Go version >= 1.19 must be installed to perform a local build.
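A minimal sketch of a local build, using the nnf-dm repository as an example (substitute the repository of the daemon being built):
git clone https://github.com/NearNodeFlash/nnf-dm.git\ncd nnf-dm\nmake build-daemon\n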
"},{"location":"guides/compute-daemons/readme/#rpm-package","title":"RPM Package","text":"Each daemon is packaged as part of the build process in GitHub. Source and Binary RPMs are available.
"},{"location":"guides/compute-daemons/readme/#installation","title":"Installation","text":"For manual install, place the binary in the /usr/bin/
directory.
To install the application as a daemon service, run /usr/bin/[BINARY-NAME] install
"},{"location":"guides/compute-daemons/readme/#authentication","title":"Authentication","text":"NNF software defines a Kubernetes Service Account for granting communication privileges between the daemon and the kubeapi server. The token file and certificate file can be obtained by providing the necessary Service Account and Namespace to the below shell script.
Compute Daemon Service Account Namespace Client Mount nnf-clientmount nnf-system Data Movement nnf-dm-daemon nnf-dm-system #!/bin/bash\n\nSERVICE_ACCOUNT=$1\nNAMESPACE=$2\n\nkubectl get secret ${SERVICE_ACCOUNT} -n ${NAMESPACE} -o json | jq -Mr '.data.token' | base64 --decode > ./service.token\nkubectl get secret ${SERVICE_ACCOUNT} -n ${NAMESPACE} -o json | jq -Mr '.data[\"ca.crt\"]' | base64 --decode > ./service.cert\n
The service.token
and service.cert
files must be copied to each compute node, typically in the /etc/[BINARY-NAME]/
directory
"},{"location":"guides/compute-daemons/readme/#configuration","title":"Configuration","text":"Installing the daemon will create a default configuration located at /etc/systemd/system/[BINARY-NAME].service
The command line arguments can be provided to the service definition or as an override file.
Argument Definition --kubernetes-service-host=[ADDRESS]
The IP address or DNS entry of the kubeapi server --kubernetes-service-port=[PORT]
The listening port of the kubeapi server --service-token-file=[PATH]
Location of the service token file --service-cert-file=[PATH]
Location of the service certificate file --node-name=[COMPUTE-NODE-NAME]
Name of this compute node as described in the System Configuration. Defaults to the host name reported by the OS. --nnf-node-name=[RABBIT-NODE-NAME]
nnf-dm
daemon only. Name of the rabbit node connected to this compute node as described in the System Configuration. If not provided, the --node-name
value is used to find the associated Rabbit node in the System Configuration. --sys-config=[NAME]
nnf-dm
daemon only. The System Configuration resource's name. Defaults to default
An example unit file for nnf-dm:
cat /etc/systemd/system/nnf-dm.service[Unit]\nDescription=Near-Node Flash (NNF) Data Movement Service\n\n[Service]\nPIDFile=/var/run/nnf-dm.pid\nExecStartPre=/bin/rm -f /var/run/nnf-dm.pid\nExecStart=/usr/bin/nnf-dm \\\n --kubernetes-service-host=127.0.0.1 \\\n --kubernetes-service-port=7777 \\\n --service-token-file=/path/to/service.token \\\n --service-cert-file=/path/to/service.cert \\\n --kubernetes-qps=50 \\\n --kubernetes-burst=100\nRestart=on-failure\n\n[Install]\nWantedBy=multi-user.target\n
An example unit file for clientmountd:
cat /etc/systemd/system/clientmountd.service[Unit]\nDescription=Near-Node Flash (NNF) Clientmountd Service\n\n[Service]\nPIDFile=/var/run/clientmountd.pid\nExecStartPre=/bin/rm -f /var/run/clientmountd.pid\nExecStart=/usr/bin/clientmountd \\\n --kubernetes-service-host=127.0.0.1 \\\n --kubernetes-service-port=7777 \\\n --service-token-file=/path/to/service.token \\\n --service-cert-file=/path/to/service.cert\nRestart=on-failure\nEnvironment=GOGC=off\nEnvironment=GOMEMLIMIT=20MiB\nEnvironment=GOMAXPROCS=5\nEnvironment=HTTP2_PING_TIMEOUT_SECONDS=60\n\n[Install]\nWantedBy=multi-user.target\n
"},{"location":"guides/compute-daemons/readme/#nnf-dm-specific-configuration","title":"nnf-dm Specific Configuration","text":"nnf-dm has some additional configuration options that can be used to tweak the kubernetes client:
Argument Definition --kubernetes-qps=[QPS]
The number of Queries Per Second (QPS) before client-side rate-limiting starts. Defaults to 50. --kubernetes-burst=[QPS]
Once QPS is hit, allow this many concurrent calls. Defaults to 100."},{"location":"guides/compute-daemons/readme/#easy-deployment","title":"Easy Deployment","text":"The nnf-deploy tool's install
command can be used to run the daemons on a system's set of compute nodes. This option will compile the latest daemon binaries, retrieve the service token and certificates, and will copy and install the daemons on each of the compute nodes. Refer to the nnf-deploy repository and run nnf-deploy install --help
for details.
"},{"location":"guides/data-movement/readme/","title":"Data Movement Configuration","text":"Data Movement can be configured in multiple ways:
- Server side
- Per Copy Offload API Request arguments
The first method is a \"global\" configuration - it affects all data movement operations. The second is applied per request through the Copy Offload API, which allows for some configuration on a case-by-case basis but is limited in scope. Both methods are meant to work in tandem.
"},{"location":"guides/data-movement/readme/#server-side-configmap","title":"Server Side ConfigMap","text":"The server side configuration is done via the nnf-dm-config
config map:
kubectl -n nnf-dm-system get configmap nnf-dm-config\n
The config map allows you to configure the following:
Setting Description slots The number of slots specified in the MPI hostfile. A value less than 1 disables the use of slots in the hostfile. maxSlots The number of max_slots specified in the MPI hostfile. A value less than 1 disables the use of max_slots in the hostfile. command The full command to execute data movement. More detail in the following section. progressIntervalSeconds interval to collect the progress data from the dcp
command."},{"location":"guides/data-movement/readme/#command","title":"command
","text":"The full data movement command
can be set here. By default, Data Movement uses mpirun
to run dcp
to perform the data movement. Changing the command
is useful for tweaking mpirun
or dcp
options or to replace the command with something that can aid in debugging (e.g. hostname
).
mpirun
uses hostfiles to list the hosts to launch dcp
on. This hostfile is created for each Data Movement operation, and it uses the config map to set the slots
and maxSlots
for each host (i.e. NNF node) in the hostfile. The number of slots
/maxSlots
is the same for every host in the hostfile.
Additionally, Data Movement uses substitution to fill in dynamic information for each Data Movement operation. Each of these must be present in the command for Data Movement to work properly when using mpirun
and dcp
:
VAR Description $HOSTFILE
hostfile that is created and used for mpirun. $UID
User ID that is inherited from the Workflow. $GID
Group ID that is inherited from the Workflow. $SRC
source for the data movement. $DEST
destination for the data movement. By default, the command will look something like the following. Please see the config map itself for the most up to date default command:
mpirun --allow-run-as-root --hostfile $HOSTFILE dcp --progress 1 --uid $UID --gid $GID $SRC $DEST\n
"},{"location":"guides/data-movement/readme/#profiles","title":"Profiles","text":"Profiles can be specified in the in the nnf-dm-config
config map. Users are able to select a profile using #DW directives (e.g .copy_in profile=my-dm-profile
) and the Copy Offload API. If no profile is specified, the default
profile is used. This default profile must exist in the config map.
slots
, maxSlots
, and command
can be stored in Data Movement profiles. These profiles are available to quickly switch between different settings for a particular workflow.
Example profiles:
profiles:\n default:\n slots: 8\n maxSlots: 0\n command: mpirun --allow-run-as-root --hostfile $HOSTFILE dcp --progress 1 --uid $UID --gid $GID $SRC $DEST\n no-xattrs:\n slots: 8\n maxSlots: 0\n command: mpirun --allow-run-as-root --hostfile $HOSTFILE dcp --progress 1 --xattrs none --uid $UID --gid $GID $SRC $DEST\n
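As an illustrative example (the source and destination paths are placeholders), a user could select the no-xattrs profile above in a copy_in directive:
#DW copy_in source=/lus/global/user/file destination=$DW_JOB_my-gfs2/file profile=no-xattrs\n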
"},{"location":"guides/data-movement/readme/#copy-offload-api-daemon","title":"Copy Offload API Daemon","text":"The CreateRequest
API call that is used to create Data Movement with the Copy Offload API has some options to allow a user to specify some options for that particular Data Movement. These settings are on a per-request basis.
The Copy Offload API requires the nnf-dm
daemon to be running on the compute node. This daemon may be configured to run full-time, or it may be left in a disabled state if the WLM is expected to run it only when a user requests it. See Compute Daemons for the systemd service configuration of the daemon. See RequiredDaemons
in Directive Breakdown for a description of how the user may request the daemon, in the case where the WLM will run it only on demand.
If the WLM is running the nnf-dm
daemon only on demand, then the user can request that the daemon be running for their job by specifying requires=copy-offload
in their DW
directive. The following is an example:
#DW jobdw type=xfs capacity=1GB name=stg1 requires=copy-offload\n
See the DataMovementCreateRequest API definition for what can be configured.
"},{"location":"guides/data-movement/readme/#selinux-and-data-movement","title":"SELinux and Data Movement","text":"Careful consideration must be taken when enabling SELinux on compute nodes. Doing so will result in SELinux Extended File Attributes (xattrs) being placed on files created by applications running on the compute node, which may not be supported by the destination file system (e.g. Lustre).
Depending on the configuration of dcp
, there may be an attempt to copy these xattrs. You may need to disable this by using dcp --xattrs none
to avoid errors. For example, the command
in the nnf-dm-config
config map or dcpOptions
in the DataMovementCreateRequest API could be used to set this option.
See the dcp
documentation for more information.
"},{"location":"guides/directive-breakdown/readme/","title":"Directive Breakdown","text":""},{"location":"guides/directive-breakdown/readme/#background","title":"Background","text":"The #DW
directives in a job script are not intended to be interpreted by the workload manager. The workload manager passes the #DW
directives to the NNF software through the DWS workflow
resource, and the NNF software determines what resources are needed to satisfy the directives. The NNF software communicates this information back to the workload manager through the DWS DirectiveBreakdown
resource. This document describes how the WLM should interpret the information in the DirectiveBreakdown
.
"},{"location":"guides/directive-breakdown/readme/#directivebreakdown-overview","title":"DirectiveBreakdown Overview","text":"The DWS DirectiveBreakdown
contains all the information necessary to inform the WLM how to pick storage and compute nodes for a job. The DirectiveBreakdown
resource is created by the NNF software during the Proposal
phase of the DWS workflow. The spec
section of the DirectiveBreakdown
is filled in with the #DW
directive by the NNF software, and the status
section contains the information for the WLM. The WLM should wait until the status.ready
field is true before interpreting the rest of the status
fields.
The contents of the DirectiveBreakdown
will look different depending on the file system type and options specified by the user. The status
section contains enough information that the WLM may be able to figure out the underlying file system type requested by the user, but the WLM should not make any decisions based on the file system type. Instead, the WLM should make storage and compute allocation decisions based on the generic information provided in the DirectiveBreakdown
since the storage and compute allocations needed to satisfy a #DW
directive may differ based on options other than the file system type.
"},{"location":"guides/directive-breakdown/readme/#storage-nodes","title":"Storage Nodes","text":"The status.storage
section of the DirectiveBreakdown
describes how the storage allocations should be made and any constraints on the NNF nodes that can be picked. The status.storage
section will exist only for jobdw
and create_persistent
directives. An example of the status.storage
section is included below.
...\nspec:\n directive: '#DW jobdw capacity=1GiB type=xfs name=example'\n userID: 7900\nstatus:\n...\n ready: true\n storage:\n allocationSets:\n - allocationStrategy: AllocatePerCompute\n constraints:\n labels:\n - dataworkflowservices.github.io/storage=Rabbit\n label: xfs\n minimumCapacity: 1073741824\n lifetime: job\n reference:\n kind: Servers\n name: example-0\n namespace: default\n...\n
-
status.storage.allocationSets
is a list of storage allocation sets that are needed for the job. An allocation set is a group of individual storage allocations that all have the same parameters and requirements. Depending on the storage type specified by the user, there may be more than one allocation set. Allocation sets should be handled independently.
-
status.storage.allocationSets.allocationStrategy
specifies how the allocations should be made.
AllocatePerCompute
- One allocation is needed per compute node in the job. The size of an individual allocation is specified in status.storage.allocationSets.minimumCapacity
AllocateAcrossServers
- One or more allocations are needed with an aggregate capacity of status.storage.allocationSets.minimumCapacity
. This allocation strategy does not imply anything about how many allocations to make per NNF node or how many NNF nodes to use. The allocations on each NNF node should be the same size. AllocateSingleServer
- One allocation is needed with a capacity of status.storage.allocationSets.minimumCapacity
-
status.storage.allocationSets.constraints
is a set of requirements for which NNF nodes can be picked. More information about the different constraint types is provided in the Storage Constraints section below.
-
status.storage.allocationSets.label
is an opaque string that the WLM uses when creating the spec.allocationSets entry in the DWS Servers
resource.
-
status.storage.allocationSets.minimumCapacity
is the allocation capacity in bytes. The interpretation of this field depends on the value of status.storage.allocationSets.allocationStrategy
-
status.storage.lifetime
is used to specify how long the storage allocations will last.
job
- The allocation will last for the lifetime of the job persistent
- The allocation will last for longer than the lifetime of the job
-
status.storage.reference
is an object reference to a DWS Servers
resource where the WLM can specify allocations
"},{"location":"guides/directive-breakdown/readme/#storage-constraints","title":"Storage Constraints","text":"Constraints on an allocation set provide additional requirements for how the storage allocations should be made on NNF nodes.
-
labels
specifies a list of labels that must all be on a DWS Storage
resource in order for an allocation to exist on that Storage
.
constraints:\n labels:\n - dataworkflowservices.github.io/storage=Rabbit\n - mysite.org/pool=firmware_test\n
apiVersion: dataworkflowservices.github.io/v1alpha2\nkind: Storage\nmetadata:\n labels:\n dataworkflowservices.github.io/storage: Rabbit\n mysite.org/pool: firmware_test\n mysite.org/drive-speed: fast\n name: rabbit-node-1\n namespace: default\n ...\n
-
colocation
specifies how two or more allocations influence the location of each other. The colocation constraint has two fields, type
and key
. Currently, the only value for type
is exclusive
. key
can be any value. This constraint means that the allocations from an allocation set with the colocation constraint can't be placed on an NNF node with another allocation whose allocation set has a colocation constraint with the same key. Allocations from allocation sets with colocation constraints with different keys or allocation sets without the colocation constraint are okay to put on the same NNF node.
constraints:\n colocation:\n type: exclusive\n key: lustre-mgt\n
-
count
this field specifies the number of allocations to make when status.storage.allocationSets.allocationStrategy
is AllocateAcrossServers
constraints:\n count: 5\n
-
scale
is a unitless value from 1-10 that is meant to guide the WLM on how many allocations to make when status.storage.allocationSets.allocationStrategy
is AllocateAcrossServers
. The actual number of allocations is not meant to correspond to the value of scale. Rather, 1 would indicate the minimum number of allocations to reach status.storage.allocationSets.minimumCapacity
, and 10 would be the maximum number of allocations that make sense given the status.storage.allocationSets.minimumCapacity
and the compute node count. The NNF software does not interpret this value, and it is up to the WLM to define its meaning.
constraints:\n scale: 8\n
"},{"location":"guides/directive-breakdown/readme/#compute-nodes","title":"Compute Nodes","text":"The status.compute
section of the DirectiveBreakdown
describes how the WLM should pick compute nodes for a job. The status.compute
section will exist only for jobdw
and persistentdw
directives. An example of the status.compute
section is included below.
...\nspec:\n directive: '#DW jobdw capacity=1TiB type=lustre name=example'\n userID: 3450\nstatus:\n...\n compute:\n constraints:\n location:\n - access:\n - priority: mandatory\n type: network\n - priority: bestEffort\n type: physical\n reference:\n fieldPath: servers.spec.allocationSets[0]\n kind: Servers\n name: example-0\n namespace: default\n - access:\n - priority: mandatory\n type: network\n reference:\n fieldPath: servers.spec.allocationSets[1]\n kind: Servers\n name: example-0\n namespace: default\n...\n
The status.compute.constraints
section lists any constraints on which compute nodes can be used. Currently the only constraint type is the location
constraint. status.compute.constraints.location
is a list of location constraints that all must be satisfied.
A location constraint consists of an access
list and a reference
.
status.compute.constraints.location.reference
is an object reference with a fieldPath
that points to an allocation set in the Servers
resource. If this is from a #DW jobdw
directive, the Servers
resource won't be filled in until the WLM picks storage nodes for the allocations. status.compute.constraints.location.access
is a list that specifies what type of access the compute nodes need to have to the storage allocations in the allocation set. An allocation set may have multiple access types that are required status.compute.constraints.location.access.type
specifies the connection type for the storage. This can be network
or physical
status.compute.constraints.location.access.priority
specifies how necessary the connection type is. This can be mandatory
or bestEffort
"},{"location":"guides/directive-breakdown/readme/#requireddaemons","title":"RequiredDaemons","text":"The status.requiredDaemons
section of the DirectiveBreakdown
tells the WLM about any driver-specific daemons it must enable for the job; it is assumed that the WLM knows about the driver-specific daemons and that if the users are specifying these then the WLM knows how to start them. The status.requiredDaemons
section will exist only for jobdw
and persistentdw
directives. An example of the status.requiredDaemons
section is included below.
status:\n...\n requiredDaemons:\n - copy-offload\n...\n
The allowed list of required daemons that may be specified is defined in the nnf-ruleset.yaml for DWS, found in the nnf-sos
repository. The ruleDefs.key[requires]
statement is specified in two places in the ruleset, one for jobdw
and the second for persistentdw
. The ruleset allows a list of patterns to be specified, allowing one for each of the allowed daemons.
The DW
directive will include a comma-separated list of daemons after the requires
keyword. The following is an example:
#DW jobdw type=xfs capacity=1GB name=stg1 requires=copy-offload\n
The DWDirectiveRule
resource currently active on the system can be viewed with:
kubectl get -n dws-system dwdirectiverule nnf -o yaml\n
"},{"location":"guides/directive-breakdown/readme/#valid-daemons","title":"Valid Daemons","text":"Each site should define the list of daemons that are valid for that site and recognized by that site's WLM. The initial nnf-ruleset.yaml
defines only one, called copy-offload
. When a user specifies copy-offload
in their DW
directive, they are stating that their compute-node application will use the Copy Offload API Daemon described in the Data Movement Configuration.
"},{"location":"guides/external-mgs/readme/","title":"Lustre External MGT","text":""},{"location":"guides/external-mgs/readme/#background","title":"Background","text":"Lustre has a limitation where only a single MGT can be mounted on a node at a time. In some situations it may be desirable to share an MGT between multiple Lustre file systems to increase the number of Lustre file systems that can be created and to decrease scheduling complexity. This guide provides instructions on how to configure NNF to share MGTs. There are three methods that can be used:
- Use a Lustre MGT from outside the NNF cluster
- Create a persistent Lustre file system through DWS and use the MGT it provides
- Create a pool of standalone persistent Lustre MGTs, and have the NNF software select one of them
These three methods are not mutually exclusive on the system as a whole. Individual file systems can use any of options 1-3 or create their own MGT.
"},{"location":"guides/external-mgs/readme/#configuration-with-an-external-mgt","title":"Configuration with an External MGT","text":""},{"location":"guides/external-mgs/readme/#storage-profile","title":"Storage Profile","text":"An existing MGT external to the NNF cluster can be used to manage the Lustre file systems on the NNF nodes. An advantage to this configuration is that the MGT can be highly available through multiple MGSs. A disadvantage is that there is only a single MGT. An MGT shared between more than a handful of Lustre file systems is not a common use case, so the Lustre code may prove less stable.
The following yaml provides an example of what the NnfStorageProfile
should contain to use an MGT on an external server.
apiVersion: nnf.cray.hpe.com/v1alpha1\nkind: NnfStorageProfile\nmetadata:\n name: external-mgt\n namespace: nnf-system\ndata:\n[...]\n lustreStorage:\n externalMgs: 1.2.3.4@eth0:1.2.3.5@eth0\n combinedMgtMdt: false\n standaloneMgtPoolName: \"\"\n[...]\n
"},{"location":"guides/external-mgs/readme/#nnflustremgt","title":"NnfLustreMGT","text":"A NnfLustreMGT
resource tracks which fsnames have been used on the MGT to prevent fsname re-use. Any Lustre file systems that are created through the NNF software will request an fsname to use from a NnfLustreMGT
resource. Every MGT must have a corresponding NnfLustreMGT
resource. For MGTs that are hosted on NNF hardware, the NnfLustreMGT
resources are created automatically. The NNF software also erases any unused fsnames from the MGT disk for any internally hosted MGTs.
For a MGT hosted on an external node, an admin must create an NnfLustreMGT
resource. This resource ensures that fsnames will be created in a sequential order without any fsname re-use. However, after an fsname is no longer in use by a file system, it will not be erased from the MGT disk. An admin may decide to periodically run the lctl erase_lcfg [fsname]
command to remove fsnames that are no longer in use.
Below is an example NnfLustreMGT
resource. The NnfLustreMGT
resource for external MGSs must be created in the nnf-system
namespace.
apiVersion: nnf.cray.hpe.com/v1alpha1\nkind: NnfLustreMGT\nmetadata:\n name: external-mgt\n namespace: nnf-system\nspec:\n addresses:\n - \"1.2.3.4@eth0:1.2.3.5@eth0\"\n fsNameStart: \"aaaaaaaa\"\n fsNameBlackList:\n - \"mylustre\"\n fsNameStartReference:\n name: external-mgt\n namespace: default\n kind: ConfigMap\n
addresses
- This is a list of LNet addresses that could be used for this MGT. This should match any values that are used in the externalMgs
field in the NnfStorageProfiles
. fsNameStart
- The first fsname to use. Subsequent fsnames will be incremented based on this starting fsname (e.g., aaaaaaaa
, aaaaaaab
, aaaaaaac
). fsnames use lowercase letters 'a'
-'z'
. fsNameStart
should be exactly 8 characters long. fsNameBlackList
- This is a list of fsnames that should not be given to any NNF Lustre file systems. If the MGT is hosting any non-NNF Lustre file systems, their fsnames should be included in this blacklist. fsNameStartReference
- This is an optional ObjectReference
to a ConfigMap
that holds a starting fsname. If this field is specified, it takes precedence over the fsNameStart
field in the spec. The ConfigMap
will be updated to the next available fsname every time an fsname is assigned to a new Lustre file system.
"},{"location":"guides/external-mgs/readme/#configmap","title":"ConfigMap","text":"For external MGTs, the fsNameStartReference
should be used to point to a ConfigMap
in the default
namespace. The ConfigMap
should be left empty initially. The ConfigMap
is used to hold the value of the next available fsname, and it should not be deleted or modified while a NnfLustreMGT
resource is referencing it. Removing the ConfigMap
will cause the Rabbit software to lose track of which fsnames have already been used on the MGT. This is undesirable unless the external MGT is no longer being used by Rabbit software or if an admin has erased all previously used fsnames with the lctl erase_lcfg [fsname]
command.
When using the ConfigMap
, the nnf-sos software may be undeployed and redeployed without losing track of the next fsname value. During an undeploy, the NnfLustreMGT
resource will be removed. During a deploy, the NnfLustreMGT
resource will read the fsname value from the ConfigMap
if it is present. The value in the ConfigMap
will override the fsname in the fsNameStart
field.
"},{"location":"guides/external-mgs/readme/#configuration-with-persistent-lustre","title":"Configuration with Persistent Lustre","text":"The MGT from a persistent Lustre file system hosted on the NNF nodes can also be used as the MGT for other NNF Lustre file systems. This configuration has the advantage of not relying on any hardware outside of the cluster. However, there is no high availability, and a single MGT is still shared between all Lustre file systems created on the cluster.
To configure a persistent Lustre file system that can share its MGT, a NnfStorageProfile
should be used that does not specify externalMgs
. The MGT can either share a volume with the MDT or not (combinedMgtMdt
).
apiVersion: nnf.cray.hpe.com/v1alpha1\nkind: NnfStorageProfile\nmetadata:\n name: persistent-lustre-shared-mgt\n namespace: nnf-system\ndata:\n[...]\n lustreStorage:\n externalMgs: \"\"\n combinedMgtMdt: false\n standaloneMgtPoolName: \"\"\n[...]\n
The persistent storage is created with the following DW directive:
#DW create_persistent name=shared-lustre capacity=100GiB type=lustre profile=persistent-lustre-shared-mgt\n
After the persistent Lustre file system is created, an admin can discover the MGS address by looking at the NnfStorage
resource with the same name as the persistent storage that was created (shared-lustre
in the above example).
apiVersion: nnf.cray.hpe.com/v1alpha1\nkind: NnfStorage\nmetadata:\n name: shared-lustre\n namespace: default\n[...]\nstatus:\n mgsNode: 5.6.7.8@eth1\n[...]\n
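For example, the MGS address could also be read directly with kubectl (a sketch, assuming the resource names shown above):
kubectl get nnfstorage shared-lustre -n default -o jsonpath='{.status.mgsNode}'\n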
A separate NnfStorageProfile
can be created that specifies the MGS address.
apiVersion: nnf.cray.hpe.com/v1alpha1\nkind: NnfStorageProfile\nmetadata:\n name: internal-mgt\n namespace: nnf-system\ndata:\n[...]\n lustreStorage:\n externalMgs: 5.6.7.8@eth1\n combinedMgtMdt: false\n standaloneMgtPoolName: \"\"\n[...]\n
With this configuration, an admin must determine that no file systems are using the shared MGT before destroying the persistent Lustre instance.
"},{"location":"guides/external-mgs/readme/#configuration-with-an-internal-mgt-pool","title":"Configuration with an Internal MGT Pool","text":"Another method NNF supports is to create a number of persistent Lustre MGTs on NNF nodes. These MGTs are not part of a full file system, but are instead added to a pool of MGTs available for other Lustre file systems to use. Lustre file systems that are created will choose one of the MGTs at random to use and add a reference to make sure it isn't destroyed. This configuration has the advantage of spreading the Lustre management load across multiple servers. The disadvantage of this configuration is that it does not provide high availability.
To configure the system this way, the first step is to make a pool of Lustre MGTs. This is done by creating a persistent instance from a storage profile that specifies the standaloneMgtPoolName
option. This option tells NNF software to only create an MGT, and to add it to a named pool. The following NnfStorageProfile
provides an example where the MGT is added to the example-pool
pool:
apiVersion: nnf.cray.hpe.com/v1alpha1\nkind: NnfStorageProfile\nmetadata:\n name: mgt-pool-member\n namespace: nnf-system\ndata:\n[...]\n lustreStorage:\n externalMgs: \"\"\n combinedMgtMdt: false\n standaloneMgtPoolName: \"example-pool\"\n[...]\n
A persistent storage MGT can be created with the following DW directive:
#DW create_persistent name=mgt-pool-member-1 capacity=1GiB type=lustre profile=mgt-pool-member\n
Multiple persistent instances with different names can be created using the mgt-pool-member
profile to add more than one MGT to the pool.
To create a Lustre file system that uses one of the MGTs from the pool, an NnfStorageProfile
should be created that uses the special notation pool:[pool-name]
in the externalMgs
field.
apiVersion: nnf.cray.hpe.com/v1alpha1\nkind: NnfStorageProfile\nmetadata:\n name: mgt-pool-consumer\n namespace: nnf-system\ndata:\n[...]\n lustreStorage:\n externalMgs: \"pool:example-pool\"\n combinedMgtMdt: false\n standaloneMgtPoolName: \"\"\n[...]\n
The following provides an example DW directive that uses an MGT from the MGT pool:
#DW jobdw name=example-lustre capacity=100GiB type=lustre profile=mgt-pool-consumer\n
MGT pools are named, so there can be separate pools with collections of different MGTs in them. A storage profile targeting each pool would be needed.
"},{"location":"guides/firmware-upgrade/readme/","title":"Firmware Upgrade Procedures","text":"This guide presents the firmware upgrade procedures to upgrade firmware from the Rabbit using tools present in the operating system.
"},{"location":"guides/firmware-upgrade/readme/#pcie-switch-firmware-upgrade","title":"PCIe Switch Firmware Upgrade","text":"In order to upgrade the firmware on the PCIe switch, the switchtec
kernel driver and utility of the same name must be installed. Rabbit hardware consists of two PCIe switches, which can be managed by devices typically located at /dev/switchtec0
and /dev/switchtec1
.
Danger
Upgrading the switch firmware will cause the switch to reset. Prototype Rabbit units not supporting hotplug should undergo a power-cycle to ensure switch initialization following firmware uprade. Similarily, compute nodes not supporting hotplug may lose connectivity after firmware upgrade and should also be power-cycled.
IMAGE=$1 # Provide the path to the firmware image file\nSWITCHES=(\"/dev/switchtec0\" \"/dev/switchtec1\")\nfor SWITCH in \"${SWITCHES[@]}\"; do switchtec fw-update \"$SWITCH\" \"$IMAGE\" --yes; done\n
"},{"location":"guides/firmware-upgrade/readme/#nvme-drive-firmware-upgrade","title":"NVMe Drive Firmware Upgrade","text":"In order to upgrade the firmware on NVMe drives attached to Rabbit, the switchtec
and switchtec-nvme
executables must be installed. All firmware downloads to drives are sent to the physical function of the drive which is accessible only using the switchtec-nvme
executable.
"},{"location":"guides/firmware-upgrade/readme/#batch-method","title":"Batch Method","text":""},{"location":"guides/firmware-upgrade/readme/#download-and-commit-new-firmware","title":"Download and Commit New Firmware","text":"The nvme.sh helper script applies the same command to each physical device fabric ID in the system. It provides a convenient way to upgrade the firmware on all drives in the system. Please see fw-download and fw-commit for details about the individual commands.
# Download firmware to all drives\n./nvme.sh cmd fw-download --fw=</path/to/nvme.fw>\n\n# Commit the new firmware\n# action=3: The image is requested to be activated immediately\n./nvme.sh cmd fw-commit --action=3\n
"},{"location":"guides/firmware-upgrade/readme/#rebind-the-pcie-connections","title":"Rebind the PCIe Connections","text":"In order to use the drives at this point, they must be unbound and bound to the PCIe fabric to reset device connections. The bind.sh helper script performs these two actions. Its use is illustrated below.
# Unbind all drives from the Rabbit to disconnect the PCIe connection to the drives\n./bind.sh unbind\n\n# Bind all drives to the Rabbit to reconnect the PCIe bus\n./bind.sh bind\n\n# At this point, your drives should be running the new firmware.\n# Verify the firmware...\n./nvme.sh cmd id-ctrl | grep -E \"^fr \"\n
"},{"location":"guides/firmware-upgrade/readme/#individual-drive-method","title":"Individual Drive Method","text":""},{"location":"guides/firmware-upgrade/readme/#determine-physical-device-fabric-id","title":"Determine Physical Device Fabric ID","text":"The first step is to determine a drive's unique Physical Device Fabric Identifier (PDFID). The following code fragment demonstrates one way to list the physcial device fabric ids of all the NVMe drives in the system.
#!/bin/bash\n\nSWITCHES=(\"/dev/switchtec0\" \"/dev/switchtec1\")\nfor SWITCH in \"${SWITCHES[@]}\";\ndo\n mapfile -t PDFIDS < <(sudo switchtec fabric gfms-dump \"${SWITCH}\" | grep \"Function 0 \" -A1 | grep PDFID | awk '{print $2}')\n for INDEX in \"${!PDFIDS[@]}\";\n do\n echo \"${PDFIDS[$INDEX]}@$SWITCH\"\n done\ndone\n
# Produces a list like this:\n0x1300@/dev/switchtec0\n0x1600@/dev/switchtec0\n0x1700@/dev/switchtec0\n0x1400@/dev/switchtec0\n0x1800@/dev/switchtec0\n0x1900@/dev/switchtec0\n0x1500@/dev/switchtec0\n0x1a00@/dev/switchtec0\n0x4100@/dev/switchtec1\n0x3c00@/dev/switchtec1\n0x4000@/dev/switchtec1\n0x3e00@/dev/switchtec1\n0x4200@/dev/switchtec1\n0x3b00@/dev/switchtec1\n0x3d00@/dev/switchtec1\n0x3f00@/dev/switchtec1\n
"},{"location":"guides/firmware-upgrade/readme/#download-firmware","title":"Download Firmware","text":"Using the physical device fabric identifier, the following commands update the firmware for specified drive.
# Download firmware to the drive\nsudo switchtec-nvme fw-download <PhysicalDeviceFabricID> --fw=</path/to/nvme.fw>\n\n# Activate the new firmware\n# action=3: The image is requested to be activated immediately without reset.\nsudo switchtec-nvme fw-commit --action=3\n
"},{"location":"guides/firmware-upgrade/readme/#rebind-pcie-connection","title":"Rebind PCIe Connection","text":"Once the firmware has been downloaded and committed, the PCIe connection from the Rabbit to the drive must be unbound and rebound. Please see bind.sh for details.
"},{"location":"guides/global-lustre/readme/","title":"Global Lustre","text":""},{"location":"guides/global-lustre/readme/#background","title":"Background","text":"Adding global lustre to rabbit systems allows access to external file systems. This is primarily used for Data Movement, where a user can perform copy_in
and copy_out
directives with global lustre being the source and destination, respectively.
Global lustre fileystems are represented by the lustrefilesystems
resource in Kubernetes:
$ kubectl get lustrefilesystems -A\nNAMESPACE NAME FSNAME MGSNIDS AGE\ndefault mylustre mylustre 10.1.1.113@tcp 20d\n
An example resource is as follows:
apiVersion: lus.cray.hpe.com/v1beta1\nkind: LustreFileSystem\nmetadata:\n name: mylustre\n namespace: default\nspec:\n mgsNids: 10.1.1.100@tcp\n mountRoot: /p/mylustre\n name: mylustre\n namespaces:\n default:\n modes:\n - ReadWriteMany\n
"},{"location":"guides/global-lustre/readme/#namespaces","title":"Namespaces","text":"Note the spec.namespaces
field. For each namespace listed, the lustre-fs-operator
creates a PV/PVC pair in that namespace. This allows pods in that namespace to access global lustre. The default
namespace should appear in this list. This makes the lustrefilesystem
resource available to the default
namespace, which makes it available to containers (e.g. container workflows) running in the default
namespace.
The nnf-dm-system
namespace is added automatically - no need to specify that manually here. The NNF Data Movement Manager is responsible for ensuring that the nnf-dm-system
is in spec.namespaces
. This is to ensure that the NNF DM Worker pods have global lustre mounted as long as nnf-dm
is deployed. To unmount global lustre from the NNF DM Worker pods, the lustrefilesystem
resource must be deleted.
The lustrefilesystem
resource itself should be created in the default
namespace (i.e. metadata.namespace
).
"},{"location":"guides/global-lustre/readme/#nnf-data-movement-manager","title":"NNF Data Movement Manager","text":"The NNF Data Movement Manager is responsible for monitoring lustrefilesystem
resources to mount (or umount) the global lustre filesystem in each of the NNF DM Worker pods. These pods run on each of the NNF nodes. This means with each addition or removal of lustrefilesystems
resources, the DM worker pods restart to adjust their mount points.
The NNF Data Movement Manager also places a finalizer on the lustrefilesystem
resource to indicate that the resource is in use by Data Movement. This is to prevent the PV/PVC being deleted while they are being used by pods.
"},{"location":"guides/global-lustre/readme/#adding-global-lustre","title":"Adding Global Lustre","text":"As mentioned previously, the NNF Data Movement Manager monitors these resources and automatically adds the nnf-dm-system
namespace to all lustrefilesystem
resources. Once this happens, a PV/PVC is created for the nnf-dm-system
namespace to access global lustre. The Manager updates the NNF DM Worker pods, which are then restarted to mount the global lustre file system.
"},{"location":"guides/global-lustre/readme/#removing-global-lustre","title":"Removing Global Lustre","text":"When a lustrefilesystem
is deleted, the NNF DM Manager takes notice and starts to unmount the file system from the DM Worker pods - causing another restart of the DM Worker pods. Once this is finished, the DM finalizer is removed from the lustrefilesystem
resource to signal that it is no longer in use by Data Movement.
If a lustrefilesystem
does not delete, check the finalizers to see what might still be using it. It is possible to get into a situation where nnf-dm
has been undeployed, so there is nothing to remove the DM finalizer from the lustrefilesystem
resource. If that is the case, then manually remove the DM finalizer so the deletion of the lustrefilesystem
resource can continue.
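As an illustrative sketch, the finalizers can be inspected and the Data Movement entry removed by editing the resource (the exact finalizer string may differ):
kubectl get lustrefilesystem mylustre -n default -o jsonpath='{.metadata.finalizers}'\nkubectl edit lustrefilesystem mylustre -n default  # remove the data movement entry under metadata.finalizers\n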
"},{"location":"guides/ha-cluster/notes/","title":"Notes","text":"pcs stonith create stonith-rabbit-node-1 fence_nnf pcmk_host_list=rabbit-node-1 kubernetes-service-host=10.30.107.247 kubernetes-service-port=6443 service-token-file=/etc/nnf/service.token service-cert-file=/etc/nnf/service.cert nnf-node-name=rabbit-node-1 verbose=1
pcs stonith create stonith-rabbit-compute-2 fence_redfish pcmk_host_list=\"rabbit-compute-2\" ip=10.30.105.237 port=80 systems-uri=/redfish/v1/Systems/1 username=root password=REDACTED ssl_insecure=true verbose=1
pcs stonith create stonith-rabbit-compute-3 fence_redfish pcmk_host_list=\"rabbit-compute-3\" ip=10.30.105.253 port=80 systems-uri=/redfish/v1/Systems/1 username=root password=REDACTED ssl_insecure=true verbose=1
"},{"location":"guides/ha-cluster/readme/","title":"High Availability Cluster","text":"NNF software supports provisioning of Red Hat GFS2 (Global File System 2) storage. Per RedHat:
GFS2 allows multiple nodes to share storage at a block level as if the storage were connected locally to each cluster node. GFS2 cluster file system requires a cluster infrastructure.
Therefore, in order to use GFS2, the NNF node and its associated compute nodes must form a high availability cluster.
"},{"location":"guides/ha-cluster/readme/#cluster-setup","title":"Cluster Setup","text":"Red Hat provides instructions for creating a high availability cluster with Pacemaker, including instructions for installing cluster software and creating a high availability cluster. When following these instructions, each of the high availability clusters that are created should be named after the hostname of the NNF node. In the Red Hat examples the cluster name is my_cluster
.
"},{"location":"guides/ha-cluster/readme/#fencing-agents","title":"Fencing Agents","text":"Fencing is the process of restricting and releasing access to resources that a failed cluster node may have access to. Since a failed node may be unresponsive, an external device must exist that can restrict access to shared resources of that node, or to issue a hard reboot of the node. More information can be found form Red Hat: 1.2.1 Fencing.
HPE hardware implements software known as the Hardware System Supervisor (HSS), which itself conforms to the SNIA Redfish/Swordfish standard. This provides the means to manage hardware outside the host OS.
"},{"location":"guides/ha-cluster/readme/#nnf-fencing","title":"NNF Fencing","text":""},{"location":"guides/ha-cluster/readme/#source","title":"Source","text":"The NNF Fencing agent is available at https://github.com/NearNodeFlash/fence-agents under the nnf
branch.
git clone https://github.com/NearNodeFlash/fence-agents --branch nnf\n
"},{"location":"guides/ha-cluster/readme/#build","title":"Build","text":"Refer to the NNF.md file
at the root directory of the fence-agents repository. The fencing agents must be installed on every node in the cluster.
"},{"location":"guides/ha-cluster/readme/#setup","title":"Setup","text":"Configure the NNF agent with the following parameters:
Argument Definition kubernetes-service-host=[ADDRESS]
The IP address of the kubeapi server kubernetes-service-port=[PORT]
The listening port of the kubeapi server service-token-file=[PATH]
The location of the service token file. The file must be present on all nodes within the cluster service-cert-file=[PATH]
The location of the service certificate file. The file must be present on all nodes within the cluster nnf-node-name=[NNF-NODE-NAME]
Name of the NNF node as it is appears in the System Configuration api-version=[VERSION]
The API Version of the NNF Node resource. Defaults to \"v1alpha1\" The token and certificate can be found in the Kubernetes Secrets resource for the nnf-system/nnf-fencing-agent ServiceAccount. This provides RBAC rules to limit the fencing agent to only the Kubernetes resources it needs access to.
For example, setting up the NNF fencing agent on rabbit-node-1
with a kubernetes service API running at 192.168.0.1:6443
and the service token and certificate copied to /etc/nnf/fence/
. This needs to be run on one node in the cluster.
pcs stonith create rabbit-node-1 fence_nnf pcmk_host_list=rabbit-node-1 kubernetes-service-host=192.168.0.1 kubernetes-service-port=6443 service-token-file=/etc/nnf/fence/service.token service-cert-file=/etc/nnf/fence/service.cert nnf-node-name=rabbit-node-1\n
"},{"location":"guides/ha-cluster/readme/#recovery","title":"Recovery","text":"Since the NNF node is connected to 16 compute blades, careful coordination around fencing of a NNF node is required to minimize the impact of the outage. When a Rabbit node is fenced, the corresponding DWS Storage resource (storages.dws.cray.hpe.com
) status changes. The workload manager must observe this change and follow the procedure below to recover from the fencing status.
- Observed the
storage.Status
changed and that storage.Status.RequiresReboot == True
- Set the
storage.Spec.State := Disabled
- Wait for a change to the Storage status
storage.Status.State == Disabled
- Reboot the NNF node
- Set the
storage.Spec.State := Enabled
- Wait for
storage.Status.State == Enabled
"},{"location":"guides/ha-cluster/readme/#compute-fencing","title":"Compute Fencing","text":"The Redfish fencing agent from ClusterLabs should be used for Compute nodes in the cluster. It is also included at https://github.com/NearNodeFlash/fence-agents, and can be built at the same time as the NNF fencing agent. Configure the agent with the following parameters:
Argument Definition ip=[ADDRESS]
The IP address or hostname of the HSS controller port=80
The Port of the HSS controller. Must be 80
systems-uri=/redfish/v1/Systems/1
The URI of the Systems object. Must be /redfish/v1/Systems/1
ssl-insecure=true
Instructs the use of an insecure SSL exchange. Must be true
username=[USER]
The user name for connecting to the HSS controller password=[PASSWORD]
the password for connecting to the HSS controller For example, setting up the Redfish fencing agent on rabbit-compute-2
with the redfish service at 192.168.0.1
. This needs to be run on one node in the cluster.
pcs stonith create rabbit-compute-2 fence_redfish pcmk_host_list=rabbit-compute-2 ip=192.168.0.1 systems-uri=/redfish/v1/Systems/1 username=root password=password ssl_insecure=true\n
"},{"location":"guides/ha-cluster/readme/#dummy-fencing","title":"Dummy Fencing","text":"The dummy fencing agent from ClusterLabs can be used for nodes in the cluster for an early access development system.
"},{"location":"guides/ha-cluster/readme/#configuring-a-gfs2-file-system-in-a-cluster","title":"Configuring a GFS2 file system in a cluster","text":"Follow steps 1-8 of the procedure from Red Hat: Configuring a GFS2 file system in a cluster.
"},{"location":"guides/initial-setup/readme/","title":"Initial Setup Instructions","text":"Instructions for the initial setup of a Rabbit are included in this document.
"},{"location":"guides/initial-setup/readme/#lvm-configuration-on-rabbit","title":"LVM Configuration on Rabbit","text":"LVM Details Running LVM commands (lvcreate/lvremove) on a Rabbit to create logical volumes is problematic if those commands run within a container. Rabbit Storage Orchestration code contained in the nnf-node-manager
Kubernetes pod executes LVM commands from within the container. The problem is that the LVM create/remove commands wait for a UDEV confirmation cookie that is set when UDEV rules run within the host OS. These cookies are not synchronized with the containers where the LVM commands execute.
3 options to solve this problem are:
- Disable UDEV sync at the host operating system level
- Disable UDEV sync using the
\u2013noudevsync
command option for each LVM command - Clear the UDEV cookie using the
dmsetup udevcomplete_all
command after the lvcreate/lvremove command.
Taking these in reverse order using option 3 above which allows UDEV settings within the host OS to remain unchanged from the default, one would need to start the dmsetup
command on a separate thread because the LVM create/remove command waits for the UDEV cookie. This opens too many error paths, so it was rejected.
Option 2 allows UDEV settings within the host OS to remain unchanged from the default, but the use of UDEV within production Rabbit systems is viewed as unnecessary because the host OS is PXE-booted onto the node vs loaded from an device that is discovered by UDEV.
Option 1 above is what we chose to implement because it is the simplest. The following sections discuss this setting.
In order for LVM commands to run within the container environment on a Rabbit, the following change is required to the /etc/lvm/lvm.conf
file on Rabbit.
sed -i 's/udev_sync = 1/udev_sync = 0/g' /etc/lvm/lvm.conf\n
"},{"location":"guides/initial-setup/readme/#zfs","title":"ZFS","text":"ZFS kernel module must be enabled to run on boot. This can be done by creating a file, zfs.conf
, containing the string \"zfs\" in your systems modules-load.d directory.
echo \"zfs\" > /etc/modules-load.d/zfs.conf\n
"},{"location":"guides/initial-setup/readme/#kubernetes-initial-setup","title":"Kubernetes Initial Setup","text":"Installation of Kubernetes (k8s) nodes proceeds by installing k8s components onto the master node(s) of the cluster, then installing k8s components onto the worker nodes and joining those workers to the cluster. The k8s cluster setup for Rabbit requires 3 distinct k8s node types for operation:
- Master: 1 or more master nodes which serve as the Kubernetes API server and control access to the system. For HA, at least 3 nodes should be dedicated to this role.
- Worker: 1 or more worker nodes which run the system level controller manager (SLCM) and Data Workflow Services (DWS) pods. In production, at least 3 nodes should be dedicated to this role.
- Rabbit: 1 or more Rabbit nodes which run the node level controller manager (NLCM) code. The NLCM daemonset pods are exclusively scheduled on Rabbit nodes. All Rabbit nodes are joined to the cluster as k8s workers, and they are tainted to restrict the type of work that may be scheduled on them. The NLCM pod has a toleration that allows it to run on the tainted (i.e. Rabbit) nodes.
"},{"location":"guides/initial-setup/readme/#kubernetes-node-labels","title":"Kubernetes Node Labels","text":"Node Type Node Label Generic Kubernetes Worker Node cray.nnf.manager=true Rabbit Node cray.nnf.node=true"},{"location":"guides/initial-setup/readme/#kubernetes-node-taints","title":"Kubernetes Node Taints","text":"Node Type Node Label Rabbit Node cray.nnf.node=true:NoSchedule See Taints and Tolerations. The SystemConfiguration controller will handle node taints and labels for the rabbit nodes based on the contents of the SystemConfiguration resource described below.
"},{"location":"guides/initial-setup/readme/#rabbit-system-configuration","title":"Rabbit System Configuration","text":"The SystemConfiguration Custom Resource Definition (CRD) is a DWS resource that describes the hardware layout of the whole system. It is expected that an administrator creates a single SystemConfiguration resource when the system is being set up. There is no need to update the SystemConfiguration resource unless hardware is added to or removed from the system.
System Configuration Details Rabbit software looks for a SystemConfiguration named default
in the default
namespace. This resource contains a list of compute nodes and storage nodes, and it describes the mapping between them. There are two different consumers of the SystemConfiguration resource in the NNF software:
NnfNodeReconciler
- The reconciler for the NnfNode resource running on the Rabbit nodes reads the SystemConfiguration resource. It uses the Storage to compute mapping information to fill in the HostName section of the NnfNode resource. This information is then used to populate the DWS Storage resource.
NnfSystemConfigurationReconciler
- This reconciler runs in the nnf-controller-manager
. It creates a Namespace for each compute node listed in the SystemConfiguration. These namespaces are used by the client mount code.
Here is an example SystemConfiguration
:
Spec Section Notes computeNodes List of names of compute nodes in the system storageNodes List of Rabbits and the compute nodes attached storageNodes[].type Must be \"Rabbit\" storageNodes[].computeAccess List of {slot, compute name} elements that indicate physical slot index that the named compute node is attached to apiVersion: dataworkflowservices.github.io/v1alpha2\nkind: SystemConfiguration\nmetadata:\n name: default\n namespace: default\nspec:\n computeNodes:\n - name: compute-01\n - name: compute-02\n - name: compute-03\n - name: compute-04\n ports:\n - 5000-5999\n portsCooldownInSeconds: 0\n storageNodes:\n - computesAccess:\n - index: 0\n name: compute-01\n - index: 1\n name: compute-02\n - index: 6\n name: compute-03\n name: rabbit-name-01\n type: Rabbit\n - computesAccess:\n - index: 4\n name: compute-04\n name: rabbit-name-02\n type: Rabbit\n
"},{"location":"guides/node-management/drain/","title":"Disable Or Drain A Node","text":""},{"location":"guides/node-management/drain/#disabling-a-node","title":"Disabling a node","text":"A Rabbit node can be manually disabled, indicating to the WLM that it should not schedule more jobs on the node. Jobs currently on the node will be allowed to complete at the discretion of the WLM.
Disable a node by setting its Storage state to Disabled
.
kubectl patch storage $NODE --type=json -p '[{\"op\":\"replace\", \"path\":\"/spec/state\", \"value\": \"Disabled\"}]'\n
When the Storage is queried by the WLM, it will show the disabled status.
$ kubectl get storages\nNAME STATE STATUS MODE AGE\nkind-worker2 Enabled Ready Live 10m\nkind-worker3 Disabled Disabled Live 10m\n
To re-enable a node, set its Storage state to Enabled
.
kubectl patch storage $NODE --type=json -p '[{\"op\":\"replace\", \"path\":\"/spec/state\", \"value\": \"Enabled\"}]'\n
The Storage state will show that it is enabled.
kubectl get storages\nNAME STATE STATUS MODE AGE\nkind-worker2 Enabled Ready Live 10m\nkind-worker3 Enabled Ready Live 10m\n
"},{"location":"guides/node-management/drain/#draining-a-node","title":"Draining a node","text":"The NNF software consists of a collection of DaemonSets and Deployments. The pods on the Rabbit nodes are usually from DaemonSets. Because of this, the kubectl drain
command is not able to remove the NNF software from a node. See Safely Drain a Node for details about the limitations posed by DaemonSet pods.
Given the limitations of DaemonSets, the NNF software will be drained by using taints, as described in Taints and Tolerations.
This would be used only after the WLM jobs have been removed from that Rabbit (preferably) and there is some reason to also remove the NNF software from it. This might be used before a Rabbit is powered off and pulled out of the cabinet, for example, to avoid leaving pods in \"Terminating\" state (harmless, but it's noise).
If an admin used this taint before power-off it would mean there wouldn't be \"Terminating\" pods lying around for that Rabbit. After a new/same Rabbit is put back in its place, the NNF software won't jump back on it while the taint is present. The taint can be removed at any time, from immediately after the node is powered off up to some time after the new/same Rabbit is powered back on.
"},{"location":"guides/node-management/drain/#drain-nnf-pods-from-a-rabbit-node","title":"Drain NNF pods from a rabbit node","text":"Drain the NNF software from a node by applying the cray.nnf.node.drain
taint. The CSI driver pods will remain on the node to satisfy any unmount requests from k8s as it cleans up the NNF pods.
kubectl taint node $NODE cray.nnf.node.drain=true:NoSchedule cray.nnf.node.drain=true:NoExecute\n
This will cause the node's Storage
resource to be drained:
$ kubectl get storages\nNAME STATE STATUS MODE AGE\nkind-worker2 Enabled Drained Live 5m44s\nkind-worker3 Enabled Ready Live 5m45s\n
The Storage
resource will contain the following message indicating the reason it has been drained:
$ kubectl get storages rabbit1 -o json | jq -rM .status.message\nKubernetes node is tainted with cray.nnf.node.drain\n
To restore the node to service, remove the cray.nnf.node.drain
taint.
kubectl taint node $NODE cray.nnf.node.drain-\n
The Storage
resource will revert to a Ready
status.
"},{"location":"guides/node-management/drain/#the-csi-driver","title":"The CSI driver","text":"While the CSI driver pods may be drained from a Rabbit node, it is inadvisable to do so.
Warning K8s relies on the CSI driver to unmount any filesystems that may have been mounted into a pod's namespace. If it is not present when k8s is attempting to remove a pod then the pod may be left in \"Terminating\" state. This is most obvious when draining the nnf-dm-worker
pods which usually have filesystems mounted in them.
Drain the CSI driver pod from a node by applying the cray.nnf.node.drain.csi
taint.
kubectl taint node $NODE cray.nnf.node.drain.csi=true:NoSchedule cray.nnf.node.drain.csi=true:NoExecute\n
To restore the CSI driver pods to that node, remove the cray.nnf.node.drain.csi
taint.
kubectl taint node $NODE cray.nnf.node.drain.csi-\n
This taint will also drain the remaining NNF software if has not already been drained by the cray.nnf.node.drain
taint.
"},{"location":"guides/node-management/nvme-namespaces/","title":"Debugging NVMe Namespaces","text":""},{"location":"guides/node-management/nvme-namespaces/#total-space-available-or-used","title":"Total Space Available or Used","text":"Find the total space available, and the total space used, on a Rabbit node using the Redfish API. One way to access the API is to use the nnf-node-manager
pod on that node.
To view the space on node ee50, find its nnf-node-manager
pod and then exec into it to query the Redfish API:
[richerso@ee1:~]$ kubectl get pods -A -o wide | grep ee50 | grep node-manager\nnnf-system nnf-node-manager-jhglm 1/1 Running 0 61m 10.85.71.11 ee50 <none> <none>\n
Then query the Redfish API to view the AllocatedBytes
and GuaranteedBytes
:
[richerso@ee1:~]$ kubectl exec --stdin --tty -n nnf-system nnf-node-manager-jhglm -- curl -S localhost:50057/redfish/v1/StorageServices/NNF/CapacitySource | jq\n{\n \"@odata.id\": \"/redfish/v1/StorageServices/NNF/CapacitySource\",\n \"@odata.type\": \"#CapacitySource.v1_0_0.CapacitySource\",\n \"Id\": \"0\",\n \"Name\": \"Capacity Source\",\n \"ProvidedCapacity\": {\n \"Data\": {\n \"AllocatedBytes\": 128849888,\n \"ConsumedBytes\": 128849888,\n \"GuaranteedBytes\": 307132496928,\n \"ProvisionedBytes\": 307261342816\n },\n \"Metadata\": {},\n \"Snapshot\": {}\n },\n \"ProvidedClassOfService\": {},\n \"ProvidingDrives\": {},\n \"ProvidingPools\": {},\n \"ProvidingVolumes\": {},\n \"Actions\": {},\n \"ProvidingMemory\": {},\n \"ProvidingMemoryChunks\": {}\n}\n
"},{"location":"guides/node-management/nvme-namespaces/#total-orphaned-or-leaked-space","title":"Total Orphaned or Leaked Space","text":"To determine the amount of orphaned space, look at the Rabbit node when there are no allocations on it. If there are no allocations then there should be no NnfNodeBlockStorages
in the k8s namespace with the Rabbit's name:
[richerso@ee1:~]$ kubectl get nnfnodeblockstorage -n ee50\nNo resources found in ee50 namespace.\n
To check that there are no orphaned namespaces, you can use the nvme command while logged into that Rabbit node:
[root@ee50:~]# nvme list\nNode SN Model Namespace Usage Format FW Rev\n--------------------- -------------------- ---------------------------------------- --------- -------------------------- ---------------- --------\n/dev/nvme0n1 S666NN0TB11877 SAMSUNG MZ1L21T9HCLS-00A07 1 8.57 GB / 1.92 TB 512 B + 0 B GDC7302Q\n
There should be no namespaces on the kioxia drives:
[root@ee50:~]# nvme list | grep -i kioxia\n[root@ee50:~]#\n
If there are namespaces listed, and there weren't any NnfNodeBlockStorages
on the node, then they need to be deleted through the Rabbit software. The NnfNodeECData
resource is a persistent data store for the allocations that should exist on the Rabbit. Deleting it, and then deleting the nnf-node-manager pod, causes nnf-node-manager to delete the orphaned namespaces. This can take a few minutes after the pod is deleted:
kubectl delete nnfnodeecdata ec-data -n ee50\nkubectl delete pod -n nnf-system nnf-node-manager-jhglm\n
"},{"location":"guides/rbac-for-users/readme/","title":"RBAC: Role-Based Access Control","text":"RBAC (Role Based Access Control) determines the operations a user or service can perform on a list of Kubernetes resources. RBAC affects everything that interacts with the kube-apiserver (both users and services internal or external to the cluster). More information about RBAC can be found in the Kubernetes documentation.
"},{"location":"guides/rbac-for-users/readme/#rbac-for-users","title":"RBAC for Users","text":"This section shows how to create a kubeconfig file with RBAC set up to restrict access to view only for resources.
"},{"location":"guides/rbac-for-users/readme/#overview","title":"Overview","text":"User access to a Kubernetes cluster is defined through a kubeconfig file. This file contains the address of the kube-apiserver as well as the key and certificate for the user. Typically this file is located in ~/.kube/config
. When a kubernetes cluster is created, a config file is generated for the admin that allows unrestricted access to all resources in the cluster. This is the equivalent of root
on a Linux system.
The goal of this document is to create a new kubeconfig file that allows view-only access to Kubernetes resources. This kubeconfig file can be shared among HPE employees to investigate issues on the system. This involves:
- Generating a new key/cert pair for an \"hpe\" user
- Creating a new kubeconfig file
- Adding RBAC rules for the \"hpe\" user to allow read access
"},{"location":"guides/rbac-for-users/readme/#generate-a-key-and-certificate","title":"Generate a Key and Certificate","text":"The first step is to create a new key and certificate so that HPE employees can authenticate as the \"hpe\" user. This will likely be done on one of the master nodes. The openssl
command needs access to the certificate authority file. This is typically located in /etc/kubernetes/pki
.
# make a temporary work space\nmkdir /tmp/rabbit\ncd /tmp/rabbit\n\n# Create this user\nexport USERNAME=hpe\n\n# generate a new key\nopenssl genrsa -out rabbit.key 2048\n\n# create a certificate signing request for this user\nopenssl req -new -key rabbit.key -out rabbit.csr -subj \"/CN=$USERNAME\"\n\n# generate a certificate using the certificate authority on the k8s cluster. This certificate lasts 500 days\nopenssl x509 -req -in rabbit.csr -CA /etc/kubernetes/pki/ca.crt -CAkey /etc/kubernetes/pki/ca.key -CAcreateserial -out rabbit.crt -days 500\n
"},{"location":"guides/rbac-for-users/readme/#create-a-kubeconfig","title":"Create a kubeconfig","text":"After the keys have been generated, a new kubeconfig file can be created for this user. The admin kubeconfig /etc/kubernetes/admin.conf
can be used to determine the cluster name and kube-apiserver address.
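For example, the cluster name and server address can be pulled from the admin kubeconfig (a sketch that assumes a single cluster entry in admin.conf):
export CLUSTER_NAME=$(kubectl config view --kubeconfig=/etc/kubernetes/admin.conf -o jsonpath='{.clusters[0].name}')\nexport SERVER_ADDRESS=$(kubectl config view --kubeconfig=/etc/kubernetes/admin.conf -o jsonpath='{.clusters[0].cluster.server}')\n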
# create a new kubeconfig with the server information\nkubectl config set-cluster $CLUSTER_NAME --kubeconfig=/tmp/rabbit/rabbit.conf --server=$SERVER_ADDRESS --certificate-authority=/etc/kubernetes/pki/ca.crt --embed-certs=true\n\n# add the key and cert for this user to the config\nkubectl config set-credentials $USERNAME --kubeconfig=/tmp/rabbit/rabbit.conf --client-certificate=/tmp/rabbit/rabbit.crt --client-key=/tmp/rabbit/rabbit.key --embed-certs=true\n\n# add a context\nkubectl config set-context $USERNAME --kubeconfig=/tmp/rabbit/rabbit.conf --cluster=$CLUSTER_NAME --user=$USERNAME\n
The kubeconfig file should be placed in a location where HPE employees have read access to it.
"},{"location":"guides/rbac-for-users/readme/#create-clusterrole-and-clusterrolebinding","title":"Create ClusterRole and ClusterRoleBinding","text":"The next step is to create ClusterRole and ClusterRoleBinding resources. The ClusterRole provided allows viewing all cluster and namespace scoped resources, but disallows creating, deleting, or modifying any resources.
ClusterRole
apiVersion: rbac.authorization.k8s.io/v1\nkind: ClusterRole\nmetadata:\n name: hpe-viewer\nrules:\n - apiGroups: [ \"*\" ]\n resources: [ \"*\" ]\n verbs: [ get, list ]\n
ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1\nkind: ClusterRoleBinding\nmetadata:\n name: hpe-viewer\nsubjects:\n- kind: User\n name: hpe\n apiGroup: rbac.authorization.k8s.io\nroleRef:\n kind: ClusterRole\n name: hpe-viewer\n apiGroup: rbac.authorization.k8s.io\n
Both of these resources can be created using the kubectl apply
command.
"},{"location":"guides/rbac-for-users/readme/#testing","title":"Testing","text":"Get, List, Create, Delete, and Modify operations can be tested as the \"hpe\" user by setting the KUBECONFIG environment variable to use the new kubeconfig file. Get and List should be the only allowed operations. Other operations should fail with a \"forbidden\" error.
export KUBECONFIG=/tmp/rabbit/rabbit.conf\n
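As a quick check, kubectl auth can-i reports the effective permissions for the current kubeconfig (a sketch; the resources queried are illustrative):
kubectl auth can-i list pods --all-namespaces # expect yes\nkubectl auth can-i delete pods --all-namespaces # expect no\n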
"},{"location":"guides/rbac-for-users/readme/#rbac-for-workload-manager-wlm","title":"RBAC for Workload Manager (WLM)","text":"Note This section assumes the reader has read and understood the steps described above for setting up RBAC for Users
.
A workload manager (WLM) such as Flux or Slurm will interact with DataWorkflowServices as a privileged user. RBAC is used to limit the operations that a WLM can perform on a Rabbit system.
The following steps are required to create a user and a role for the WLM. In this case, we're creating a user to be used with the Flux WLM:
- Generate a new key/cert pair for a \"flux\" user
- Creating a new kubeconfig file
- Adding RBAC rules for the \"flux\" user to allow appropriate access to the DataWorkflowServices API.
"},{"location":"guides/rbac-for-users/readme/#generate-a-key-and-certificate_1","title":"Generate a Key and Certificate","text":"Generate a key and certificate for our \"flux\" user, similar to the way we created one for the \"hpe\" user above. Substitute \"flux\" in place of \"hpe\".
"},{"location":"guides/rbac-for-users/readme/#create-a-kubeconfig_1","title":"Create a kubeconfig","text":"After the keys have been generated, a new kubeconfig file can be created for the \"flux\" user, similar to the one for the \"hpe\" user above. Again, substitute \"flux\" in place of \"hpe\".
"},{"location":"guides/rbac-for-users/readme/#use-the-provided-clusterrole-and-create-a-clusterrolebinding","title":"Use the provided ClusterRole and create a ClusterRoleBinding","text":"DataWorkflowServices has already defined the role to be used with WLMs, named dws-workload-manager
:
kubectl get clusterrole dws-workload-manager\n
If the \"flux\" user requires only the normal WLM permissions, then create and apply a ClusterRoleBinding to associate the \"flux\" user with the dws-workload-manager
ClusterRole.
The dws-workload-manager role is defined in workload_manager_role.yaml.
ClusterRoleBinding for WLM permissions only:
apiVersion: rbac.authorization.k8s.io/v1\nkind: ClusterRoleBinding\nmetadata:\n name: flux\nsubjects:\n- kind: User\n name: flux\n apiGroup: rbac.authorization.k8s.io\nroleRef:\n kind: ClusterRole\n name: dws-workload-manager\n apiGroup: rbac.authorization.k8s.io\n
If the \"flux\" user requires the normal WLM permissions as well as some of the NNF permissions, perhaps to collect some NNF resources for debugging, then create and apply a ClusterRoleBinding to associate the \"flux\" user with the nnf-workload-manager
ClusterRole.
The nnf-workload-manager
role is defined in workload_manager_nnf_role.yaml.
ClusterRoleBinding for WLM and NNF permissions:
apiVersion: rbac.authorization.k8s.io/v1\nkind: ClusterRoleBinding\nmetadata:\n name: flux\nsubjects:\n- kind: User\n name: flux\n apiGroup: rbac.authorization.k8s.io\nroleRef:\n kind: ClusterRole\n name: nnf-workload-manager\n apiGroup: rbac.authorization.k8s.io\n
The WLM should then use the kubeconfig file associated with this \"flux\" user to access the DataWorkflowServices API and the Rabbit system.
"},{"location":"guides/storage-profiles/readme/","title":"Storage Profile Overview","text":"Storage Profiles allow for customization of the Rabbit storage provisioning process. Examples of content that can be customized via storage profiles is
- The RAID type used for storage
- Any mkfs or LVM args used
- An external MGS NID for Lustre
- A boolean value indicating the Lustre MGT and MDT should be combined on the same target device
DW directives that allocate storage on Rabbit nodes allow a profile
parameter to be specified to control how the storage is configured. NNF software provides a set of canned profiles to choose from, and the administrator may create more profiles.
The administrator shall choose one profile to be the default profile that is used when a profile parameter is not specified.
"},{"location":"guides/storage-profiles/readme/#specifying-a-profile","title":"Specifying a Profile","text":"To specify a profile name on a #DW directive, use the profile
option
#DW jobdw type=lustre profile=durable capacity=5GB name=example\n
"},{"location":"guides/storage-profiles/readme/#setting-a-default-profile","title":"Setting A Default Profile","text":"A default profile must be defined at all times. Any #DW line that does not specify a profile will use the default profile. If a default profile is not defined, then any new workflows will be rejected. If more than one profile is marked as default then any new workflows will be rejected.
To query existing profiles
$ kubectl get nnfstorageprofiles -A\nNAMESPACE NAME DEFAULT AGE\nnnf-system durable true 14s\nnnf-system performance false 6s\n
To set the default flag on a profile
$ kubectl patch nnfstorageprofile performance -n nnf-system --type merge -p '{\"data\":{\"default\":true}}'\n
To clear the default flag on a profile
$ kubectl patch nnfstorageprofile durable -n nnf-system --type merge -p '{\"data\":{\"default\":false}}'\n
"},{"location":"guides/storage-profiles/readme/#creating-the-initial-default-profile","title":"Creating The Initial Default Profile","text":"Create the initial default profile from scratch or by using the NnfStorageProfile/template resource as a template. If nnf-deploy
was used to install nnf-sos then the default profile described below will have been created automatically.
To use the template
resource begin by obtaining a copy of it either from the nnf-sos repo or from a live system. To get it from a live system use the following command:
kubectl get nnfstorageprofile -n nnf-system template -o yaml > profile.yaml\n
Edit the profile.yaml
file to trim the metadata section to contain only a name and namespace. The namespace must be left as nnf-system, but the name should be set to signify that this is the new default profile. In this example we will name it default
. The metadata section will look like the following, and will contain no other fields:
metadata:\n name: default\n namespace: nnf-system\n
Mark this new profile as the default profile by setting default: true
in the data section of the resource:
data:\n default: true\n
Apply this resource to the system and verify that it is the only one marked as the default resource:
kubectl get nnfstorageprofile -A\n
The output will appear similar to the following:
NAMESPACE NAME DEFAULT AGE\nnnf-system default true 9s\nnnf-system template false 11s\n
The administrator should edit the default
profile to record any cluster-specific settings. Maintain a copy of this resource YAML in a safe place so it isn't lost across upgrades.
"},{"location":"guides/storage-profiles/readme/#keeping-the-default-profile-updated","title":"Keeping The Default Profile Updated","text":"An upgrade of nnf-sos may include updates to the template
profile. It may be necessary to manually copy these updates into the default
profile.
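One way to spot such updates (a sketch, using bash process substitution) is to diff the two profiles:
diff <(kubectl get nnfstorageprofile -n nnf-system template -o yaml) <(kubectl get nnfstorageprofile -n nnf-system default -o yaml)\n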
"},{"location":"guides/storage-profiles/readme/#profile-parameters","title":"Profile Parameters","text":""},{"location":"guides/storage-profiles/readme/#xfs","title":"XFS","text":"The following shows how to specify command line options for pvcreate, vgcreate, lvcreate, and mkfs for XFS storage. Optional mount options are specified one per line
apiVersion: nnf.cray.hpe.com/v1alpha1\nkind: NnfStorageProfile\nmetadata:\n name: xfs-stripe-example\n namespace: nnf-system\ndata:\n[...]\n xfsStorage:\n commandlines:\n pvCreate: $DEVICE\n vgCreate: $VG_NAME $DEVICE_LIST\n lvCreate: -l 100%VG --stripes $DEVICE_NUM --stripesize=32KiB --name $LV_NAME $VG_NAME\n mkfs: $DEVICE\n options:\n mountRabbit:\n - noatime\n - nodiratime\n[...]\n
"},{"location":"guides/storage-profiles/readme/#gfs2","title":"GFS2","text":"The following shows how to specify command line options for pvcreate, lvcreate, and mkfs for GFS2.
apiVersion: nnf.cray.hpe.com/v1alpha1\nkind: NnfStorageProfile\nmetadata:\n name: gfs2-stripe-example\n namespace: nnf-system\ndata:\n[...]\n gfs2Storage:\n commandlines:\n pvCreate: $DEVICE\n vgCreate: $VG_NAME $DEVICE_LIST\n lvCreate: -l 100%VG --stripes $DEVICE_NUM --stripesize=32KiB --name $LV_NAME $VG_NAME\n mkfs: -j2 -p $PROTOCOL -t $CLUSTER_NAME:$LOCK_SPACE $DEVICE\n[...]\n
"},{"location":"guides/storage-profiles/readme/#lustre-zfs","title":"Lustre / ZFS","text":"The following shows how to specify a zpool virtual device (vdev). In this case the default vdev is a stripe. See zpoolconcepts(7) for virtual device descriptions.
apiVersion: nnf.cray.hpe.com/v1alpha1\nkind: NnfStorageProfile\nmetadata:\n name: zpool-stripe-example\n namespace: nnf-system\ndata:\n[...]\n lustreStorage:\n mgtCommandlines:\n zpoolCreate: -O canmount=off -o cachefile=none $POOL_NAME $DEVICE_LIST\n mkfs: --mgs $VOL_NAME\n mdtCommandlines:\n zpoolCreate: -O canmount=off -o cachefile=none $POOL_NAME $DEVICE_LIST\n mkfs: --mdt --fsname=$FS_NAME --mgsnode=$MGS_NID --index=$INDEX $VOL_NAME\n mgtMdtCommandlines:\n zpoolCreate: -O canmount=off -o cachefile=none $POOL_NAME $DEVICE_LIST\n mkfs: --mgs --mdt --fsname=$FS_NAME --index=$INDEX $VOL_NAME\n ostCommandlines:\n zpoolCreate: -O canmount=off -o cachefile=none $POOL_NAME $DEVICE_LIST\n mkfs: --ost --fsname=$FS_NAME --mgsnode=$MGS_NID --index=$INDEX $VOL_NAME\n[...]\n
"},{"location":"guides/storage-profiles/readme/#zfs-dataset-properties","title":"ZFS dataset properties","text":"The following shows how to specify ZFS dataset properties in the --mkfsoptions
arg for mkfs.lustre. See zfsprops(7).
apiVersion: nnf.cray.hpe.com/v1alpha1\nkind: NnfStorageProfile\nmetadata:\n name: zpool-stripe-example\n namespace: nnf-system\ndata:\n[...]\n lustreStorage:\n[...]\n ostCommandlines:\n zpoolCreate: -O canmount=off -o cachefile=none $POOL_NAME $DEVICE_LIST\n mkfs: --ost --mkfsoptions=\"recordsize=1024K -o compression=lz4\" --fsname=$FS_NAME --mgsnode=$MGS_NID --index=$INDEX $VOL_NAME\n[...]\n
"},{"location":"guides/storage-profiles/readme/#mount-options-for-targets","title":"Mount Options for Targets","text":""},{"location":"guides/storage-profiles/readme/#persistent-mount-options","title":"Persistent Mount Options","text":"Use the mkfs.lustre --mountfsoptions
parameter to set persistent mount options for Lustre targets.
apiVersion: nnf.cray.hpe.com/v1alpha1\nkind: NnfStorageProfile\nmetadata:\n name: target-mount-option-example\n namespace: nnf-system\ndata:\n[...]\n lustreStorage:\n[...]\n ostCommandlines:\n zpoolCreate: -O canmount=off -o cachefile=none $POOL_NAME $DEVICE_LIST\n mkfs: --ost --mountfsoptions=\"errors=remount-ro,mballoc\" --mkfsoptions=\"recordsize=1024K -o compression=lz4\" --fsname=$FS_NAME --mgsnode=$MGS_NID --index=$INDEX $VOL_NAME\n[...]\n
"},{"location":"guides/storage-profiles/readme/#non-persistent-mount-options","title":"Non-Persistent Mount Options","text":"Non-persistent mount options can be specified with the ostOptions.mountTarget parameter to the NnfStorageProfile:
apiVersion: nnf.cray.hpe.com/v1alpha1\nkind: NnfStorageProfile\nmetadata:\n name: target-mount-option-example\n namespace: nnf-system\ndata:\n[...]\n lustreStorage:\n[...]\n ostCommandlines:\n zpoolCreate: -O canmount=off -o cachefile=none $POOL_NAME $DEVICE_LIST\n mkfs: --ost --mountfsoptions=\"errors=remount-ro\" --mkfsoptions=\"recordsize=1024K -o compression=lz4\" --fsname=$FS_NAME --mgsnode=$MGS_NID --index=$INDEX $VOL_NAME\n ostOptions:\n mountTarget:\n - mballoc\n[...]\n
"},{"location":"guides/storage-profiles/readme/#target-layout","title":"Target Layout","text":"Users may want Lustre file systems with different performance characteristics. For example, a user job with a single compute node accessing the Lustre file system would see acceptable performance from a single OSS. An FPP workload might want as many OSSs as posible to avoid contention.
The NnfStorageProfile
allows admins to specify where and how many Lustre targets are allocated by the WLM. During the proposal phase of the workflow, the NNF software uses the information in the NnfStorageProfile
to add extra constraints in the DirectiveBreakdown
. The WLM uses these constraints when picking storage.
The NnfStorageProfile
has three fields in the mgtOptions
, mdtOptions
, and ostOptions
to specify target layout. The fields are:
count
- A static value for how many Lustre targets to create. scale
- A value from 1-10 that the WLM can use to determine how many Lustre targets to allocate. This is up to the WLM and the admins to agree on how to interpret this field. A value of 1 might indicate the minimum number of NNF nodes needed to reach the minimum capacity, while 10 might result in a Lustre target on every Rabbit attached to the computes in the job. Scale takes into account allocation size, compute node count, and Rabbit count. colocateComputes
- true/false value. When \"true\", this adds a location constraint in the DirectiveBreakdown
that limits the WLM to picking storage with a physical connection to the compute resources. In practice this means that Rabbit storage is restricted to the chassis used by the job. This can be set individually for each of the Lustre target types. When this is \"false\", any Rabbit storage can be picked, even if the Rabbit doesn't share a chassis with any of the compute nodes in the job.
Only one of scale
and count
can be set for a particular target type.
The DirectiveBreakdown
for create_persistent
#DWs won't include the constraint from colocateCompute=true
since there may not be any compute nodes associated with the job.
apiVersion: nnf.cray.hpe.com/v1alpha1\nkind: NnfStorageProfile\nmetadata:\n name: high-metadata\n namespace: default\ndata:\n default: false\n...\n lustreStorage:\n combinedMgtMdt: false\n capacityMdt: 500GiB\n capacityMgt: 1GiB\n[...]\n ostOptions:\n scale: 5\n colocateComputes: true\n mdtOptions:\n count: 10\n
"},{"location":"guides/storage-profiles/readme/#example-layouts","title":"Example Layouts","text":"scale
with colocateComputes=true
will likely be the most common layout type to use for jobdw
directives. This will result in a Lustre file system whose performance scales with the number of compute nodes in the job.
count
may be used when a specific performance characteristic is desired such as a single shared file workload that has low metadata requirements and only needs a single MDT. It may also be useful when a consistently performing file system is required across different jobs.
colocatedComputes=false
may be useful for placing MDTs on NNF nodes without an OST (within the same file system).
The count
field may be useful when creating a persistent file system since the job with the create_persistent
directive may only have a single compute node.
In general, scale
gives a simple way for users to get a filesystem that has performance consistent with their job size. count
is useful for times when a user wants full control of the file system layout.
"},{"location":"guides/storage-profiles/readme/#command-line-variables","title":"Command Line Variables","text":""},{"location":"guides/storage-profiles/readme/#pvcreate","title":"pvcreate","text":" $DEVICE
- expands to the /dev/<path>
value for one device that has been allocated
"},{"location":"guides/storage-profiles/readme/#vgcreate","title":"vgcreate","text":" $VG_NAME
- expands to a volume group name that is controlled by Rabbit software. $DEVICE_LIST
- expands to a list of space-separated /dev/<path>
devices. This list will contain the devices that were iterated over for the pvcreate step.
"},{"location":"guides/storage-profiles/readme/#lvcreate","title":"lvcreate","text":" $VG_NAME
- see vgcreate above. $LV_NAME
- expands to a logical volume name that is controlled by Rabbit software. $DEVICE_NUM
- expands to a number indicating the number of devices allocated for the volume group. $DEVICE1, $DEVICE2, ..., $DEVICEn
- each expands to one of the devices from the $DEVICE_LIST
above.
"},{"location":"guides/storage-profiles/readme/#xfs-mkfs","title":"XFS mkfs","text":" $DEVICE
- expands to the /dev/<path>
value for the logical volume that was created by the lvcreate step above.
"},{"location":"guides/storage-profiles/readme/#gfs2-mkfs","title":"GFS2 mkfs","text":" $DEVICE
- expands to the /dev/<path>
value for the logical volume that was created by the lvcreate step above. $CLUSTER_NAME
- expands to a cluster name that is controlled by Rabbit Software $LOCK_SPACE
- expands to a lock space key that is controlled by Rabbit Software. $PROTOCOL
- expands to a locking protocol that is controlled by Rabbit Software.
"},{"location":"guides/storage-profiles/readme/#zpool-create","title":"zpool create","text":" $DEVICE_LIST
- expands to a list of space-separated /dev/<path>
devices. This list will contain the devices that were allocated for this storage request. $POOL_NAME
- expands to a pool name that is controlled by Rabbit software. $DEVICE_NUM
- expands to a number indicating the number of devices allocated for this storage request. $DEVICE1, $DEVICE2, ..., $DEVICEn
- each expands to one of the devices from the $DEVICE_LIST
above.
"},{"location":"guides/storage-profiles/readme/#lustre-mkfs","title":"lustre mkfs","text":" $FS_NAME
- expands to the filesystem name that was passed to Rabbit software from the workflow's #DW line. $MGS_NID
- expands to the NID of the MGS. If the MGS was orchestrated by nnf-sos then an appropriate internal value will be used. $POOL_NAME
- see zpool create above. $VOL_NAME
- expands to the volume name that will be created. This value will be <pool_name>/<dataset>
, and is controlled by Rabbit software. $INDEX
- expands to the index value of the target and is controlled by Rabbit software.
"},{"location":"guides/system-storage/readme/","title":"System Storage","text":""},{"location":"guides/system-storage/readme/#background","title":"Background","text":"System storage allows an admin to configure Rabbit storage without a DWS workflow. This is useful for making storage that is outside the scope of any job. One use case for system storage is to create a pair of LVM VGs on the Rabbit nodes that can be used to work around an lvmlockd
bug. The lockspace for the VGs can be started on the compute nodes, holding the lvm_global
lock open while other Rabbit VG lockspaces are started and stopped.
"},{"location":"guides/system-storage/readme/#nnfsystemstorage-resource","title":"NnfSystemStorage Resource","text":"System storage is created through the NnfSystemStorage
resource. By default, system storage creates an allocation on all Rabbits in the system and exposes the storage to all computes. This behavior can be modified through different fields in the NnfSystemStorage
resource. A NnfSystemStorage
storage resource has the following fields in its Spec
section:
Field Required Default Value Notes SystemConfiguration
No Empty ObjectReference
to the SystemConfiguration
to use By default, the default
/default
SystemConfiguration
is used IncludeRabbits
No Empty A list of Rabbit node names Rather than use all the Rabbits in the SystemConfiguration
, only use the Rabbits contained in this list ExcludeRabbits
No Empty A list of Rabbit node names Use all the Rabbits in the SystemConfiguration
except those contained in this list. IncludeComputes
No Empty A list of compute node names Rather than use the SystemConfiguration
to determine which computes are attached to the Rabbit nodes being used, only use the compute nodes contained in this list ExcludeComputes
No Empty A list of compute node names Use the SystemConfiguration
to determine which computes are attached to the Rabbits being used, but omit the computes contained in this list ComputesTarget
Yes all
all
,even
,odd
,pattern
Only use certain compute nodes based on their index as determined from the SystemConfiguration
. all
uses all computes. even
uses computes with an even index. odd
uses computes with an odd index. pattern
uses computes with the indexes specified in Spec.ComputesPattern
ComputesPattern
No Empty A list of integers [0-15] If ComputesTarget
is pattern
, then the storage is made available on compute nodes with the indexes specified in this list. Capacity
Yes 1073741824
Integer Number of bytes to allocate per Rabbit Type
Yes raw
raw
, xfs
, gfs2
Type of file system to create on the Rabbit storage StorageProfile
Yes None ObjectReference
to an NnfStorageProfile
. This storage profile must be marked as pinned
MakeClientMounts
Yes false
Create ClientMount
resources to mount the storage on the compute nodes. If this is false
, then the devices are made available to the compute nodes without mounting the file system ClientMountPath
No None Path to mount the file system on the compute nodes NnfSystemResources
can be created in any namespace.
"},{"location":"guides/system-storage/readme/#example","title":"Example","text":"apiVersion: nnf.cray.hpe.com/v1alpha1\nkind: NnfSystemStorage\nmetadata:\n name: gfs2-systemstorage\n namespace: systemstorage\nspec:\n excludeRabbits:\n - \"rabbit-1\"\n - \"rabbit-9\"\n - \"rabbit-14\"\n excludeComputes:\n - \"compute-32\"\n - \"compute-49\"\n type: \"gfs2\"\n capacity: 10000000000\n computesTarget: \"pattern\"\n computesPattern:\n - 0\n - 1\n - 2\n - 3\n - 4\n - 5\n - 6\n - 7\n makeClientMounts: true\n clientMountPath: \"/mnt/nnf/gfs2\"\n storageProfile:\n name: gfs2-systemstorage\n namespace: systemstorage\n kind: NnfStorageProfile\n
"},{"location":"guides/system-storage/readme/#lvmlockd-workaround","title":"lvmlockd Workaround","text":"System storage can be used to workaround an lvmlockd
bug that occurs when trying to start the lvm_global
lockspace. The lvm_global
lockspace is started only when there is a volume group lockspace that is started. After the last volume group lockspace is stopped, then the lvm_global
lockspace is stopped as well. To prevent the lvm_global
lockspace from being started and stopped so often, a volume group is created on the Rabbits and shared with the computes. The compute nodes can start the volume group lockspace and leave it open.
The system storage can also be used to check whether the PCIe cables are attached correctly between the Rabbit and compute nodes. If the cables are incorrect, then the PCIe switch will make NVMe namespaces available to the wrong compute node. An incorrect cable can only result in compute nodes that have PCIe connections switched with the other compute node in its pair. By creating two system storages, one for compute nodes with an even index, and one for compute nodes with an odd index, the PCIe connection can be verified by checking that the correct system storage is visible on a compute node.
"},{"location":"guides/system-storage/readme/#example_1","title":"Example","text":"The following example resources show how to create two system storages to use for the lvmlockd
workaround. Each system storage creates a raw
allocation with a volume group but no logical volume. This is the minimum LVM set up needed to start a lockspace on the compute nodes. A NnfStorageProfile
is created for each of the system storages. The NnfStorageProfile
specifies a tag during the vgcreate
that is used to differentiate between the two VGs. These resources are created in the systemstorage
namespace, but they could be created in any namespace.
apiVersion: nnf.cray.hpe.com/v1alpha1\nkind: NnfStorageProfile\nmetadata:\n name: lvmlockd_even\n namespace: systemstorage\ndata:\n xfsStorage:\n capacityScalingFactor: \"1.0\"\n lustreStorage:\n capacityScalingFactor: \"1.0\"\n gfs2Storage:\n capacityScalingFactor: \"1.0\"\n default: false\n pinned: true\n rawStorage:\n capacityScalingFactor: \"1.0\"\n commandlines:\n pvCreate: $DEVICE\n pvRemove: $DEVICE\n sharedVg: true\n vgChange:\n lockStart: --lock-start $VG_NAME\n lockStop: --lock-stop $VG_NAME\n vgCreate: --shared --addtag lvmlockd_even $VG_NAME $DEVICE_LIST\n vgRemove: $VG_NAME\n---\napiVersion: nnf.cray.hpe.com/v1alpha1\nkind: NnfStorageProfile\nmetadata:\n name: lvmlockd_odd\n namespace: systemstorage\ndata:\n xfsStorage:\n capacityScalingFactor: \"1.0\"\n lustreStorage:\n capacityScalingFactor: \"1.0\"\n gfs2Storage:\n capacityScalingFactor: \"1.0\"\n default: false\n pinned: true\n rawStorage:\n capacityScalingFactor: \"1.0\"\n commandlines:\n pvCreate: $DEVICE\n pvRemove: $DEVICE\n sharedVg: true\n vgChange:\n lockStart: --lock-start $VG_NAME\n lockStop: --lock-stop $VG_NAME\n vgCreate: --shared --addtag lvmlockd_odd $VG_NAME $DEVICE_LIST\n vgRemove: $VG_NAME\n
Note that the NnfStorageProfile
resources are marked as default: false
and pinned: true
. This is required for NnfStorageProfiles
that are used for system storage. The commandLine
fields for LV commands are left empty so that no LV is created.
apiVersion: nnf.cray.hpe.com/v1alpha1\nkind: NnfSystemStorage\nmetadata:\n name: lvmlockd_even\n namespace: systemstorage\nspec:\n type: \"raw\"\n computesTarget: \"even\"\n makeClientMounts: false\n storageProfile:\n name: lvmlockd_even\n namespace: systemstorage\n kind: NnfStorageProfile\n---\napiVersion: nnf.cray.hpe.com/v1alpha1\nkind: NnfSystemStorage\nmetadata:\n name: lvmlockd_odd\n namespace: systemstorage\nspec:\n type: \"raw\"\n computesTarget: \"odd\"\n makeClientMounts: false\n storageProfile:\n name: lvmlockd_odd\n namespace: systemstorage\n kind: NnfStorageProfile\n
The two NnfSystemStorage
resources each target all of the Rabbits but a different set of compute nodes. This will result in each Rabbit having two VGs and each compute node having one VG.
After the NnfSystemStorage
resources are created, the Rabbit software will create the storage on the Rabbit nodes and make the LVM VG available to the correct compute nodes. At this point, the status.ready
field will be true
. If an error occurs, the .status.error
field will describe the error.
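The readiness of each resource can be checked directly (a sketch; resource names match the example above):
kubectl get nnfsystemstorage -n systemstorage lvmlockd_even -o jsonpath='{.status.ready}'\n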
"},{"location":"guides/user-containers/readme/","title":"NNF User Containers","text":"NNF User Containers are a mechanism to allow user-defined containerized applications to be run on Rabbit nodes with access to NNF ephemeral and persistent storage.
"},{"location":"guides/user-containers/readme/#overview","title":"Overview","text":"Container workflows are orchestrated through the use of two components: Container Profiles and Container Directives. A Container Profile defines the container to be executed. Most importantly, it allows you to specify which NNF storages are accessible within the container and which container image to run. The containers are executed on the NNF nodes that are allocated to your container workflow. These containers can be executed in either of two modes: Non-MPI and MPI.
For Non-MPI applications, the image and command are launched across all the targeted NNF Nodes in a uniform manner. This is useful in simple applications, where non-distributed behavior is desired.
For MPI applications, a single launcher container serves as the point of contact, responsible for distributing tasks to various worker containers. Each of the NNF nodes targeted by the workflow receives its corresponding worker container. The focus of this documentation will be on MPI applications.
To see a full working example before diving into these docs, see Putting It All Together.
"},{"location":"guides/user-containers/readme/#before-creating-a-container-workflow","title":"Before Creating a Container Workflow","text":"Before creating a workflow, a working NnfContainerProfile
must exist. This profile is referenced in the container directive supplied with the workflow.
"},{"location":"guides/user-containers/readme/#container-profiles","title":"Container Profiles","text":"The author of a containerized application will work with the administrator to define a pod specification template for the container and to create an appropriate NnfContainerProfile
resource for the container. The image and tag for the user's container will be specified in the profile.
The image must be available in a registry that is available to your system. This could be docker.io, ghcr.io, etc., or a private registry. Note that for a private registry, some additional setup is required. See here for more info.
The image itself has a few requirements. See here for more info on building images.
New NnfContainerProfile
resources may be created by copying one of the provided example profiles from the nnf-system
namespace . The examples may be found by listing them with kubectl
:
kubectl get nnfcontainerprofiles -n nnf-system\n
The next few subsections provide an overview of the primary components comprising an NnfContainerProfile
. However, it's important to note that while these sections cover the key aspects, they don't encompass every single detail. For an in-depth understanding of the capabilities offered by container profiles, we recommend referring to the following resources:
- Type definition for
NnfContainerProfile
- Sample for
NnfContainerProfile
- Online Examples for
NnfContainerProfile
(same as kubectl get
above)
"},{"location":"guides/user-containers/readme/#container-storages","title":"Container Storages","text":"The Storages
defined in the profile allow NNF filesystems to be made available inside of the container. These storages need to be referenced in the container workflow unless they are marked as optional.
There are three types of storages available to containers:
- local non-persistent storage (created via
#DW jobdw
directives) - persistent storage (created via
#DW create_persistent
directives) - global lustre storage (defined by
LustreFilesystems
)
For local and persistent storage, only GFS2 and Lustre filesystems are supported. Raw and XFS filesystems cannot be mounted more than once, so they cannot be mounted inside of a container while also being mounted on the NNF node itself.
For each storage in the profile, the name must follow these patterns (depending on the storage type):
DW_JOB_<storage_name>
DW_PERSISTENT_<storage_name>
DW_GLOBAL_<storage_name>
<storage_name>
is provided by the user and needs to be a name compatible with Linux environment variables (so underscores must be used, not dashes), since the storage mount directories are provided to the container via environment variables.
This storage name is used in container workflow directives to reference the NNF storage name that defines the filesystem. Find more info on that in Creating a Container Workflow.
Storages may be deemed as optional
in a profile. If a storage is not optional, the storage name must be set to the name of an NNF filesystem name in the container workflow.
For global lustre, there is an additional field for pvcMode
, which must match the mode that is configured in the LustreFilesystem
resource that represents the global lustre filesystem. This defaults to ReadWriteMany
.
Example:
storages:\n - name: DW_JOB_foo_local_storage\n optional: false\n - name: DW_PERSISTENT_foo_persistent_storage\n optional: true\n - name: DW_GLOBAL_foo_global_lustre\n optional: true\n pvcMode: ReadWriteMany\n
"},{"location":"guides/user-containers/readme/#container-spec","title":"Container Spec","text":"As mentioned earlier, container workflows can be categorized into two types: MPI and Non-MPI. It's essential to choose and define only one of these types within the container profile. Regardless of the type chosen, the data structure that implements the specification is equipped with two \"standard\" resources that are distinct from NNF custom resources.
For Non-MPI containers, the specification utilizes the spec
resource. This is the standard Kubernetes PodSpec
that outlines the desired configuration for the pod.
For MPI containers, mpiSpec
is used. This custom resource, available through MPIJobSpec
from mpi-operator
, serves as a facilitator for executing MPI applications across worker containers. This resource can be likened to a wrapper around a PodSpec
, but users need to define a PodSpec
for both Launcher and Worker containers.
See the MPIJobSpec
definition for more details on what can be configured for an MPI application.
It's important to bear in mind that the NNF Software is designed to override specific values within the MPIJobSpec
for ensuring the desired behavior in line with NNF software requirements. To prevent complications, it's advisable not to delve too deeply into the specification. A few illustrative examples of fields that are overridden by the NNF Software include:
- Replicas
- RunPolicy.BackoffLimit
- Worker/Launcher.RestartPolicy
- SSHAuthMountPath
By keeping these considerations in mind and refraining from extensive alterations to the specification, you can ensure a smoother integration with the NNF Software and mitigate any potential issues that may arise.
Please see the Sample and Examples listed above for more detail on container Specs.
"},{"location":"guides/user-containers/readme/#container-ports","title":"Container Ports","text":"Container Profiles allow for ports to be reserved for a container workflow. numPorts
can be used to specify the number of ports needed for a container workflow. The ports are opened on each targeted NNF node and are accessible outside of the cluster. Users must know how to contact the specific NNF node. It is recommend that DNS entries are made for this purpose.
In the workflow, the allocated port numbers are made available via the NNF_CONTAINER_PORTS
environment variable.
The workflow requests this number of ports from the NnfPortManager
, which is responsible for managing the ports allocated to container workflows. This resource can be inspected to see which ports are allocated.
Once a port is assigned to a workflow, that port number becomes unavailable for use by any other workflow until it is released.
Note
The SystemConfiguration
must be configured to allow for a range of ports, otherwise container workflows will fail in the Setup
state due to insufficient resources. See SystemConfiguration Setup.
"},{"location":"guides/user-containers/readme/#systemconfiguration-setup","title":"SystemConfiguration Setup","text":"In order for container workflows to request ports from the NnfPortManager
, the SystemConfiguration
must be configured for a range of ports:
kind: SystemConfiguration\nmetadata:\n name: default\n namespace: default\nspec:\n # Ports is the list of ports available for communication between nodes in the\n # system. Valid values are single integers, or a range of values of the form\n # \"START-END\" where START is an integer value that represents the start of a\n # port range and END is an integer value that represents the end of the port\n # range (inclusive).\n ports:\n - 4000-4999\n # PortsCooldownInSeconds is the number of seconds to wait before a port can be\n # reused. Defaults to 60 seconds (to match the typical value for the kernel's\n # TIME_WAIT). A value of 0 means the ports can be reused immediately.\n # Defaults to 60s if not set.\n portsCooldownInSeconds: 60\n
ports
is empty by default, and must be set by an administrator.
Multiple port ranges can be specified in this list, as well as single integers. This must be a safe port range that does not interfere with the ephemeral port range of the Linux kernel. The range should also account for the estimated number of simultaneous users that are running container workflows.
Once a container workflow is done, the port is released and the NnfPortManager
will not allow reuse of the port until the amount of time specified by portsCooldownInSeconds
has elapsed. Then the port can be reused by another container workflow.
"},{"location":"guides/user-containers/readme/#restricting-to-user-id-or-group-id","title":"Restricting To User ID or Group ID","text":"New NnfContainerProfile resources may be restricted to a specific user ID or group ID . When a data.userID
or data.groupID
is specified in the profile, only those Workflow resources having a matching user ID or group ID will be allowed to use that profile . If the profile specifies both of these IDs, then the Workflow resource must match both of them.
"},{"location":"guides/user-containers/readme/#creating-a-container-workflow","title":"Creating a Container Workflow","text":"The user's workflow will specify the name of the NnfContainerProfile
in a DW directive. If the custom profile is named red-rock-slushy
then it will be specified in the #DW container
directive with the profile
parameter.
#DW container profile=red-rock-slushy [...]\n
Furthermore, to set the container storages for the workflow, storage parameters must also be supplied in the workflow. This is done using the <storage_name>
(see Container Storages) and setting it to the name of a storage directive that defines an NNF filesystem. That storage directive must already exist as part of another workflow (e.g. persistent storage) or it can be supplied in the same workflow as the container. For global lustre, the LustreFilesystem
must exist that represents the global lustre filesystem.
In this example, we're creating a GFS2 filesystem to accompany the container directive. We're using the red-rock-slushy
profile which contains a non-optional storage called DW_JOB_local_storage
:
kind: NnfContainerProfile\nmetadata:\n name: red-rock-slushy\ndata:\n storages:\n - name: DW_JOB_local_storage\n optional: false\n template:\n mpiSpec:\n ...\n
The resulting container directive looks like this:
#DW jobdw name=my-gfs2 type=gfs2 capacity=100GB\"\n#DW container name=my-container profile=red-rock-slushy DW_JOB_local_storage=my-gfs2\n
Once the workflow progresses, this will create a 100GB GFS2 filesystem that is then mounted into the container upon creation. An environment variable called DW_JOB_local_storage
is made available inside of the container and provides the path to the mounted NNF GFS2 filesystem. An application running inside of the container can then use this variable to get to the filesystem mount directory. See here.
Multiple storages can be defined in the container directives. Only one container directive is allowed per workflow.
Note
GFS2 filesystems have special considerations since the mount directory contains directories for every compute node. See GFS2 Index Mounts for more info.
"},{"location":"guides/user-containers/readme/#targeting-nodes","title":"Targeting Nodes","text":"For container directives, compute nodes must be assigned to the workflow. The NNF software will trace the compute nodes back to their local NNF nodes and the containers will be executed on those NNF nodes. The act of assigning compute nodes to your container workflow instructs the NNF software to select the NNF nodes that run the containers.
For the jobdw
directive that is included above, the servers (i.e. NNF nodes) must also be assigned along with the computes.
"},{"location":"guides/user-containers/readme/#running-a-container-workflow","title":"Running a Container Workflow","text":"Once the workflow is created, the WLM progresses it through the following states. This is a quick overview of the container-related behavior that occurs:
- Proposal: Verify storages are provided according to the container profile.
- Setup: If applicable, request ports from NnfPortManager.
- DataIn: No container related activity.
- PreRun: Appropriate
MPIJob
or Job(s)
are created for the workflow. In turn, user containers are created and launched by Kubernetes. Containers are expected to start in this state. - PostRun: Once in PostRun, user containers are expected to complete (non-zero exit) successfully.
- DataOut: No container related activity.
- Teardown: Ports are released;
MPIJob
or Job(s)
are deleted, which in turn deletes the user containers.
The two main states of a container workflow (i.e. PreRun, PostRun) are discussed further in the following sections.
"},{"location":"guides/user-containers/readme/#prerun","title":"PreRun","text":"In PreRun, the containers are created and expected to start. Once the containers reach a non-initialization state (i.e. Running), the containers are considered to be started and the workflow can advance.
By default, containers are expected to start within 60 seconds. If not, the workflow reports an Error that the containers cannot be started. This value is configurable via the preRunTimeoutSeconds
field in the container profile.
To summarize the PreRun behavior:
- If the container starts successfully (running), transition to
Completed
status. - If the container fails to start, transition to the
Error
status. - If the container is initializing and has not started after
preRunTimeoutSeconds
seconds, terminate the container and transition to the Error
status.
"},{"location":"guides/user-containers/readme/#init-containers","title":"Init Containers","text":"The NNF Software injects Init Containers into the container specification to perform initialization tasks. These containers must run to completion before the main container can start.
These initialization tasks include:
- Ensuring the proper permissions (i.e. UID/GID) are available in the main container
- For MPI jobs, ensuring the launcher pod can contact each worker pod via DNS
"},{"location":"guides/user-containers/readme/#prerun-completed","title":"PreRun Completed","text":"Once PreRun has transitioned to Completed
status, the user container is now running and the WLM should initiate applications on the compute nodes. Utilizing container ports, the applications on the compute nodes can establish communication with the user containers, which are running on the local NNF node attached to the computes.
This communication allows for the compute node applications to drive certain behavior inside of the user container. For example, once the compute node application is complete, it can signal to the user container that it is time to perform cleanup or data migration action.
"},{"location":"guides/user-containers/readme/#postrun","title":"PostRun","text":"In PostRun, the containers are expected to exit cleanly with a zero exit code. If a container fails to exit cleanly, the Kubernetes software attempts a number of retries based on the configuration of the container profile. It continues to do this until the container exits successfully, or until the retryLimit
is hit - whichever occurs first. In the latter case, the workflow reports an Error.
Read up on the Failure Retries for more information on retries.
Furthermore, the container profile features a postRunTimeoutSeconds
field. If this timeout is reached before the container successfully exits, it triggers an Error
status. The timer for this timeout begins upon entry into the PostRun phase, allowing the containers the specified period to execute before the workflow enters an Error
status.
To recap the PostRun behavior:
- If the container exits successfully, transition to
Completed
status. - If the container exits unsuccessfully after
retryLimit
number of retries, transition to the Error
status. - If the container is running and has not exited after
postRunTimeoutSeconds
seconds, terminate the container and transition to the Error
status.
"},{"location":"guides/user-containers/readme/#failure-retries","title":"Failure Retries","text":"If a container fails (non-zero exit code), the Kubernetes software implements retries. The number of retries can be set via the retryLimit
field in the container profile. If a non-zero exit code is detected, the Kubernetes software creates a new instance of the pod and retries. The default number of retries for retryLimit
is set to 6, which is the default value for Kubernetes Jobs. This means that if the pod fails every single time, there will be 7 failed pods in total since it attempted 6 retries after the first failure.
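These settings live in the profile's data section (a sketch; the values are illustrative, and the defaults described in this section apply when they are omitted):
data:\n retryLimit: 6\n preRunTimeoutSeconds: 300\n postRunTimeoutSeconds: 300\n[...]\n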
To understand this behavior more, see Pod backoff failure policy in the Kubernetes documentation. This explains the retry (i.e. backoff) behavior in more detail.
It is important to note that due to the configuration of the MPIJob
and/or Job
that is created for User Containers, the container retries are immediate - there is no backoff timeout between retires. This is due to the NNF Software setting the RestartPolicy
to Never
, which causes a new pod to spin up after every failure rather than re-use (i.e. restart) the previously failed pod. This allows a user to see a complete history of the failed pod(s) and the logs can easily be obtained. See more on this at Handling Pod and container failures in the Kubernetes documentation.
"},{"location":"guides/user-containers/readme/#putting-it-all-together","title":"Putting it All Together","text":"See the NNF Container Example for a working example of how to run a simple MPI application inside of an NNF User Container and run it through a Container Workflow.
"},{"location":"guides/user-containers/readme/#reference","title":"Reference","text":""},{"location":"guides/user-containers/readme/#environment-variables","title":"Environment Variables","text":"Two sets of environment variables are available with container workflows: Container and Compute Node. The former are the variables that are available inside the user containers. The latter are the variables that are provided back to the DWS workflow, which in turn are collected by the WLM and provided to compute nodes. See the WLM documentation for more details.
"},{"location":"guides/user-containers/readme/#container-environment-variables","title":"Container Environment Variables","text":"These variables are provided for use inside the container. They can be used as part of the container command in the NNF Container Profile or within the container itself.
"},{"location":"guides/user-containers/readme/#storages","title":"Storages","text":"Each storage defined by a container profile and used in a container workflow results in a corresponding environment variable. This variable is used to hold the mount directory of the filesystem.
"},{"location":"guides/user-containers/readme/#gfs2-index-mounts","title":"GFS2 Index Mounts","text":"When using a GFS2 file system, each compute is allocated its own NNF volume. The NNF software mounts a collection of directories that are indexed (e.g. 0/
, 1/
, etc) to the compute nodes.
Application authors must be aware that their desired GFS2 mount-point really a collection of directories, one for each compute node. It is the responsibility of the author to understand the underlying filesystem mounted at the storage environment variable (e.g. $DW_JOB_my_gfs2_storage
).
Each compute node's application can leave breadcrumbs (e.g. hostnames) somewhere on the GFS2 filesystem mounted on the compute node. This can be used to identify the index mount directory to a compute node from the application running inside of the user container.
Here is an example of 3 compute nodes on an NNF node targeted in a GFS2 workflow:
$ ls $DW_JOB_my_gfs2_storage/*\n/mnt/nnf/3e92c060-ca0e-4ddb-905b-3d24137cbff4-0/0\n/mnt/nnf/3e92c060-ca0e-4ddb-905b-3d24137cbff4-0/1\n/mnt/nnf/3e92c060-ca0e-4ddb-905b-3d24137cbff4-0/2\n
Node positions are not absolute locations. The WLM could, in theory, select 6 physical compute nodes at physical location 1, 2, 3, 5, 8, 13, which would appear as directories /0
through /5
in the container mount path.
Additionally, not all container instances could see the same number of compute nodes in an indexed-mount scenario. If 17 compute nodes are required for the job, WLM may assign 16 nodes to run one NNF node, and 1 node to another NNF. The first NNF node would have 16 index directories, whereas the 2nd would only contain 1.
"},{"location":"guides/user-containers/readme/#hostnames-and-domains","title":"Hostnames and Domains","text":"Containers can contact one another via Kubernetes cluster networking. This functionality is provided by DNS. Environment variables are provided that allow a user to be able to piece together the FQDN so that the other containers can be contacted.
This example demonstrates an MPI container workflow, with two worker pods. Two worker pods means two pods/containers running on two NNF nodes.
"},{"location":"guides/user-containers/readme/#ports","title":"Ports","text":"See the NNF_CONTAINER_PORTS
section under Compute Node Environment Variables.
mpiuser@my-container-workflow-launcher:~$ env | grep NNF\nNNF_CONTAINER_HOSTNAMES=my-container-workflow-launcher my-container-workflow-worker-0 my-container-workflow-worker-1\nNNF_CONTAINER_DOMAIN=default.svc.cluster.local\nNNF_CONTAINER_SUBDOMAIN=my-container-workflow-worker\n
The container FQDN consists of the following: <HOSTNAME>.<SUBDOMAIN>.<DOMAIN>
. To contact the other worker container from worker 0, my-container-workflow-worker-1.my-container-workflow-worker.default.svc.cluster.local
would be used.
For MPI-based containers, an alternate way to retrieve this information is to look at the default hostfile
, provided by mpi-operator
. This file lists out all the worker nodes' FQDNs:
mpiuser@my-container-workflow-launcher:~$ cat /etc/mpi/hostfile\nmy-container-workflow-worker-0.my-container-workflow-worker.default.svc slots=1\nmy-container-workflow-worker-1.my-container-workflow-worker.default.svc slots=1\n
"},{"location":"guides/user-containers/readme/#compute-node-environment-variables","title":"Compute Node Environment Variables","text":"These environment variables are provided to the compute node via the WLM by way of the DWS Workflow. Note that these environment variables are consistent across all the compute nodes for a given workflow.
Note
It's important to note that the variables presented here pertain exclusively to User Container-related variables. This list does not encompass the entirety of NNF environment variables accessible to the compute node through the Workload Manager (WLM)
"},{"location":"guides/user-containers/readme/#nnf_container_ports","title":"NNF_CONTAINER_PORTS
","text":"If the NNF Container Profile requests container ports, then this environment variable provides the allocated ports for the container. This is a comma separated list of ports if multiple ports are requested.
This allows an application on the compute node to contact the user container running on its local NNF node via these port numbers. The compute node must have proper routing to the NNF node and needs a generic way of contacting it. It is suggested that a DNS entry be provided via /etc/hosts
, or similar.
For cases where one port is requested, the following can be used to contact the user container running on the NNF node (assuming a DNS entry for local-rabbit
is provided via /etc/hosts
).
local-rabbit:$(NNF_CONTAINER_PORTS)\n
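If multiple ports were requested, the variable holds a comma-separated list and the application (or a wrapper script) must pick the port it needs. A minimal sketch, reusing the hypothetical local-rabbit host entry from above; curl here is only a stand-in for whatever protocol the user container actually speaks:
# NNF_CONTAINER_PORTS may hold several ports; take the first one and contact the container with it.\nFIRST_PORT=$(echo \"$NNF_CONTAINER_PORTS\" | cut -d, -f1)\ncurl \"http://local-rabbit:$FIRST_PORT/\"\n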
"},{"location":"guides/user-containers/readme/#creating-images","title":"Creating Images","text":"For details, refer to the NNF Container Example Readme. However, in broad terms, an image that is capable of supporting MPI necessitates the following components:
- User Application: Your specific application
- Open MPI: Incorporate Open MPI to facilitate MPI operations
- SSH Server: Including an SSH server to enable communication
- nslookup: To validate Launcher/Worker container communication over the network
By ensuring the presence of these components, users can create an image that supports MPI operations on the NNF platform.
The nnf-mfu image serves as a suitable base image, encompassing all the essential components required for this purpose.
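As a rough sketch of how such an image might be assembled on top of the nnf-mfu base image (the application name my-mpi-app and the image tag are placeholders, not part of the NNF software):
# Hypothetical build: layer the author's application onto the nnf-mfu base image.\ncat > Dockerfile <<'EOF'\nFROM ghcr.io/nearnodeflash/nnf-mfu:latest\n# my-mpi-app is a placeholder for the author's MPI application binary.\nCOPY my-mpi-app /usr/local/bin/my-mpi-app\nEOF\ndocker build -t my-registry/my-mpi-app:v1.0 .\n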
"},{"location":"guides/user-containers/readme/#using-a-private-container-repository","title":"Using a Private Container Repository","text":"The user's containerized application may be placed in a private repository . In this case, the user must define an access token to be used with that repository, and that token must be made available to the Rabbit's Kubernetes environment so that it can pull that container from the private repository.
See Pull an Image from a Private Registry in the Kubernetes documentation for more information.
"},{"location":"guides/user-containers/readme/#about-the-example","title":"About the Example","text":"Each container registry will have its own way of letting its users create tokens to be used with their repositories . Docker Hub will be used for the private repository in this example, and the user's account on Docker Hub will be \"dean\".
"},{"location":"guides/user-containers/readme/#preparing-the-private-repository","title":"Preparing the Private Repository","text":"The user's application container is named \"red-rock-slushy\" . To store this container on Docker Hub the user must log into docker.com with their browser and click the \"Create repository\" button to create a repository named \"red-rock-slushy\", and the user must check the box that marks the repository as private . The repository's name will be displayed as \"dean/red-rock-slushy\" with a lock icon to show that it is private.
"},{"location":"guides/user-containers/readme/#create-and-push-a-container","title":"Create and Push a Container","text":"The user will create their container image in the usual ways, naming it for their private repository and tagging it according to its release.
Prior to pushing images to the repository, the user must complete a one-time login to the Docker registry using the docker command-line tool.
docker login -u dean\n
After completing the login, the user may then push their images to the repository.
docker push dean/red-rock-slushy:v1.0\n
"},{"location":"guides/user-containers/readme/#generate-a-read-only-token","title":"Generate a Read-Only Token","text":"A read-only token must be generated to allow Kubernetes to pull that container image from the private repository, because Kubernetes will not be running as that user . This token must be given to the administrator, who will use it to create a Kubernetes secret.
To log in and generate a read-only token to share with the administrator, the user must follow these steps:
- Visit docker.com and log in using their browser.
- Click on the username in the upper right corner.
- Select \"Account Settings\" and navigate to \"Security\".
- Click the \"New Access Token\" button to create a read-only token.
- Keep a copy of the generated token to share with the administrator.
"},{"location":"guides/user-containers/readme/#store-the-read-only-token-as-a-kubernetes-secret","title":"Store the Read-Only Token as a Kubernetes Secret","text":"The administrator must store the user's read-only token as a kubernetes secret . The secret must be placed in the default
namespace, which is the same namespace where the user containers will be run . The secret must include the user's Docker Hub username and the email address they have associated with that username . In this case, the secret will be named readonly-red-rock-slushy
.
USER_TOKEN=users-token-text\nUSER_NAME=dean\nUSER_EMAIL=dean@myco.com\nSECRET_NAME=readonly-red-rock-slushy\nkubectl create secret docker-registry $SECRET_NAME -n default --docker-server=\"https://index.docker.io/v1/\" --docker-username=$USER_NAME --docker-password=$USER_TOKEN --docker-email=$USER_EMAIL\n
"},{"location":"guides/user-containers/readme/#add-the-secret-to-the-nnfcontainerprofile","title":"Add the Secret to the NnfContainerProfile","text":"The administrator must add an imagePullSecrets
list to the NnfContainerProfile resource that was created for this user's containerized application.
The following profile shows the placement of the readonly-red-rock-slushy
secret which was created in the previous step, and points to the user's dean/red-rock-slushy:v1.0
container.
apiVersion: nnf.cray.hpe.com/v1alpha1\nkind: NnfContainerProfile\nmetadata:\n name: red-rock-slushy\n namespace: nnf-system\ndata:\n pinned: false\n retryLimit: 6\n spec:\n imagePullSecrets:\n - name: readonly-red-rock-slushy\n containers:\n - command:\n - /users-application\n image: dean/red-rock-slushy:v1.0\n name: red-rock-app\n storages:\n - name: DW_JOB_foo_local_storage\n optional: false\n - name: DW_PERSISTENT_foo_persistent_storage\n optional: true\n
Now any user can select this profile in their Workflow by specifying it in a #DW container
directive.
#DW container profile=red-rock-slushy [...]\n
"},{"location":"guides/user-containers/readme/#using-a-private-container-repository-for-mpi-application-containers","title":"Using a Private Container Repository for MPI Application Containers","text":"If our user's containerized application instead contains an MPI application, because perhaps it's a private copy of nnf-mfu, then the administrator would insert two imagePullSecrets
lists into the mpiSpec
of the NnfContainerProfile for the MPI launcher and the MPI worker.
apiVersion: nnf.cray.hpe.com/v1alpha1\nkind: NnfContainerProfile\nmetadata:\n name: mpi-red-rock-slushy\n namespace: nnf-system\ndata:\n mpiSpec:\n mpiImplementation: OpenMPI\n mpiReplicaSpecs:\n Launcher:\n template:\n spec:\n imagePullSecrets:\n - name: readonly-red-rock-slushy\n containers:\n - command:\n - mpirun\n - dcmp\n - $(DW_JOB_foo_local_storage)/0\n - $(DW_JOB_foo_local_storage)/1\n image: dean/red-rock-slushy:v2.0\n name: red-rock-launcher\n Worker:\n template:\n spec:\n imagePullSecrets:\n - name: readonly-red-rock-slushy\n containers:\n - image: dean/red-rock-slushy:v2.0\n name: red-rock-worker\n runPolicy:\n cleanPodPolicy: Running\n suspend: false\n slotsPerWorker: 1\n sshAuthMountPath: /root/.ssh\n pinned: false\n retryLimit: 6\n storages:\n - name: DW_JOB_foo_local_storage\n optional: false\n - name: DW_PERSISTENT_foo_persistent_storage\n optional: true\n
Now any user can select this profile in their Workflow by specifying it in a #DW container
directive.
#DW container profile=mpi-red-rock-slushy [...]\n
"},{"location":"guides/user-interactions/readme/","title":"Rabbit User Interactions","text":""},{"location":"guides/user-interactions/readme/#overview","title":"Overview","text":"A user may include one or more Data Workflow directives in their job script to request Rabbit services. Directives take the form #DW [command] [command args]
, and are passed from the workload manager to the Rabbit software for processing. The directives can be used to allocate Rabbit file systems, copy files, and run user containers on the Rabbit nodes.
Once the job is running on compute nodes, the application can find access to Rabbit specific resources through a set of environment variables that provide mount and network access information.
"},{"location":"guides/user-interactions/readme/#commands","title":"Commands","text":""},{"location":"guides/user-interactions/readme/#jobdw","title":"jobdw","text":"The jobdw
directive command tells the Rabbit software to create a file system on the Rabbit hardware for the lifetime of the user's job. At the end of the job, any data that is not moved off of the file system either by the application or through a copy_out
directive will be lost. Multiple jobdw
directives can be listed in the same job script.
"},{"location":"guides/user-interactions/readme/#command-arguments","title":"Command Arguments","text":"Argument Required Value Notes type
Yes raw
, xfs
, gfs2
, lustre
Type defines how the storage should be formatted. For Lustre file systems, a single file system is created that is mounted by all computes in the job. For raw, xfs, and GFS2 storage, a separate file system is allocated for each compute node. capacity
Yes Allocation size with units. 1TiB
, 100GB
, etc. Capacity interpretation varies by storage type. For Lustre file systems, capacity is the aggregate OST capacity. For raw, xfs, and GFS2 storage, capacity is the capacity of the file system for a single compute node. Capacity suffixes are: KB
, KiB
, MB
, MiB
, GB
, GiB
, TB
, TiB
name
Yes String including numbers and '-' This is a name for the storage allocation that is unique within a job profile
No Profile name This specifies which profile to use when allocating storage. Profiles include mkfs
and mount
arguments, file system layout, and many other options. Profiles are created by admins. When no profile is specified, the default profile is used. More information about storage profiles can be found in the Storage Profiles guide. requires
No copy-offload
Using this option results in the copy offload daemon running on the compute nodes. This is for users that want to initiate data movement to or from the Rabbit storage from within their application. See the Required Daemons section of the Directive Breakdown guide for a description of how the user may request the daemon, in the case where the WLM will run it only on demand."},{"location":"guides/user-interactions/readme/#examples","title":"Examples","text":"#DW jobdw type=xfs capacity=10GiB name=scratch\n
This directive results in a 10GiB xfs file system created for each compute node in the job using the default storage profile.
#DW jobdw type=lustre capacity=1TB name=dw-temp profile=high-metadata\n
This directive results in a single 1TB Lustre file system being created that can be accessed from all the compute nodes in the job. It is using a storage profile that an admin created to give high Lustre metadata performance.
#DW jobdw type=gfs2 capacity=50GB name=checkpoint requires=copy-offload\n
This directive results in a 50GB GFS2 file system created for each compute node in the job using the default storage profile. The copy-offload daemon is started on the compute node to allow the application to request the Rabbit to move data from the GFS2 file system to another file system while the application is running using the Copy Offload API.
"},{"location":"guides/user-interactions/readme/#create_persistent","title":"create_persistent","text":"The create_persistent
command results in a storage allocation on the Rabbit nodes that lasts beyond the lifetime of the job. This is useful for creating a file system that can share data between jobs. Only a single create_persistent
directive is allowed in a job, and it cannot be in the same job as a destroy_persistent
directive. See persistentdw to utilize the storage in a job.
"},{"location":"guides/user-interactions/readme/#command-arguments_1","title":"Command Arguments","text":"Argument Required Value Notes type
Yes raw
, xfs
, gfs2
, lustre
Type defines how the storage should be formatted. For Lustre file systems, a single file system is created. For raw, xfs, and GFS2 storage, a separate file system is allocated for each compute node in the job. capacity
Yes Allocation size with units. 1TiB
, 100GB
, etc. Capacity interpretation varies by storage type. For Lustre file systems, capacity is the aggregate OST capacity. For raw, xfs, and GFS2 storage, capacity is the capacity of the file system for a single compute node. Capacity suffixes are: KB
, KiB
, MB
, MiB
, GB
, GiB
, TB
, TiB
name
Yes Lowercase string including numbers and '-' This is a name for the storage allocation that is unique within the system profile
No Profile name This specifies which profile to use when allocating storage. Profiles include mkfs
and mount
arguments, file system layout, and many other options. Profiles are created by admins. When no profile is specified, the default profile is used. The profile used when creating the persistent storage allocation is the same profile used by jobs that use the persistent storage. More information about storage profiles can be found in the Storage Profiles guide."},{"location":"guides/user-interactions/readme/#examples_1","title":"Examples","text":"#DW create_persistent type=xfs capacity=100GiB name=scratch\n
This directive results in a 100GiB xfs file system created for each compute node in the job using the default storage profile. Since xfs file systems are not network accessible, subsequent jobs that want to use the file system must have the same number of compute nodes, and be scheduled on compute nodes with access to the correct Rabbit nodes. This means the job with the create_persistent
directive must schedule the desired number of compute nodes even if no application is run on the compute nodes as part of the job.
#DW create_persistent type=lustre capacity=10TiB name=shared-data profile=read-only\n
This directive results in a single 10TiB Lustre file system being created that can be accessed later by any compute nodes in the system. Multiple jobs can access a Rabbit Lustre file system at the same time. This job can be scheduled with a single compute node (or zero compute nodes if the WLM allows), without any limitations on compute node counts for subsequent jobs using the persistent Lustre file system.
"},{"location":"guides/user-interactions/readme/#destroy_persistent","title":"destroy_persistent","text":"The destroy_persistent
command will delete persistent storage that was allocated by a corresponding create_persistent
. If the persistent storage is currently in use by a job, then the job containing the destroy_persistent
command will fail. Only a single destroy_persistent
directive is allowed in a job, and it cannot be in the same job as a create_persistent
directive.
"},{"location":"guides/user-interactions/readme/#command-arguments_2","title":"Command Arguments","text":"Argument Required Value Notes name
Yes Lowercase string including numbers and '-' This is a name for the persistent storage allocation that will be destroyed"},{"location":"guides/user-interactions/readme/#examples_2","title":"Examples","text":"#DW destroy_persistent name=shared-data\n
This directive will delete the persistent storage allocation with the name shared-data
"},{"location":"guides/user-interactions/readme/#persistentdw","title":"persistentdw","text":"The persistentdw
command makes an existing persistent storage allocation available to a job. The persistent storage must already be created from a create_persistent
command in a different job script. Multiple persistentdw
commands can be used in the same job script to request access to multiple persistent allocations.
Persistent Lustre file systems can be accessed from any compute nodes in the system, and the compute node count for the job can vary as needed. Multiple jobs can access a persistent Lustre file system concurrently if desired. Raw, xfs, and GFS2 file systems can only be accessed by compute nodes that have a physical connection to the Rabbits hosting the storage, and jobs accessing these storage types must have the same compute node count as the job that made the persistent storage.
"},{"location":"guides/user-interactions/readme/#command-arguments_3","title":"Command Arguments","text":"Argument Required Value Notes name
Yes Lowercase string including numbers and '-' This is a name for the persistent storage that will be accessed requires
No copy-offload
Using this option results in the copy offload daemon running on the compute nodes. This is for users that want to initiate data movement to or from the Rabbit storage from within their application. See the Required Daemons section of the Directive Breakdown guide for a description of how the user may request the daemon, in the case where the WLM will run it only on demand."},{"location":"guides/user-interactions/readme/#examples_3","title":"Examples","text":"#DW persistentdw name=shared-data requires=copy-offload\n
This directive will cause the shared-data
persistent storage allocation to be mounted onto the compute nodes for the job application to use. The copy-offload daemon will be started on the compute nodes so the application can request data movement during the application run.
"},{"location":"guides/user-interactions/readme/#copy_incopy_out","title":"copy_in/copy_out","text":"The copy_in
and copy_out
directives are used to move data to and from the storage allocations on Rabbit nodes. The copy_in
directive requests that data be moved into the Rabbit file system before application launch, and the copy_out
directive requests data to be moved off of the Rabbit file system after application exit. This is different from data-movement that is requested through the copy-offload API, which occurs during application runtime. Multiple copy_in
and copy_out
directives can be included in the same job script. More information about data movement can be found in the Data Movement documentation.
"},{"location":"guides/user-interactions/readme/#command-arguments_4","title":"Command Arguments","text":"Argument Required Value Notes source
Yes [path]
, $DW_JOB_[name]/[path]
, $DW_PERSISTENT_[name]/[path]
[name]
is the name of the Rabbit persistent or job storage as specified in the name
argument of the jobdw
or persistentdw
directive. Any '-'
in the name from the jobdw
or persistentdw
directive should be changed to a '_'
in the copy_in
and copy_out
directive. destination
Yes [path]
, $DW_JOB_[name]/[path]
, $DW_PERSISTENT_[name]/[path]
[name]
is the name of the Rabbit persistent or job storage as specified in the name
argument of the jobdw
or persistentdw
directive. Any '-'
in the name from the jobdw
or persistentdw
directive should be changed to a '_'
in the copy_in
and copy_out
directive. profile
No Profile name This specifies which profile to use when copying data. Profiles specify the copy command to use, MPI arguments, and how output gets logged. If no profile is specified then the default profile is used. Profiles are created by an admin."},{"location":"guides/user-interactions/readme/#examples_4","title":"Examples","text":"#DW jobdw type=xfs capacity=10GiB name=fast-storage\n#DW copy_in source=/lus/backup/johndoe/important_data destination=$DW_JOB_fast_storage/data\n
This set of directives creates an xfs file system on the Rabbits for each compute node in the job, and then moves data from /lus/backup/johndoe/important_data
to each of the xfs file systems. /lus/backup
must be set up in the Rabbit software as a Global Lustre file system by an admin. The copy takes place before the application is launched on the compute nodes.
#DW persistentdw name=shared-data1\n#DW persistentdw name=shared-data2\n\n#DW copy_out source=$DW_PERSISTENT_shared_data1/a destination=$DW_PERSISTENT_shared_data2/a profile=no-xattr\n#DW copy_out source=$DW_PERSISTENT_shared_data1/b destination=$DW_PERSISTENT_shared_data2/b profile=no-xattr\n
This set of directives copies two directories from one persistent storage allocation to another persistent storage allocation using the no-xattr
profile to avoid copying xattrs. This data movement occurs after the job application exits on the compute nodes, and the two copies do not occur in a deterministic order.
#DW persistentdw name=shared-data\n#DW jobdw type=lustre capacity=1TiB name=fast-storage profile=high-metadata\n\n#DW copy_in source=/lus/shared/johndoe/shared-libraries destination=$DW_JOB_fast_storage/libraries\n#DW copy_in source=$DW_PERSISTENT_shared_data/ destination=$DW_JOB_fast_storage/data\n\n#DW copy_out source=$DW_JOB_fast_storage/data destination=/lus/backup/johndoe/very_important_data profile=no-xattr\n
This set of directives makes use of a persistent storage allocation and a job storage allocation. There are two copy_in
directives, one that copies data from the global lustre file system to the job allocation, and another that copies data from the persistent allocation to the job allocation. These copies do not occur in a deterministic order. The copy_out
directive occurs after the application has exited, and copies data from the Rabbit job storage to a global lustre file system.
"},{"location":"guides/user-interactions/readme/#container","title":"container","text":"The container
directive is used to launch user containers on the Rabbit nodes. The containers have access to jobdw
, persistentdw
, or global Lustre storage as specified in the container
directive. More documentation for user containers can be found in the User Containers guide. Only a single container
directive is allowed in a job.
"},{"location":"guides/user-interactions/readme/#command-arguments_5","title":"Command Arguments","text":"Argument Required Value Notes name
Yes Lowercase string including numbers and '-' This is a name for the container instance that is unique within a job profile
Yes Profile name This specifies which container profile to use. The container profile contains information about which container to run, which file system types to expect, which network ports are needed, and many other options. An admin is responsible for creating the container profiles. DW_JOB_[expected]
No jobdw
storage allocation name
The container profile will list jobdw
file systems that the container requires. [expected]
is the name as specified in the container profile DW_PERSISTENT_[expected]
No persistentdw
storage allocation name
The container profile will list persistentdw
file systems that the container requires. [expected]
is the name as specified in the container profile DW_GLOBAL_[expected]
No Global lustre path The container profile will list global Lustre file systems that the container requires. [expected]
is the name as specified in the container profile"},{"location":"guides/user-interactions/readme/#examples_5","title":"Examples","text":"#DW jobdw type=xfs capacity=10GiB name=fast-storage\n#DW container name=backup profile=automatic-backup DW_JOB_source=fast-storage DW_GLOBAL_destination=/lus/backup/johndoe\n
These directives create an xfs Rabbit job allocation and specify a container that should run on the Rabbit nodes. The container profile specified two file systems that the container needs, DW_JOB_source
and DW_GLOBAL_destination
. DW_JOB_source
requires a jobdw
file system and DW_GLOBAL_destination
requires a global Lustre file system.
"},{"location":"guides/user-interactions/readme/#environment-variables","title":"Environment Variables","text":"The WLM makes a set of environment variables available to the job application running on the compute nodes that provide Rabbit specific information. These environment variables are used to find the mount location of Rabbit file systems and port numbers for user containers.
Environment Variable Value Notes DW_JOB_[name]
Mount path of a jobdw
file system [name]
is from the name
argument in the jobdw
directive. Any '-'
characters in the name
will be converted to '_'
in the environment variable. There will be one of these environment variables per jobdw
directive in the job. DW_PERSISTENT_[name]
Mount path of a persistentdw
file system [name]
is from the name
argument in the persistentdw
directive. Any '-'
characters in the name
will be converted to '_'
in the environment variable. There will be one of these environment variables per persistentdw
directive in the job. NNF_CONTAINER_PORTS
Comma separated list of ports These ports are used together with the IP address of the local Rabbit to communicate with a user container specified by a container
directive. More information can be found in the User Containers guide."},{"location":"repo-guides/readme/","title":"Repo Guides","text":""},{"location":"repo-guides/readme/#management","title":"Management","text":""},{"location":"repo-guides/release-nnf-sw/readme/","title":"Releasing NNF Software","text":""},{"location":"repo-guides/release-nnf-sw/readme/#nnf-software-overview","title":"NNF Software Overview","text":"The following repositories comprise the NNF Software and each have their own versions. There is a hierarchy, since nnf-deploy
packages the individual components together using submodules.
Each component under nnf-deploy
needs to be released first, then nnf-deploy
can be updated to point to those release versions, then nnf-deploy
itself can be updated and released.
The documentation repo (NearNodeFlash/NearNodeFlash.github.io) is released separately and is not part of nnf-deploy
, but it should match the version number of nnf-deploy
. Release this like the other components.
nnf-ec is vendored in as part of nnf-sos
and does not need to be released separately.
"},{"location":"repo-guides/release-nnf-sw/readme/#primer","title":"Primer","text":"This document is based on the process set forth by the DataWorkflowServices Release Process. Please read that as a background for this document before going any further.
"},{"location":"repo-guides/release-nnf-sw/readme/#requirements","title":"Requirements","text":"To create tags and releases, you will need maintainer or admin rights on the repos.
"},{"location":"repo-guides/release-nnf-sw/readme/#release-each-component-in-nnf-deploy","title":"Release Each Component In nnf-deploy
","text":"You'll first need to create releases for each component contained in nnf-deploy
. This section describes that process.
Each release branch needs to be updated with what is on master. To do that, we'll need the latest copy of master, and it will ultimately be merged to the releases/v0
branch via a Pull Request. Once merged, an annotated tag is created and then a release.
Each component has its own version number that needs to be incremented. Make sure you change the version numbers in the commands below to match the new version for the component. The v0.0.3
is just an example.
-
Ensure your branches are up to date:
git checkout master\ngit pull\ngit checkout releases/v0\ngit pull\n
-
Create a branch to merge into the release branch:
git checkout -b release-v0.0.3\n
-
Merge in the updates from the master
branch. There should not be any conflicts, but it's not unheard of. Tread carefully if there are conflicts.
git merge master\n
-
Verify that there are no differences between your branch and the master branch:
git diff master\n
If there are any differences, they must be trivial. Some READMEs may have extra lines at the end.
-
Perform repo-specific updates:
- For
lustre-csi-driver
, lustre-fs-operator
, dws
, nnf-sos
, and nnf-dm
there are additional files that need to track the version number as well, which allow them to be installed with kubectl apply -k
.
Repo Update nnf-mfu
The new version of nnf-mfu
is referenced by the NNFMFU
variable in several places:nnf-sos
1. Makefile
replace NNFMFU
with nnf-mfu's
tag.nnf-dm
1. In Dockerfile
and Makefile
, replace NNFMFU_VERSION
with the new version.2. In config/manager/kustomization.yaml
, replace nnf-mfu
's newTag: <X.Y.Z>.
nnf-deploy
1. In config/repositories.yaml
replace NNFMFU_VERSION
with the new version. lustre-fs-operator
update config/manager/kustomization.yaml
with the correct version.nnf-deploy
1. In config/repositories.yaml
replace the lustre-fs-operator version. dws
update config/manager/kustomization.yaml
with the correct version. nnf-sos
update config/manager/kustomization.yaml
with the correct version. nnf-dm
update config/manager/kustomization.yaml
with the correct version. lustre-csi-driver
update deploy/kubernetes/base/kustomization.yaml
and charts/lustre-csi-driver/values.yaml
with the correct version.nnf-deploy
1. In config/repositories.yaml
replace the lustre-csi-driver version. -
Target the releases/v0
branch with a Pull Request from your branch. When merging the Pull Request, you must use a Merge Commit.
Note
Do not Rebase or Squash! Those actions remove the records that Git uses to determine which commits have been merged, and then when the next release is created Git will treat everything like a conflict. Additionally, this will cause auto-generated release notes to include the previous release.
-
Once merged, update the release branch locally and create an annotated tag. Each repo has a workflow job named create_release
that will create a release automatically when the new tag is pushed.
git checkout releases/v0\ngit pull\ngit tag -a v0.0.3 -m \"Release v0.0.3\"\ngit push origin --tags\n
-
GOTO Step 1 and repeat this process for each remaining component.
"},{"location":"repo-guides/release-nnf-sw/readme/#release-nnf-deploy","title":"Release nnf-deploy
","text":"Once the individual components are released, we need to update the submodules in nnf-deploy's
master
branch before we create the release branch. This ensures that everything is current on master
for nnf-deploy
.
-
Update the submodules for nnf-deploy
on master:
cd nnf-deploy\ngit checkout master\ngit pull\ngit submodule foreach git checkout master\ngit submodule foreach git pull\n
-
Create a branch to capture the submodule changes for the PR to master
git checkout -b update-submodules\n
-
Commit the changes and open a Pull Request against the master
branch.
-
Once merged, follow steps 1-3 from the previous section to create a release branch off of releases/v0
and update it with changes from master
.
-
There will be conflicts for the submodules after step 3. This is expected. Update the submodules to the new tags and then commit the changes. If each tag was committed properly, the following command can do this for you:
git submodule foreach 'git checkout `git describe --match=\"v*\" HEAD`'\n
-
Add each submodule to the commit with git add
.
-
Verify that each submodule is now at the proper tagged version.
git submodule\n
-
Update config/repositories.yaml
with the referenced versions for:
lustre-csi-driver
lustre-fs-operator
nnf-mfu
(Search for NNFMFU_VERSION)
-
Tidy and make nnf-deploy
to avoid embarrassment.
go mod tidy\nmake\n
-
Do another git add
for any changes, particularly go.mod
and/or go.sum
.
-
Verify that git status
is happy with nnf-deploy
and then finalize the merge from master by with a git commit
.
-
Follow steps 6-7 from the previous section to finalize the release of nnf-deploy
.
"},{"location":"repo-guides/release-nnf-sw/readme/#release-nearnodeflashgithubio","title":"Release NearNodeFlash.github.io
","text":"Please review and update the documentation for changes you may have made.
After nnf-deploy has a release tag, you may release the documentation. Use the same steps found above in \"Release Each Component\". Note that the default branch for this repo is \"main\" instead of \"master\".
Give this release a tag that matches the nnf-deploy release, to show that they go together. Create the release by using the \"Create release\" or \"Draft a new release\" button in the GUI, or by using the gh release create
CLI command. Whether using the GUI or the CLI, mark the release as \"latest\" and select the appropriate option to generate release notes.
Wait for the mike
tool in .github/workflow/release.yaml
to finish building the new doc. You can check its status by going to the gh-pages
branch in the repo. When you visit the release at https://nearnodeflash.github.io, you should see the new release in the drop-down menu and the new release should be the default display.
The software is now released!
"},{"location":"repo-guides/release-nnf-sw/readme/#clone-a-release","title":"Clone a release","text":"The follow commands clone release v0.0.7
into nnf-deploy-v0.0.7
export NNF_VERSION=v0.0.7\n\ngit clone --recurse-submodules git@github.com:NearNodeFlash/nnf-deploy nnf-deploy-$NNF_VERSION\ncd nnf-deploy-$NNF_VERSION\ngit -c advice.detachedHead=false checkout $NNF_VERSION --recurse-submodules\n\ngit submodule status\n
"},{"location":"rfcs/","title":"Request for Comment","text":" -
Rabbit Request For Comment Process - Published
-
Rabbit Storage For Containerized Applications - Published
"},{"location":"rfcs/0001/readme/","title":"Rabbit Request For Comment Process","text":"Rabbit software must be designed in close collaboration with our end-users. Part of this process involves open discussion in the form of Request For Comment (RFC) documents. The remainder of this document presents the RFC process for Rabbit.
"},{"location":"rfcs/0001/readme/#history-philosophy","title":"History & Philosophy","text":"NNF RFC documents are modeled after the long history of IETF RFC documents that describe the internet. The philosophy is captured best in RFC 3
The content of a [...] note may be any thought, suggestion, etc. related to the HOST software or other aspect of the network. Notes are encouraged to be timely rather than polished. Philosophical positions without examples or other specifics, specific suggestions or implementation techniques without introductory or background explication, and explicit questions without any attempted answers are all acceptable. The minimum length for a [...] note is one sentence.
These standards (or lack of them) are stated explicitly for two reasons. First, there is a tendency to view a written statement as ipso facto authoritative, and we hope to promote the exchange and discussion of considerably less than authoritative ideas. Second, there is a natural hesitancy to publish something unpolished, and we hope to ease this inhibition.
"},{"location":"rfcs/0001/readme/#when-to-create-an-rfc","title":"When to Create an RFC","text":"New features, improvements, and other tasks that need to source feedback from multiple sources are to be written as Request For Comment (RFC) documents.
"},{"location":"rfcs/0001/readme/#metadata","title":"Metadata","text":"At the start of each RFC, there must include a short metadata block that contains information useful for filtering and sorting existing documents. This markdown is not visible inside the document.
---\nauthors: John Doe <john.doe@company.com>, Jane Doe <jane.doe@company.com>\nstate: prediscussion|ideation|discussion|published|committed|abandoned\ndiscussion: (link to PR, if available)\n----\n
"},{"location":"rfcs/0001/readme/#creation","title":"Creation","text":"An RFC should be created at the next freely available 4-digit index the GitHub RFC folder. Create a folder for your RFC and write your RFC document as readme.md
using standard Markdown. Include additional documents or images in the folder if needed.
Add an entry to /docs/rfcs/index.md
Add an entry to /mkdocs.yml
in the nav[RFCs]
section
"},{"location":"rfcs/0001/readme/#push","title":"Push","text":"Push your changes to your RFC branch
git add --all\ngit commit -s -m \"[####]: Your Request For Comment Document\"\ngit push origin ####\n
"},{"location":"rfcs/0001/readme/#pull-request","title":"Pull Request","text":"Submit a PR for your branch. This will open your RFC to comments. Add those individuals who are interested in your RFC as reviewers.
"},{"location":"rfcs/0001/readme/#merge","title":"Merge","text":"Once consensus has been reached on your RFC, merge to main origin.
"},{"location":"rfcs/0002/readme/","title":"Rabbit storage for containerized applications","text":"Note
This RFC contains outdated information. For the most up-to-date details, please refer to the User Containers documentation.
For Rabbit to provide storage to a containerized application there needs to be some mechanism. The remainder of this RFC proposes that mechanism.
"},{"location":"rfcs/0002/readme/#actors","title":"Actors","text":"There are several actors involved:
- The AUTHOR of the containerized application
- The ADMINISTRATOR who works with the author to determine the application requirements for execution
- The USER who intends to use the application using the 'container' directive in their job specification
- The RABBIT software that interprets the #DWs and starts the container during execution of the job
There are multiple relationships between the actors:
- AUTHOR to ADMINISTRATOR: The author tells the administrator how their application is executed and the NNF storage requirements.
- Between the AUTHOR and USER: The application expects certain storage, and the #DW must meet those expectations.
- ADMINISTRATOR to RABBIT: Admin tells Rabbit how to run the containerized application with the required storage.
- Between USER and RABBIT: User provides the #DW container directive in the job specification. Rabbit validates and interprets the directive.
"},{"location":"rfcs/0002/readme/#proposal","title":"Proposal","text":"The proposal below outlines the high level behavior of running containers in a workflow:
- The AUTHOR writes their application expecting NNF Storage at specific locations. For each storage requirement, they define:
- a unique name for the storage which can be referenced in the 'container' directive
- the required mount path or mount path prefix
- other constraints or storage requirements (e.g. minimum capacity)
- The AUTHOR works with the ADMINISTRATOR to define:
- a unique name for the program to be referred by USER
- the pod template or MPI Job specification for executing their program
- the NNF storage requirements described above.
- The ADMINISTRATOR creates a corresponding NNF Container Profile Kubernetes custom resource with the necessary NNF storage requirements and pod specification as described by the AUTHOR
- The USER who desires to use the application works with the AUTHOR and the related NNF Container Profile to understand the storage requirements
- The USER submits a WLM job with the #DW container directive variables populated
- WLM runs the workflow and drives it through the following stages...
Proposal
: RABBIT validates the #DW container directive by comparing the supplied values to those listed in the NNF Container Profile. If the workflow fails to meet the requirements, the job fails PreRun
: RABBIT software: - duplicates the pod template specification from the Container Profile and patches the necessary Volumes and the config map. The spec is used as the basis for starting the necessary pods and containers
- creates a config map reflecting the storage requirements and any runtime parameters; this is provided to the container at the volume mount named
nnf-config
, if specified
- The containerized application(s) executes. The expected mounts are available per the requirements and celebration occurs. The pods continue to run until:
- a pod completes successfully (any failed pods will be retried)
- the max number of pod retries is hit (indicating failure on all retry attempts)
- Note: retry limit is non-optional per Kubernetes configuration
- If retries are not desired, this number could be set to 0 to disable any retry attempts
PostRun
: RABBIT software: - marks the stage as
Ready
if the pods have all completed successfully. This includes a successful retry after preceding failures - starts a timer for any running pods. Once the timeout is hit, the pods will be killed and the workflow will indicate failure
- leaves all pods around for log inspection
"},{"location":"rfcs/0002/readme/#container-assignment-to-rabbit-nodes","title":"Container Assignment to Rabbit Nodes","text":"During Proposal
, the USER must assign compute nodes for the container workflow. The assigned compute nodes determine which Rabbit nodes run the containers.
"},{"location":"rfcs/0002/readme/#container-definition","title":"Container Definition","text":"Containers can be launched in two ways:
- MPI Jobs
- Non-MPI Jobs
MPI Jobs are launched using mpi-operator
. This uses a launcher/worker model. The launcher pod is responsible for running the mpirun
command that will target the worker pods to run the MPI application. The launcher will run on the first targeted NNF node and the workers will run on each of the targeted NNF nodes.
For Non-MPI jobs, mpi-operator
is not used. This model runs the same application on each of the targeted NNF nodes.
The NNF Container Profile allows a user to pick one of these methods. Each method is defined in similar, but different fashions. Since MPI Jobs use mpi-operator
, the MPIJobSpec
is used to define the container(s). For Non-MPI Jobs a PodSpec
is used to define the container(s).
An example of an MPI Job is below. The data.mpiSpec
field is defined:
kind: NnfContainerProfile\napiVersion: nnf.cray.hpe.com/v1alpha1\ndata:\n mpiSpec:\n mpiReplicaSpecs:\n Launcher:\n template:\n spec:\n containers:\n - command:\n - mpirun\n - dcmp\n - $(DW_JOB_foo_local_storage)/0\n - $(DW_JOB_foo_local_storage)/1\n image: ghcr.io/nearnodeflash/nnf-mfu:latest\n name: example-mpi\n Worker:\n template:\n spec:\n containers:\n - image: ghcr.io/nearnodeflash/nnf-mfu:latest\n name: example-mpi\n slotsPerWorker: 1\n...\n
An example of a Non-MPI Job is below. The data.spec
field is defined:
kind: NnfContainerProfile\napiVersion: nnf.cray.hpe.com/v1alpha1\ndata:\n spec:\n containers:\n - command:\n - /bin/sh\n - -c\n - while true; do date && sleep 5; done\n image: alpine:latest\n name: example-forever\n...\n
In both cases, the spec
is used as a starting point to define the containers. NNF software supplements the specification to add functionality (e.g. mounting #DW storages). In other words, what you see here will not be the final spec for the container that ends up running as part of the container workflow.
"},{"location":"rfcs/0002/readme/#security","title":"Security","text":"The workflow's UID and GID are used to run the container application and for mounting the specified fileystems in the container. Kubernetes allows for a way to define permissions for a container using a Security Context.
mpirun
uses ssh
to communicate with the worker nodes. ssh
requires that UID is assigned to a username. Since the UID/GID are dynamic values from the workflow, work must be done to the container's /etc/passwd
to map the UID/GID to a username. An InitContainer
is used to modify /etc/passwd
and mount it into the container.
"},{"location":"rfcs/0002/readme/#communication-details","title":"Communication Details","text":"The following subsections outline the proposed communication between the Rabbit nodes themselves and the Compute nodes.
"},{"location":"rfcs/0002/readme/#rabbit-to-rabbit-communication","title":"Rabbit-to-Rabbit Communication","text":""},{"location":"rfcs/0002/readme/#non-mpi-jobs","title":"Non-MPI Jobs","text":"Each rabbit node can be reached via <hostname>.<subdomain>
using DNS. The hostname is the Rabbit node name and the workflow name is used for the subdomain.
For example, a workflow name of foo
that targets rabbit-node2
would be rabbit-node2.foo
.
Environment variables are provided to the container and ConfigMap for each rabbit that is targeted by the container workflow:
NNF_CONTAINER_NODES=rabbit-node2 rabbit-node3\nNNF_CONTAINER_SUBDOMAIN=foo\nNNF_CONTAINER_DOMAIN=default.svc.cluster.local\n
kind: ConfigMap\napiVersion: v1\ndata:\n nnfContainerNodes:\n - rabbit-node2\n - rabbit-node3\n nnfContainerSubdomain: foo\n nnfContainerDomain: default.svc.cluster.local\n
DNS can then be used to communicate with other Rabbit containers. The FQDN for the container running on rabbit-node2 is rabbit-node2.foo.default.svc.cluster.local
.
"},{"location":"rfcs/0002/readme/#mpi-jobs","title":"MPI Jobs","text":"For MPI Jobs, these hostnames and subdomains will be slightly different due to the implementation of mpi-operator
. However, the variables will remain the same and provide a consistent way to retrieve the values.
"},{"location":"rfcs/0002/readme/#compute-to-rabbit-communication","title":"Compute-to-Rabbit Communication","text":"For Compute to Rabbit communication, the proposal is to use an open port between the nodes, so the applications could communicate using IP protocol. The port number would be assigned by the Rabbit software and included in the workflow resource's environmental variables after the Setup state (similar to workflow name & namespace). Flux should provide the port number to the compute application via an environmental variable or command line argument. The containerized application would always see the same port number using the hostPort
/containerPort
mapping functionality included in Kubernetes. To clarify, the Rabbit software is picking and managing the ports picked for hostPort
.
This requires a range of ports to be open in the firewall configuration and specified in the rabbit system configuration. The fewer the number of ports available increases the chances of a port reservation conflict that would fail a workflow.
Example port range definition in the SystemConfiguration:
apiVersion: v1\nitems:\n - apiVersion: dws.cray.hpe.com/v1alpha1\n kind: SystemConfiguration\n name: default\n namespace: default\n spec:\n containerHostPortRangeMin: 30000\n containerHostPortRangeMax: 40000\n ...\n
"},{"location":"rfcs/0002/readme/#example","title":"Example","text":"For this example, let's assume I've authored an application called foo
. This application requires Rabbit local GFS2 storage and a persistent Lustre storage volume.
Working with an administrator, my application's storage requirements and pod specification are placed in an NNF Container Profile foo
:
kind: NnfContainerProfile\napiVersion: v1alpha1\nmetadata:\n name: foo\n namespace: default\nspec:\n postRunTimeout: 300\n maxRetries: 6\n storages:\n - name: DW_JOB_foo-local-storage\n optional: false\n - name: DW_PERSISTENT_foo-persistent-storage\n optional: false\n spec:\n containers:\n - name: foo\n image: foo:latest\n command:\n - /foo\n ports:\n - name: compute\n containerPort: 80\n
Say Peter wants to use foo
as part of his job specification. Peter would submit the job with the directives below:
#DW jobdw name=my-gfs2 type=gfs2 capacity=1TB\n\n#DW persistentdw name=some-lustre\n\n#DW container name=my-foo profile=foo \\\n DW_JOB_foo-local-storage=my-gfs2 \\\n DW_PERSISTENT_foo-persistent-storage=some-lustre\n
Since the NNF Container Profile has specified that both storages are not optional (i.e. optional: false
), they must both be present in the #DW directives along with the container
directive. Alternatively, if either was marked as optional (i.e. optional: true
), it would not be required to be present in the #DW directives and therefore would not be mounted into the container.
Peter submits the job to the WLM. WLM guides the job through the workflow states:
- Proposal: Rabbit software verifies the #DW directives. For the container directive
my-foo
with profile foo
, the storage requirements listed in the NNF Container Profile are foo-local-storage
and foo-persistent-storage
. These values are correctly represented by the directive so it is valid. - Setup: Since there is a jobdw,
my-gfs2
, Rabbit software provisions this storage. -
Pre-Run:
-
Rabbit software generates a config map that corresponds to the storage requirements and runtime parameters.
kind: ConfigMap\n apiVersion: v1\n metadata:\n name: my-job-container-my-foo\n data:\n DW_JOB_foo_local_storage: mount-type=indexed-mount\n DW_PERSISTENT_foo_persistent_storage: mount-type=mount-point\n ...\n
-
Rabbit software creates a pod and duplicates the foo
pod spec in the NNF Container Profile and fills in the necessary volumes and config map.
kind: Pod\n apiVersion: v1\n metadata:\n name: my-job-container-my-foo\n template:\n metadata:\n name: foo\n namespace: default\n spec:\n containers:\n # This section unchanged from Container Profile\n - name: foo\n image: foo:latest\n command:\n - /foo\n volumeMounts:\n - name: foo-local-storage\n mountPath: <MOUNT_PATH>\n - name: foo-persistent-storage\n mountPath: <MOUNT_PATH>\n - name: nnf-config\n mountPath: /nnf/config\n ports:\n - name: compute\n hostPort: 9376 # hostport selected by Rabbit software\n containerPort: 80\n\n # volumes added by Rabbit software\n volumes:\n - name: foo-local-storage\n hostPath:\n path: /nnf/job/my-job/my-gfs2\n - name: foo-persistent-storage\n hostPath:\n path: /nnf/persistent/some-lustre\n - name: nnf-config\n configMap:\n name: my-job-container-my-foo\n\n # securityContext added by Rabbit software - values will be inherited from the workflow\n securityContext:\n runAsUser: 1000\n runAsGroup: 2000\n fsGroup: 2000\n
-
Rabbit software starts the pods on Rabbit nodes
- Post-Run
- Rabbit waits for all pods to finish (or until timeout is hit)
- If all pods are successful, Post-Run is marked as
Ready
- If any pod is not successful, Post-Run is not marked as
Ready
"},{"location":"rfcs/0002/readme/#special-note-indexed-mount-type-for-gfs2-file-systems","title":"Special Note: Indexed-Mount Type for GFS2 File Systems","text":"When using a GFS2 file system, each compute is allocated its own Rabbit volume. The Rabbit software mounts a collection of mount paths with a common prefix and an ending indexed value.
Application AUTHORS must be aware that their desired mount-point really contains a collection of directories, one for each compute node. The mount point type can be known by consulting the config map values.
If we continue the example from above, the foo
application expects the foo-local-storage path of /foo/local
to contain several directories
$ ls /foo/local/*\n\nnode-0\nnode-1\nnode-2\n...\nnode-N\n
Node positions are not absolute locations. WLM could, in theory, select 6 physical compute nodes at physical location 1, 2, 3, 5, 8, 13, which would appear as directories /node-0
through /node-5
in the container path.
Symlinks will be added to support the physical compute node names. Assuming a compute node hostname of compute-node-1
from the example above, it would link to node-0
, compute-node-2
would link to node-1
, etc.
Additionally, not all container instances could see the same number of compute nodes in an indexed-mount scenario. If 17 compute nodes are required for the job, WLM may assign 16 nodes to run one Rabbit, and 1 node to another Rabbit.
"}]}
\ No newline at end of file