post v2.5.0 touchups (#209)
Signed-off-by: Michael Mattsson <[email protected]>
datamattsson authored Jul 31, 2024
1 parent 58bc8b7 commit 9e01305
Showing 3 changed files with 81 additions and 3 deletions.
@@ -1 +1,73 @@
placeholder: v2.5.0
apiVersion: storage.hpe.com/v1
kind: HPECSIDriver
metadata:
name: hpecsidriver-sample
spec:
# Default values copied from <project_dir>/helm-charts/hpe-csi-driver/values.yaml
controller:
affinity: {}
labels: {}
nodeSelector: {}
resources:
limits:
cpu: 2000m
memory: 1Gi
requests:
cpu: 100m
memory: 128Mi
tolerations: []
csp:
affinity: {}
labels: {}
nodeSelector: {}
resources:
limits:
cpu: 2000m
memory: 1Gi
requests:
cpu: 100m
memory: 128Mi
tolerations: []
disable:
alletra6000: false
alletra9000: false
alletraStorageMP: false
nimble: false
primera: false
disableHostDeletion: false
disableNodeConfiguration: false
disableNodeConformance: false
disableNodeGetVolumeStats: false
disableNodeMonitor: false
imagePullPolicy: IfNotPresent
images:
csiAttacher: registry.k8s.io/sig-storage/csi-attacher:v4.6.1
csiControllerDriver: quay.io/hpestorage/csi-driver:v2.5.0
csiExtensions: quay.io/hpestorage/csi-extensions:v1.2.7
csiNodeDriver: quay.io/hpestorage/csi-driver:v2.5.0
csiNodeDriverRegistrar: registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.10.1
csiProvisioner: registry.k8s.io/sig-storage/csi-provisioner:v5.0.1
csiResizer: registry.k8s.io/sig-storage/csi-resizer:v1.11.1
csiSnapshotter: registry.k8s.io/sig-storage/csi-snapshotter:v8.0.1
csiVolumeGroupProvisioner: quay.io/hpestorage/volume-group-provisioner:v1.0.6
csiVolumeGroupSnapshotter: quay.io/hpestorage/volume-group-snapshotter:v1.0.6
csiVolumeMutator: quay.io/hpestorage/volume-mutator:v1.3.6
nfsProvisioner: quay.io/hpestorage/nfs-provisioner:v3.0.5
nimbleCSP: quay.io/hpestorage/alletra-6000-and-nimble-csp:v2.5.0
primera3parCSP: quay.io/hpestorage/alletra-9000-primera-and-3par-csp:v2.5.0
iscsi:
chapSecretName: ""
kubeletRootDir: /var/lib/kubelet
logLevel: info
node:
affinity: {}
labels: {}
nodeSelector: {}
resources:
limits:
cpu: 2000m
memory: 1Gi
requests:
cpu: 100m
memory: 128Mi
tolerations: []
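
The sample above enumerates every default copied from the Helm chart's `values.yaml`; a deployed instance only needs to override the values that differ. A minimal sketch under that assumption (the `logLevel` and CHAP `Secret` name here are illustrative, not part of this commit):

```yaml
# Minimal HPECSIDriver instance; fields left unset fall back
# to the chart defaults shown in the full sample.
apiVersion: storage.hpe.com/v1
kind: HPECSIDriver
metadata:
  name: hpecsidriver-sample
spec:
  logLevel: debug                    # raise verbosity from the default "info"
  iscsi:
    chapSecretName: my-chap-secret   # hypothetical Secret holding CHAP credentials
```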
2 changes: 1 addition & 1 deletion docs/csi_driver/using.md
@@ -934,7 +934,7 @@ Next, provision "RWO" or "RWX" claims from the "hpe-nfs-servers" `StorageClass`.
These are some common issues and gotchas that are useful to know about when planning to use the NFS Server Provisioner.

- The current tested and supported limit for the NFS Server Provisioner is 32 NFS servers per Kubernetes worker node.
- The two `StorageClass` parameters "nfsResourceLimitsCpuM" and "nfsResourceLimitsMemoryMi" control how much CPU and memory it may consume. Tests show that the NFS server consumes about 150MiB at instantiation and 2GiB is the recommended minimum for most workloads. The NFS server `Pod` is by default limited to 2GiB or memory and 1000 milli CPU.
- The two `StorageClass` parameters "nfsResourceLimitsCpuM" and "nfsResourceLimitsMemoryMi" control how much CPU and memory it may consume. Tests show that the NFS server consumes about 150MiB at instantiation and 2GiB is the recommended minimum for most workloads. The NFS server `Pod` is by default limited to 2GiB of memory and 1000 milli CPU.
- The NFS `PVC` can **NOT** be expanded. If more capacity is needed, expand the "ReadWriteOnce" `PVC` backing the NFS Server Provisioner. This will result in inaccurate space reporting.
- Because the NFS Server Provisioner deploys a number of different resources on the hosting cluster per `PVC`, provisioning times may differ greatly between clusters. On an idle cluster with the NFS Server Provisioner image cached, provisioning most commonly completes in under 30 seconds, but it may take longer, which may trigger warnings on the requesting `PVC`. This is normal behavior.
- The HPE CSI Driver includes a Pod Monitor to delete `Pods` that have become unavailable because the Pod status changed to `NodeLost` or the node the `Pod` runs on became unreachable. By default the Pod Monitor only watches the NFS Server Provisioner `Deployments`. It may be used for any `Deployment`. See [Pod Monitor](monitor.md) on how to use it, especially the [limitations](monitor.md#limitations).
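
The resource-limit parameters mentioned above are set per `StorageClass`. A hedged sketch of such a class (the provisioner name, the `nfsResources` flag, and the values are assumed from HPE CSI Driver conventions, not taken from this commit):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: hpe-nfs-servers
provisioner: csi.hpe.com               # assumed provisioner name
parameters:
  nfsResources: "true"                 # assumed flag enabling the NFS Server Provisioner
  nfsResourceLimitsCpuM: "1000m"       # CPU limit for the NFS server Pod
  nfsResourceLimitsMemoryMi: "2048Mi"  # 2GiB, the recommended minimum for most workloads
```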
8 changes: 7 additions & 1 deletion docs/partners/redhat_openshift/index.md
@@ -345,12 +345,18 @@ The VM is now ready to be migrated.

In the event an older version of the Operator needs to be installed, the bundle can be installed directly by [installing the Operator SDK](https://console.redhat.com/openshift/downloads). Make sure a recent version of the `operator-sdk` binary is available and that no HPE CSI Driver is currently installed on the cluster.

Install a specific version (v2.4.2 in this case):
Install a specific version up to and including v2.4.2:

```text
operator-sdk run bundle --timeout 5m -n hpe-storage quay.io/hpestorage/csi-driver-operator-bundle:v2.4.2
```

Install a specific version from v2.5.0 onwards:

```text
operator-sdk run bundle --timeout 5m -n hpe-storage quay.io/hpestorage/csi-driver-operator-bundle-ocp:v2.5.0
```
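
Since no HPE CSI Driver may be installed when running either bundle, a previous bundle installation has to be removed first. With the Operator SDK this can be done with `operator-sdk cleanup`; the package name below is an assumption for illustration, not taken from this commit:

```text
operator-sdk cleanup hpe-csi-operator -n hpe-storage
```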

!!! important
Once the Operator is installed, a `HPECSIDriver` instance needs to be created. Follow the steps using the [web console](#openshift_web_console) or the [CLI](#openshift_cli) to create an instance.

