Added new documentation for HPE COSI Driver. (#225)
* Added new documentation for HPE COSI Driver.

Signed-off-by: Gauri Krishnan <[email protected]>

* Separated code output by brief descriptive text in diagnostics.md and using.md.
Created a file my-cosi-app.yaml under cosi_driver/examples/using so that the file is downloadable by the user.
Added support statement for HPE Alletra Storage MP X10000 under legal section.

Signed-off-by: Gauri Krishnan <[email protected]>

* Updated steps to locate the DSCC zone FQDN, with supported FQDNs in deployment.md.
Updated diagnostics.md with tip to check network connections to troubleshoot errors.

Signed-off-by: Gauri Krishnan <[email protected]>

* Included TOC for support page, made separate tables for CSP and OSP support numbers and removed unnecessary fct_label.

Signed-off-by: Gauri Krishnan <[email protected]>

* Removed abbreviations of Data Services Cloud Console.

Signed-off-by: Gauri Krishnan <[email protected]>

* Updated deployment.md

Renamed header and corrected typo.

* Primary branch of hpe-storage/cosi-driver has been renamed from master to main.
Updated links accordingly.

Signed-off-by: Gauri Krishnan <[email protected]>

* Updated link in diagnostics.md to point to the values.yaml of chart version: 1.0.0.

---------

Signed-off-by: Gauri Krishnan <[email protected]>
gauriKrishnan authored Jan 30, 2025
1 parent 88ac1c4 commit b7b334e
Showing 8 changed files with 763 additions and 4 deletions.
101 changes: 101 additions & 0 deletions docs/cosi_driver/deployment.md
@@ -0,0 +1,101 @@
# Overview

The HPE COSI Driver for Kubernetes is deployed using an industry-standard mechanism: a Helm chart.

[TOC]

## Delivery Vehicles

Helm is currently the only supported delivery vehicle for the deployment of the HPE COSI Driver.

### Helm

[Helm](https://helm.sh) is the package manager for Kubernetes. Software is delivered in a packaging format called a "chart". Helm is a [standalone CLI](https://helm.sh/docs/intro/install/) that interacts with the Kubernetes API server using your `KUBECONFIG` file.

The official [Helm chart](https://github.com/hpe-storage/co-deployments/tree/master/helm/charts/hpe-cosi-driver) for the HPE COSI Driver for Kubernetes is hosted on [Artifact Hub](https://artifacthub.io/packages/helm/hpe-storage/hpe-cosi-driver). The chart supports Helm 3 from version 1.0.0 of the HPE COSI Driver. To avoid duplicating documentation, see the chart for instructions on how to deploy the COSI driver using Helm.

- Go to the chart on [Artifact Hub](https://artifacthub.io/packages/helm/hpe-storage/hpe-cosi-driver).
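
For reference, a typical Helm-based install is sketched below. The chart repository URL follows the hpe-storage co-deployments convention, and the release name and namespace are placeholder assumptions; always defer to the chart's own instructions on Artifact Hub.

```text
helm repo add hpe-storage https://hpe-storage.github.io/co-deployments/
helm repo update
helm install my-cosi-driver hpe-storage/hpe-cosi-driver \
  --namespace hpe-cosi --create-namespace
```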

## Add an HPE Storage Backend

Once the COSI driver is deployed, you must create a `Secret` with the following details before you can use the [COSI API resources](using.md).

### Secret Parameters

All parameters are mandatory and described below.

| Parameter | Description |
| ------------------- | ------------|
| accessKey | The access key of the S3 user with bucket creation, bucket-tagging and deletion permissions. |
| secretKey | The secret key of the S3 user with bucket creation, bucket-tagging and deletion permissions. |
| endpoint | The S3 endpoint of the backend object storage system (for example, an HPE Alletra Storage MP X10000 system), constructed from its frontend network DNS subdomain. |
| glcpUserClientId | The HPE GreenLake API client ID. |
| glcpUserSecretKey | The HPE GreenLake API client secret. |
| dsccZone | The fully qualified domain name (FQDN) of the HPE Data Services Cloud Console zone. |
| clusterSerialNumber | The serial number of the backend storage system cluster. |

!!! note
    The Kubernetes compute nodes where the HPE COSI Driver is allowed to run must be able to reach the specified Data Services Cloud Console zone.

Example `Secret` manifest named "hpe-object-backend.yaml":

```yaml fct_label="HPE COSI Driver v1.0.0"
apiVersion: v1
kind: Secret
metadata:
  name: hpe-object-backend
  namespace: default
stringData:
  accessKey: testuser
  secretKey: testkey
  endpoint: http://192.168.1.100:8080
  glcpUserClientId: 00000000-0000-0000-0000-000000000000
  glcpUserSecretKey: 00000000000000000000000000000000
  dsccZone: us1.data.cloud.hpe.com
  clusterSerialNumber: 0000000000
```

Create the `Secret`.

```text
kubectl create -f hpe-object-backend.yaml
```
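
Optionally, verify that the `Secret` exists. The output below is representative; the `DATA` count of 7 reflects the seven keys in the example manifest.

```text
kubectl get secret hpe-object-backend
NAME                 TYPE     DATA   AGE
hpe-object-backend   Opaque   7      10s
```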

!!! tip "See Also"
    The COSI source code repository contains a parameterized script that can assist in creating a correctly formatted `Secret`. See [github.com/hpe-storage/cosi-driver/scripts/cosi_secret](https://github.com/hpe-storage/cosi-driver/tree/main/scripts/cosi_secret) for more details.

### Creating and Locating Resources

1. To create the S3 user:
    * Follow the steps in the HPE documentation to [create an access policy](https://support.hpe.com/hpesc/docDisplay?docId=sd00004219en_us&page=objstr_access_policies_create_dscc.html).
        - Choose _All Buckets_.
        - Add custom actions for the policy: `CreateBucket`, `DeleteBucket`, `PutBucketTagging`.
    * To create the user, refer to the [HPE documentation](https://support.hpe.com/hpesc/docDisplay?docId=sd00004219en_us&page=objstr_users_create_dscc.html) and select the access policy created in the previous step.
        - Save the username and password. These are used as the S3 access key and S3 secret key, respectively, in the COSI `Secret`.
2. To create the HPE GreenLake API client ID and secret, refer to the [HPE documentation](https://support.hpe.com/hpesc/public/docDisplay?docId=a00120892en_us&page=GUID-23E6EE78-AAB7-472C-8D16-7169938BE628.html).
3. To locate the Data Services Cloud Console zone FQDN:
    * Log in to HPE Data Services Cloud Console.
    * On the _Services_ page, click _My Services_ to view all services available in your workspace.
    * Select the service that your HPE Alletra Storage MP X10000 device is assigned to and click _Launch_.
    * After the service is launched, save the URL from the browser, for example `https://console-us1.data.cloud.hpe.com`.
    * Drop the `https://console-` prefix; the Data Services Cloud Console zone FQDN to use in the `Secret` then has the format `us1.data.cloud.hpe.com`.
    * Supported Data Services Cloud Console zone FQDNs as of January 2025 are:
        - us1.data.cloud.hpe.com
        - jp1.data.cloud.hpe.com
        - eu1.data.cloud.hpe.com
        - uk1.data.cloud.hpe.com
4. To locate the S3 endpoint:
    * Log in to HPE Data Services Cloud Console.
    * Launch the service that your HPE Alletra Storage MP X10000 device is assigned to.
    * Select _Data Ops Manager_.
    * From the menu on the left, select _Systems_, then click the name of the system you want to use for COSI operations.
    * Click the _Networking_ tab. Under the _Frontend Network_ section, save the value of the _Network DNS Subdomains_ field.
    * Construct the S3 endpoint from the _Network DNS Subdomains_ value using the format `http://<Network DNS Subdomains>`.
5. To locate the cluster serial number of the HPE Alletra Storage MP X10000 system, refer to the [HPE documentation](https://support.hpe.com/hpesc/public/docDisplay?docId=a00120892en_us&page=GUID-616CE4D4-C31A-4BFE-8F41-887C2B0B9046.html).

!!! tip
    In a real-world scenario it's more practical to name the `Secret` something that makes sense for the organization, such as the hostname of the backend or the role it carries, e.g. "hpe-alletra-sanjose-prod".

## Next Steps

Next, you need to create [a BucketClass](using.md#configure_a_bucketclass).
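
As a minimal sketch only, a `BucketClass` referencing the `Secret` created above might look like the following. The parameter keys are inferred from the `Bucket` status output shown in [diagnostics](diagnostics.md) and may differ from the chart's documented names; refer to [using.md](using.md#configure_a_bucketclass) for the authoritative example.

```yaml
apiVersion: objectstorage.k8s.io/v1alpha1
kind: BucketClass
metadata:
  name: hpe-standard-object
# Driver name as reported in the Bucket status output
driverName: cosi.hpe.com
deletionPolicy: Delete
parameters:
  # Keys inferred from `kubectl describe bucket` output; verify against using.md
  cosiUserSecretName: hpe-object-backend
  cosiUserSecretNamespace: default
```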
174 changes: 174 additions & 0 deletions docs/cosi_driver/diagnostics.md
@@ -0,0 +1,174 @@
# Introduction

It's recommended to familiarize yourself with inspecting workloads on Kubernetes. This [cheat sheet](https://kubernetes.io/docs/reference/kubectl/cheatsheet/#interacting-with-running-pods) is very useful, and you should have it readily available.
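
A few commands from that cheat sheet come up repeatedly when troubleshooting the COSI driver; the namespace and resource names below are placeholders.

```text
kubectl get pods -n <namespace> -o wide            # list pods and the nodes they run on
kubectl describe pod <pod-name> -n <namespace>     # inspect events and container states
kubectl logs <pod-name> -c <container-name> -n <namespace> --tail=100
```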

## Sanity Checks

Once the COSI driver has been deployed using Helm, the list of pods shown below is representative of what a healthy system looks like after installation. If any of the pods reports a status other than `Running`, inspect the logs of the problematic workload.

```text
kubectl get pods
NAME                                        READY   STATUS    RESTARTS   AGE
hpe-cosi-provisioner-559b458788-5jc56       2/2     Running   0          8d
objectstorage-controller-7dff56f8fc-r5tdq   1/1     Running   0          8d
```

Once the COSI API objects `BucketClaim`, `Bucket` and `BucketAccess` have been created in the cluster, use the `kubectl describe` command to check their status and events, which contain information useful for diagnosing issues. If any resource's status shows anything but `true`, or if there are warning events, inspect the logs of the COSI driver and sidecar.

Describing a `BucketClaim`.

```text fct_label="BucketClaim"
kubectl describe bucketclaim my-first-bucketclaim
Name:         my-first-bucketclaim
Namespace:    default
Labels:       <none>
Annotations:  <none>
API Version:  objectstorage.k8s.io/v1alpha1
Kind:         BucketClaim
Metadata:
  Creation Timestamp:  2024-09-17T04:34:55Z
  Finalizers:
    cosi.objectstorage.k8s.io/bucketclaim-protection
  Generation:        1
  Resource Version:  110670
  UID:               99dde004-4f8d-4a20-900b-e5d61e3facb9
Spec:
  Bucket Class Name:  hpe-standard-object
  Protocols:
    s3
Status:
  Bucket Name:   bc199dde004-4f8d-4a20-900b-e5d61e3facb9
  Bucket Ready:  true
Events:          <none>
```

Describing a `Bucket`.

```text fct_label="Bucket"
kubectl describe bucket bc199dde004-4f8d-4a20-900b-e5d61e3facb9
Name:         bc199dde004-4f8d-4a20-900b-e5d61e3facb9
Namespace:
Labels:       <none>
Annotations:  <none>
API Version:  objectstorage.k8s.io/v1alpha1
Kind:         Bucket
Metadata:
  Creation Timestamp:  2024-09-17T04:34:55Z
  Finalizers:
    cosi.objectstorage.k8s.io/bucket-protection
    cosi.objectstorage.k8s.io/bucketaccess-bucket-protection
  Generation:        1
  Resource Version:  111014
  UID:               e7fb69f6-c20e-4abf-a522-f1ae765b3b6a
Spec:
  Bucket Claim:
    Name:             my-first-bucketclaim
    Namespace:        default
    UID:              99dde004-4f8d-4a20-900b-e5d61e3facb9
  Bucket Class Name:  hpe-standard-object
  Deletion Policy:    Delete
  Driver Name:        cosi.hpe.com
  Parameters:
    Bucket Tags:                 mytag=myvalue, mytag2=, mytag3=myvalue3,
    Cosi User Secret Name:       hpe-object-backend
    Cosi User Secret Namespace:  default
  Protocols:
    s3
Status:
  Bucket ID:     bc199dde004-4f8d-4a20-900b-e5d61e3facb9
  Bucket Ready:  true
Events:          <none>
```

Describing a `BucketAccess`.

```text fct_label="BucketAccess"
kubectl describe bucketaccess hpe-standard-access
Name:         hpe-standard-access
Namespace:    default
Labels:       <none>
Annotations:  <none>
API Version:  objectstorage.k8s.io/v1alpha1
Kind:         BucketAccess
Metadata:
  Creation Timestamp:  2024-09-17T04:38:00Z
  Finalizers:
    cosi.objectstorage.k8s.io/bucketaccess-protection
  Generation:        1
  Resource Version:  111017
  UID:               37fd517f-27ac-45fa-b369-57e645967366
Spec:
  Bucket Access Class Name:  hpe-standard-access
  Bucket Claim Name:         my-first-bucket-claim
  Credentials Secret Name:   my-first-access-secret
  Protocol:                  s3
Status:
  Access Granted:  true
  Account ID:      ba-37fd517f-27ac-45fa-b369-57e645967366
Events:            <none>
```

!!! tip
    Check that the network connections between the Kubernetes cluster hosting the HPE COSI Driver and HPE Data Services Cloud Console, as well as between the HPE Alletra Storage MP X10000 system and HPE Data Services Cloud Console, are intact. A poor network connection is a common cause of failure when creating or deleting `Bucket` and `BucketAccess` resources.
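
As a rough sketch, reachability of the Data Services Cloud Console zone can be checked from a compute node or a debug pod. The zone FQDN below is the `us1` example used in [deployment](deployment.md); substitute your own values.

```text
nslookup us1.data.cloud.hpe.com
curl -sk -o /dev/null -w '%{http_code}\n' https://us1.data.cloud.hpe.com
```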

## Logging

The HPE COSI Driver posts log data to the standard output stream, which can be accessed using options under [kubectl logs](https://kubernetes.io/docs/reference/kubectl/generated/kubectl_logs/).

If logs need to be retained long term, use a standard logging solution for Kubernetes, such as Fluentd. Alternatively, run the log collector script at regular intervals to preserve logs.

### COSI Driver Logs

```text
kubectl logs -f deploy/hpe-objectstorage-provisioner -c hpe-cosi-driver
```

### COSI Sidecar Logs

```text
kubectl logs -f deploy/hpe-objectstorage-provisioner -c hpe-cosi-provisioner-sidecar
```

### COSI Controller Logs

```text
kubectl logs -f deploy/objectstorage-controller
```

### Log Level of Sidecar

You can control the log level for the COSI Sidecar using the `.containers.sideCar.verbosityLevel` field in [`values.yaml`](https://github.com/hpe-storage/co-deployments/blob/master/helm/values/cosi-driver/v1.0.0/values.yaml) of the Helm chart. The values are generally small positive integers.

```yaml
containers:
  sideCar:
    # Verbosity level: small positive integer values are generally recommended.
    # Ref.: https://pkg.go.dev/k8s.io/klog/v2#V
    # containers.sideCar.verbosityLevel -- specifies the verbosity of the logs printed by the sidecar container
    verbosityLevel: 5
```
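
To change the level on an existing installation, a Helm upgrade with `--set` is one option. The release name, namespace, and chart reference below are assumptions; adjust them to match your deployment.

```text
helm upgrade my-cosi-driver hpe-storage/hpe-cosi-driver \
  --namespace hpe-cosi --reuse-values \
  --set containers.sideCar.verbosityLevel=5
```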

### Log Collector

The log collector script `hpe-logcollector.sh` can be used to collect logs from any node that has `kubectl` access to the cluster. See the script's associated [documentation](https://github.com/hpe-storage/cosi-driver/tree/main/scripts/log_collection) for more details on usage and troubleshooting.

Download the script and provide execute permissions:

```text
wget https://raw.githubusercontent.com/hpe-storage/cosi-driver/main/scripts/log_collection/hpe-logcollector.sh
chmod +x hpe-logcollector.sh
```

Usage:

```text
./hpe-logcollector.sh -h
Collect HPE storage diagnostic logs using kubectl.
Usage:
hpe-logcollector.sh [-h|--help] [-n|--namespace NAMESPACE]
Options:
-h|--help Print this usage text
-n|--namespace NAMESPACE Collect logs from HPE COSI Driver Deployment in Namespace
NAMESPACE (default: default)
```
89 changes: 89 additions & 0 deletions docs/cosi_driver/examples/using/my-cosi-app.yaml
@@ -0,0 +1,89 @@
apiVersion: batch/v1
kind: Job
metadata:
  name: my-cosi-app
spec:
  template:
    spec:
      restartPolicy: Never
      initContainers:
        - name: install-boto3
          image: python:3.9
          command: ["sh", "-c", "pip install boto3 -t /app && cp /scripts/sample_pod.py /app"]
          env:
            - name: http_proxy
              value: ""
            - name: https_proxy
              value: ""
          volumeMounts:
            - name: app-volume
              mountPath: /app
            - name: script-volume
              mountPath: /scripts
      containers:
        - name: my-cosi-app
          image: python:3.9
          command: ["python", "-u", "/app/sample_pod.py"]
          volumeMounts:
            - name: app-volume
              mountPath: /app
            - name: secret-volume
              mountPath: /data/cosi
          env:
            - name: PYTHONPATH
              value: "/app"
      volumes:
        - name: app-volume
          emptyDir: {}
        - name: script-volume
          configMap:
            name: test-object-script
        - name: secret-volume
          secret:
            secretName: my-first-access-secret
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: test-object-script
data:
  sample_pod.py: |
    #!/usr/bin/env python3
    import json
    import boto3
    import sys
    try:
        # load the bucket info
        print("Loading bucket info...")
        with open('/data/cosi/BucketInfo', 'r') as f:
            bi = json.load(f)
        # create an s3 client session
        print("Creating S3 client session...")
        session = boto3.Session(
            aws_access_key_id=bi['spec']['secretS3']['accessKeyID'],
            aws_secret_access_key=bi['spec']['secretS3']['accessSecretKey'],
        )
        s3 = session.resource(
            bi['spec']['protocols'][0],                       # set protocol
            endpoint_url=bi['spec']['secretS3']['endpoint'],  # set endpoint
        )
        # create an object
        print("Creating object in S3 bucket...")
        s3.Object(bi['spec']['bucketName'], 'sample.txt').put(Body='Hello, World!')
        print('Object created')
        # read the object
        print('Testing the get-object operation on bucket: ' + bi['spec']['bucketName'] + ', object body: ' + s3.Object(bi['spec']['bucketName'], 'sample.txt').get()['Body'].read().decode('utf-8'))
        # Exit gracefully with zero exit code
        print("Script completed successfully.")
        sys.exit(0)
    except Exception as e:
        print(f"An error occurred: {e}", file=sys.stderr)
        sys.exit(1)