
[PWX-36217] feat: cloudbackup restore #2442

Merged: 2 commits from feat/cloudbackup-restore into master on Jun 7, 2024
Conversation

@Glitchfix (Contributor) commented May 9, 2024

What this PR does / why we need it:
This PR introduces the ability to restore cloud snapshots in the OSD CSI driver. It also includes unit tests for this new feature.

  • Feature: restore cloud snapshots via the CSI driver (a minimal command sketch follows this list).
  • Tests: unit tests covering the restore path to ensure it works as expected.
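
From the user's side, the restore is driven entirely through standard Kubernetes objects, as the testing notes below walk through step by step. A minimal sketch of the flow (filenames are hypothetical; the manifests themselves are the ones shown below):

# Snapshot class that marks snapshots taken from it as cloud snapshots
kubectl apply -f px-csi-snapclass-sanjain.yaml
# Take a VolumeSnapshot of the source PVC
kubectl -n t1 apply -f px-csi-snapshot-3.yaml
# Restore: a new PVC whose dataSource references the VolumeSnapshot
kubectl -n t1 apply -f px-csi-pvc-restored-3.yaml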

Which issue(s) this PR fixes
PWX-36217

Testing notes and results

StorageCluster (stc) used for testing:

  apiVersion: core.libopenstorage.org/v1
  kind: StorageCluster
  metadata:
    annotations:
      portworx.io/install-source: https://install.portworx.com/?operator=true&mc=false&kbver=1.25.0&ns=portworx&b=true&iop=6&r=17001&c=px-cluster-670d0151-90c8-47fb-bea5-82d43d8c0395&osft=true&stork=true&csi=true&mon=true&tel=true&st=k8s&promop=true
      portworx.io/is-openshift: "true"
      portworx.io/preflight-check: "false"
    finalizers:
    - operator.libopenstorage.org/delete
    generation: 2
    name: pwx-35964
    namespace: kube-system
  spec:
    autopilot:
      enabled: true
      providers:
      - name: default
        params:
          url: http://px-prometheus:9090
        type: prometheus
    csi:
      enabled: true
      installSnapshotController: true
    deleteStrategy:
      type: UninstallAndWipe
    env:
    - name: LICENSE_ENDPOINT
      value: UAT
    - name: ZUORA_ENDPOINT
      value: https://rest.apisandbox.zuora.com
    - name: REGISTRY_USER
      value: ***
    - name: REGISTRY_PASS
      value: d*****
    - name: PX_IMAGE
      value: docker.io/cnbuautomation800/px-base-enterprise:pwx36217
    image: portworx/oci-monitor:3.1.0
    imagePullPolicy: Always
    kvdb:
      internal: true
    monitoring:
      prometheus:
        enabled: true
        exportMetrics: true
        resources: {}
      telemetry:
        enabled: true
    placement:
      nodeAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
          nodeSelectorTerms:
          - matchExpressions:
            - key: px/enabled
              operator: NotIn
              values:
              - "false"
            - key: node-role.kubernetes.io/infra
              operator: DoesNotExist
            - key: node-role.kubernetes.io/master
              operator: DoesNotExist
            - key: node-role.kubernetes.io/control-plane
              operator: DoesNotExist
          - matchExpressions:
            - key: px/enabled
              operator: NotIn
              values:
              - "false"
            - key: node-role.kubernetes.io/infra
              operator: DoesNotExist
            - key: node-role.kubernetes.io/master
              operator: Exists
            - key: node-role.kubernetes.io/worker
              operator: Exists
          - matchExpressions:
            - key: px/enabled
              operator: NotIn
              values:
              - "false"
            - key: node-role.kubernetes.io/infra
              operator: DoesNotExist
            - key: node-role.kubernetes.io/control-plane
              operator: Exists
            - key: node-role.kubernetes.io/worker
              operator: Exists
    revisionHistoryLimit: 10
    runtimeOptions:
      metering_interval_mins: "2"
    secretsProvider: k8s
    startPort: 17001
    storage:
      useAll: true
    stork:
      args:
        webhook-controller: "true"
      enabled: true
    updateStrategy:
      rollingUpdate:
        maxUnavailable: 1
      type: RollingUpdate
    version: 3.1.0
  status:
    clusterName: pwx-35964
    clusterUid: 6217e2fd-11b0-41e8-9448-4ea062487dc3
    conditions:
    - lastTransitionTime: "2024-04-05T08:32:14Z"
      message: Portworx installation completed on 3 nodes
      source: Portworx
      status: Completed
      type: Install
    - lastTransitionTime: "2024-04-05T08:31:52Z"
      source: Portworx
      status: Online
      type: RuntimeState
    desiredImages:
      autopilot: portworx/autopilot:1.3.14
      csiAttacher: docker.io/openstorage/csi-attacher:v1.2.1-1
      csiNodeDriverRegistrar: registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.9.0
      csiProvisioner: registry.k8s.io/sig-storage/csi-provisioner:v3.6.1
      csiResizer: registry.k8s.io/sig-storage/csi-resizer:v1.9.1
      csiSnapshotController: registry.k8s.io/sig-storage/snapshot-controller:v6.3.1
      csiSnapshotter: registry.k8s.io/sig-storage/csi-snapshotter:v6.3.1
      dynamicPlugin: portworx/portworx-dynamic-plugin:1.1.0
      dynamicPluginProxy: nginxinc/nginx-unprivileged:1.25
      kubeControllerManager: registry.k8s.io/kube-controller-manager-amd64:v1.29.0
      kubeScheduler: registry.k8s.io/kube-scheduler-amd64:v1.29.0
      logUploader: purestorage/log-upload:px-1.1.8
      metricsCollector: purestorage/realtime-metrics:1.0.23
      pause: registry.k8s.io/pause:3.1
      prometheus: quay.io/prometheus/prometheus:v2.48.1
      prometheusConfigReloader: quay.io/prometheus-operator/prometheus-config-reloader:v0.70.0
      prometheusOperator: quay.io/prometheus-operator/prometheus-operator:v0.70.0
      stork: openstorage/stork:23.11.0
      telemetry: purestorage/ccm-go:1.2.2
      telemetryProxy: purestorage/telemetry-envoy:1.1.11
    phase: Running
    storage: {}
    version: 3.1.0
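
Assuming the operator's usual shortname for StorageCluster (stc), the state above can be spot-checked with:

# Should report the cluster as Running, matching the phase in the status block
kubectl -n kube-system get stc pwx-35964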

Storage class:

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
    name: px-slow-csi
provisioner: pxd.portworx.com
parameters:
    repl: "1"
allowVolumeExpansion: true

PVC:

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
    name: px-fast-pvc-csi
    namespace: t1
spec:
    storageClassName: px-slow-csi
    accessModes:
        - ReadWriteOnce
    resources:
        requests:
            storage: 1Gi
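
To exercise these two manifests, apply them and confirm the claim binds (filenames are hypothetical):

kubectl apply -f px-slow-csi.yaml
kubectl apply -f px-fast-pvc-csi.yaml
# STATUS should turn Bound once the CSI provisioner creates the volume
kubectl -n t1 get pvc px-fast-pvc-csi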

Ubuntu pod:

apiVersion: v1
kind: Pod
metadata:
  name: ubuntu-csi
  namespace: t1
  labels:
    app: ubuntu
  annotations:
        stork.libopenstorage.org/preferLocalNodeOnly: "true"
spec:
  schedulerName: stork
  containers:
  - image: ubuntu
    command:
      - "sleep"
      - "604800"
    imagePullPolicy: IfNotPresent
    name: ubuntu
    volumeMounts:
        - name: ubuntu-persistent-storage
          mountPath: /var/sanjain
  volumes:
    - name: ubuntu-persistent-storage
      persistentVolumeClaim:
        claimName: px-fast-pvc-csi
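
Once stork schedules the pod (note schedulerName: stork), the mount can be verified from inside it:

kubectl -n t1 get pod ubuntu-csi
# The Portworx volume should appear at the mountPath from the manifest
kubectl -n t1 exec ubuntu-csi -- df -h /var/sanjain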

Create Credentials using pxctl:

pxctl credentials create --provider s3 --s3-access-key **** --s3-secret-key **** --s3-region us-east-1 --s3-endpoint https://minio.pwx.dev.purestorage.com/ --s3-storage-class STANDARD sanjain
Credentials created successfully, ****
[root@ip-10-13-170-211 /]# pxctl credentials validate sanjain
Credential validated successfully
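
The stored credentials can also be listed for a sanity check (pxctl masks the secret key in its output):

pxctl credentials list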

Creating random data:

root@ubuntu-csi:/# cd /var/sanjain/
root@ubuntu-csi:/var/sanjain# echo "hi, my name is k8" > random.txt
root@ubuntu-csi:/var/sanjain# 

Volume snapshot class:

apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: px-csi-snapclass-sanjain
  namespace: kube-system
  annotations:
    snapshot.storage.kubernetes.io/is-default-class: "true"
driver: pxd.portworx.com
deletionPolicy: Delete
parameters: ## Specify only if px-security is ENABLED
  csi.storage.k8s.io/snapshotter-secret-name: 5da986fe-37cd-4aad-a63a-05ef1a944af8  
  csi.storage.k8s.io/snapshotter-secret-namespace: kube-system 
  csi.openstorage.org/snapshot-type: cloud
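
The csi.openstorage.org/snapshot-type: cloud parameter is what routes snapshots taken with this class to a cloud backup instead of a local snapshot. Note that VolumeSnapshotClass is cluster-scoped, so the namespace field above is ignored. The class can be confirmed with:

kubectl get volumesnapshotclass px-csi-snapclass-sanjain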

Volume snapshot:

apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: px-csi-snapshot-3
  namespace: t1
spec:
  volumeSnapshotClassName: px-csi-snapclass-sanjain
  source:
    persistentVolumeClaimName: px-fast-pvc-csi
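
Before restoring, wait for the snapshot (and the underlying cloudsnap upload) to finish. Two ways to watch it:

# readyToUse flips to true once the cloud backup completes
kubectl -n t1 get volumesnapshot px-csi-snapshot-3 -o jsonpath='{.status.readyToUse}'
# Portworx-side view of backup progress
pxctl cloudsnap status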

Restore PVC:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: px-csi-pvc-restored-3
  namespace: t1
spec:
  storageClassName: px-slow-csi
  dataSource:
    name: px-csi-snapshot-3
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
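
While the restore is in flight, progress can be observed from both sides; restore operations show up in pxctl cloudsnap status alongside backups, and the PVC events record provisioning from the snapshot:

pxctl cloudsnap status
kubectl -n t1 describe pvc px-csi-pvc-restored-3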

Pod with restored PVC:

apiVersion: v1
kind: Pod
metadata:
  name: ubuntu-csi-res
  namespace: t1
  labels:
    app: ubuntu
  annotations:
        stork.libopenstorage.org/preferLocalNodeOnly: "true"
spec:
  schedulerName: stork
  containers:
  - image: ubuntu
    command:
      - "sleep"
      - "604800"
    imagePullPolicy: IfNotPresent
    name: ubuntu
    volumeMounts:
        - name: ubuntu-persistent-storage
          mountPath: /var/sanjain
  volumes:
    - name: ubuntu-persistent-storage
      persistentVolumeClaim:
        claimName: px-csi-pvc-restored-3

Checking data restored:

root@ubuntu-csi-res:/# cd /var/sanjain            
root@ubuntu-csi-res:/var/sanjain# cat random.txt 
hi, my name is k8
root@ubuntu-csi-res:/var/sanjain#

@Glitchfix Glitchfix changed the title feat/cloudbackup restore [PWX-36217] feat: cloudbackup restore May 9, 2024
@Glitchfix Glitchfix force-pushed the feat/cloudbackup-restore branch 3 times, most recently from 9b224da to 32a4288 on May 13, 2024 09:20
@Glitchfix Glitchfix requested a review from vinayakshnd May 31, 2024 05:53
@pnookala-px (Contributor) left a comment:


Please test a 50 GB restore and add snapshots of how the volume describe output looks while the restore is in progress.

github-actions bot commented Jun 7, 2024

This PR is stale because it has been in review for 3 days with no activity.

Glitchfix added 2 commits June 7, 2024 11:31
Signed-off-by: Shivanjan Chakravorty <[email protected]>
@Glitchfix Glitchfix force-pushed the feat/cloudbackup-restore branch from 32a4288 to d32c997 on June 7, 2024 06:01
@Glitchfix Glitchfix merged commit c0c3096 into master Jun 7, 2024
2 checks passed