PWX-35127: add volume spec to drive resource #2418

Merged: 4 commits into libopenstorage:master on Feb 23, 2024

Conversation

@sulakshm (Contributor) commented Feb 20, 2024

What this PR does / why we need it:

linked PR: https://github.com/pure-px/porx/pull/13009

Keep the original cloud drive spec used for volume creation as part of the managed volume resource. This is needed to pass PX-specific labels between the cloud provider and PX later.
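As an illustration only (the type and field names below are assumptions for this sketch, not the actual openstorage types), persisting the creation-time spec on the managed drive resource might look like:

```go
package main

import "fmt"

// DriveResource is a hypothetical sketch of a managed cloud drive resource.
// The idea shown here is that the original creation-time spec (including
// px-* labels) is saved alongside the provisioned device state so the
// PX-specific attributes can be consulted later.
type DriveResource struct {
	ID         string            // provisioned drive identifier
	DevicePath string            // e.g. /dev/sdj
	SizeGiB    uint64            // provisioned size
	VolumeSpec map[string]string // original spec the drive was created with
}

func main() {
	d := DriveResource{
		ID:         "drive-0",
		DevicePath: "/dev/sdj",
		SizeGiB:    10,
		VolumeSpec: map[string]string{
			"size":                  "10",
			"px-flexpool":           "1",
			"px-max-thin-pool-size": "40",
		},
	}
	// Later, PX-specific attributes can be recovered from the saved spec.
	fmt.Println(d.VolumeSpec["px-max-thin-pool-size"])
}
```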

This is part of a larger effort to create pools with enough drives up front and to support capacity expansion only through pool resize, without needing drive add.
The logs below show how a single drive add can in turn create multiple disk devices.
This is the PX flex pool feature, which is under development.

pxctl sv drive add -s "size=10,px-flexpool=1,px-max-thin-pool-size=40"
As part of the drive spec, additional PX-specific attributes are also passed. These need to be saved as part of the resource, hence this state-addition PR.
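A minimal sketch of how such a spec string could be split into cloud and PX-specific attributes (the parseDriveSpec helper and its px- prefix handling are assumptions for illustration, not the actual openstorage parsing code):

```go
package main

import (
	"fmt"
	"strings"
)

// parseDriveSpec splits a drive spec string such as
// "size=10,px-flexpool=1,px-max-thin-pool-size=40" into key/value pairs,
// separating PX-specific attributes (px-* keys) from the attributes that
// go to the cloud provider.
func parseDriveSpec(spec string) (cloud, px map[string]string) {
	cloud = map[string]string{}
	px = map[string]string{}
	for _, kv := range strings.Split(spec, ",") {
		parts := strings.SplitN(kv, "=", 2)
		if len(parts) != 2 {
			continue // skip malformed entries
		}
		key, val := strings.TrimSpace(parts[0]), strings.TrimSpace(parts[1])
		if strings.HasPrefix(key, "px-") {
			px[key] = val // PX-specific label, kept on the managed resource
		} else {
			cloud[key] = val // passed to the cloud drive create call
		}
	}
	return cloud, px
}

func main() {
	cloud, px := parseDriveSpec("size=10,px-flexpool=1,px-max-thin-pool-size=40")
	fmt.Println(cloud, px)
}
```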

Which issue(s) this PR fixes (optional)
Closes PWX-35127

Testing Notes

root@lsundararajan-47-1:~# pxctl status
WARNING: CLI and PX Daemon version mismatch
CLI build version       : 4.0.0.0-fdd321f
PX  build version       : 4.0.0.0-d724bf0
Status: PX is operational
Telemetry: Disabled or Unhealthy
Metering: Disabled or Unhealthy
License: Trial (expires in 31 days)
Node ID: e24cc370-b7d3-497c-b022-c8586cb56f2b
        IP: 10.13.10.147
        Local Storage Pool: 2 pools
        POOL    IO_PRIORITY     RAID_LEVEL      USABLE  USED    STATUS  ZONE    REGION
        0       HIGH            raid0           86 GiB  35 MiB  Online  default default
        1       HIGH            raid0           86 GiB  35 MiB  Online  default default
        Local Storage Devices: 3 devices
        Device  Path            Media Type              Size            Last-Scan
        0:0     /dev/sdg        STORAGE_MEDIUM_SSD      100 GiB         21 Feb 24 09:19 UTC
        1:0     /dev/sdh        STORAGE_MEDIUM_SSD      50 GiB          21 Feb 24 09:19 UTC
        1:1     /dev/sdi        STORAGE_MEDIUM_SSD      50 GiB          21 Feb 24 09:19 UTC
        total                   -                       200 GiB
        Cache Devices:
         * No cache devices
        Journal Device:
        1       /dev/sdf1       STORAGE_MEDIUM_SSD      3.0 GiB
        Metadata Device:
        1       /dev/sdf2       STORAGE_MEDIUM_SSD      61 GiB
Cluster Summary
        Cluster ID: local-px-int
        Cluster UUID: 5a1743e3-c35c-4d6e-9e87-9dc88a723606
        Scheduler: kubernetes
        Total Nodes: 3 node(s) with storage (3 online)
        IP              ID                                      SchedulerNodeName       Auth            StorageNode     Used    Capacity        Status  StorageStatus   Version         Kernel                  OS
        10.13.10.147    e24cc370-b7d3-497c-b022-c8586cb56f2b    lsundararajan-47-1      Disabled        Yes(PX-StoreV2) 70 MiB  172 GiB         Online  Up (This node)  4.0.0.0-d724bf0 5.15.0-67-generic       Ubuntu 22.04.2 LTS
        10.13.10.150    aee1284f-6eb0-47ed-acb7-7c50132a9e60    lsundararajan-47-3      Disabled        Yes(PX-StoreV2) 70 MiB  172 GiB         Online  Up              4.0.0.0-7aa9618 5.15.0-67-generic       Ubuntu 22.04.2 LTS
        10.13.10.59     97e01a1c-f53a-43ae-bb26-8145e96ee205    lsundararajan-47-2      Disabled        Yes(PX-StoreV2) 103 MiB 235 GiB         Online  Up              4.0.0.0-7aa9618 5.15.0-67-generic       Ubuntu 22.04.2 LTS
        Warnings:
                 WARNING: Cluster consists of nodes with different PX versions.
Global Storage Pool
        Total Used      :  244 MiB
        Total Capacity  :  579 GiB
root@lsundararajan-47-1:~#
root@lsundararajan-47-1:~#
root@lsundararajan-47-1:~# pxctl sv drive add -s "size=10,px-flexpool=1,px-max-thin-pool-size=40"
Adding drives may make storage offline for the duration of the operation.
Are you sure you want to proceed ? (Y/N): Y
Drive add done
root@lsundararajan-47-1:~# vgs pwx2
  VG   #PV #LV #SN Attr   VSize   VFree
  pwx2   1   3   0 wz--n- <29.97g    0
root@lsundararajan-47-1:~# lvs -o chunk_size pwx2/pxpool
  Chunk
  192.00k
root@lsundararajan-47-1:~# pxctl status
WARNING: CLI and PX Daemon version mismatch
CLI build version       : 4.0.0.0-fdd321f
PX  build version       : 4.0.0.0-d724bf0
Status: PX is operational
Telemetry: Disabled or Unhealthy
Metering: Disabled or Unhealthy
License: Trial (expires in 31 days)
Node ID: e24cc370-b7d3-497c-b022-c8586cb56f2b
        IP: 10.13.10.147
        Local Storage Pool: 3 pools
        POOL    IO_PRIORITY     RAID_LEVEL      USABLE  USED    STATUS  ZONE    REGION
        0       HIGH            raid0           86 GiB  35 MiB  Online  default default
        1       HIGH            raid0           86 GiB  35 MiB  Online  default default
        2       HIGH            raid0           23 GiB  35 MiB  Online  default default
        Local Storage Devices: 6 devices
        Device  Path            Media Type              Size            Last-Scan
        0:0     /dev/sdg        STORAGE_MEDIUM_SSD      100 GiB         21 Feb 24 09:21 UTC
        1:0     /dev/sdh        STORAGE_MEDIUM_SSD      50 GiB          21 Feb 24 09:21 UTC
        1:1     /dev/sdi        STORAGE_MEDIUM_SSD      50 GiB          21 Feb 24 09:21 UTC
        2:0     /dev/sdj        STORAGE_MEDIUM_SSD      10 GiB          21 Feb 24 09:21 UTC
        2:1     /dev/sdl        STORAGE_MEDIUM_SSD      10 GiB          21 Feb 24 09:21 UTC
        2:2     /dev/sdk        STORAGE_MEDIUM_SSD      10 GiB          21 Feb 24 09:21 UTC
        total                   -                       230 GiB
        Cache Devices:
         * No cache devices
        Journal Device:
        1       /dev/sdf1       STORAGE_MEDIUM_SSD      3.0 GiB
        Metadata Device:
        1       /dev/sdf2       STORAGE_MEDIUM_SSD      61 GiB
Cluster Summary
        Cluster ID: local-px-int
        Cluster UUID: 5a1743e3-c35c-4d6e-9e87-9dc88a723606
        Scheduler: kubernetes
        Total Nodes: 3 node(s) with storage (3 online)
        IP              ID                                      SchedulerNodeName       Auth            StorageNode     Used    Capacity        Status  StorageStatus   Version         Kernel                  OS
        10.13.10.147    e24cc370-b7d3-497c-b022-c8586cb56f2b    lsundararajan-47-1      Disabled        Yes(PX-StoreV2) 106 MiB 195 GiB         Online  Up (This node)  4.0.0.0-d724bf0 5.15.0-67-generic       Ubuntu 22.04.2 LTS
        10.13.10.150    aee1284f-6eb0-47ed-acb7-7c50132a9e60    lsundararajan-47-3      Disabled        Yes(PX-StoreV2) 70 MiB  172 GiB         Online  Up              4.0.0.0-7aa9618 5.15.0-67-generic       Ubuntu 22.04.2 LTS
        10.13.10.59     97e01a1c-f53a-43ae-bb26-8145e96ee205    lsundararajan-47-2      Disabled        Yes(PX-StoreV2) 103 MiB 235 GiB         Online  Up              4.0.0.0-7aa9618 5.15.0-67-generic       Ubuntu 22.04.2 LTS
        Warnings:
                 WARNING: Cluster consists of nodes with different PX versions.
Global Storage Pool
        Total Used      :  279 MiB
        Total Capacity  :  602 GiB
root@lsundararajan-47-1:~# pxctl sv pool show
PX drive configuration:
Pool ID: 0
        Type:  PX-StoreV2
        UUID:  86fce51d-ea8f-4417-9a48-d69497aeeb88
        IO Priority:  HIGH
        Labels:  medium=STORAGE_MEDIUM_SSD,topology.portworx.io/datacenter=CNBU,kubernetes.io/os=linux,beta.kubernetes.io/arch=amd64,iopriority=HIGH,kubernetes.io/arch=amd64,topology.portworx.io/node=422dad16-4821-92dc-81fc-e1a6f8021d22,kubernetes.io/hostname=lsundararajan-47-1,topology.portworx.io/hypervisor=HostSystem-host-48746,beta.kubernetes.io/os=linux
        Size: 86 GiB
        MaxPoolSize:  15 TiB
        FlexPool:  false
        Status: Online
        Has metadata:  No
        Drives:
        0: /dev/sdg, Total size 100 GiB, Online
        Cache Drives:
        No Cache drives found in this pool
Pool ID: 1
        Type:  PX-StoreV2
        UUID:  95101824-5944-4cc8-94a5-5467c06e808f
        IO Priority:  HIGH
        Labels:  medium=STORAGE_MEDIUM_SSD,topology.portworx.io/hypervisor=HostSystem-host-48746,beta.kubernetes.io/os=linux,kubernetes.io/os=linux,topology.portworx.io/node=422dad16-4821-92dc-81fc-e1a6f8021d22,kubernetes.io/hostname=lsundararajan-47-1,beta.kubernetes.io/arch=amd64,kubernetes.io/arch=amd64,iopriority=HIGH,topology.portworx.io/datacenter=CNBU
        Size: 86 GiB
        MaxPoolSize:  15 TiB
        FlexPool:  false
        Status: Online
        Has metadata:  No
        Drives:
        0: /dev/sdh, Total size 50 GiB, Online
        1: /dev/sdi, Total size 50 GiB, Online
        Cache Drives:
        No Cache drives found in this pool
Pool ID: 2
        Type:  PX-StoreV2
        UUID:  17047f27-a400-451d-bc0c-103f35b96a60
        IO Priority:  HIGH
        Labels:  kubernetes.io/os=linux,kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,topology.portworx.io/hypervisor=HostSystem-host-48746,topology.portworx.io/node=422dad16-4821-92dc-81fc-e1a6f8021d22,topology.portworx.io/datacenter=CNBU,iopriority=HIGH,kubernetes.io/hostname=lsundararajan-47-1,medium=STORAGE_MEDIUM_SSD,beta.kubernetes.io/arch=amd64
        Size: 23 GiB
        MaxPoolSize:  45 TiB
        FlexPool:  false
        Status: Online
        Has metadata:  No
        Drives:
        0: /dev/sdj, Total size 10 GiB, Online
        1: /dev/sdl, Total size 10 GiB, Online
        2: /dev/sdk, Total size 10 GiB, Online
        Cache Drives:
        No Cache drives found in this pool
Journal Device:
        1: /dev/sdf1, STORAGE_MEDIUM_SSD
Metadata Device:
        1: /dev/sdf2, STORAGE_MEDIUM_SSD
root@lsundararajan-47-1:~#

Signed-off-by: Lakshmi Narasimhan Sundararajan <[email protected]>
@sulakshm sulakshm changed the title WIP add volume spec to drive resource PWX-35127: add volume spec to drive resource Feb 21, 2024
Lakshmi Narasimhan Sundararajan added 2 commits February 22, 2024 11:39
@sulakshm sulakshm merged commit e0039ba into libopenstorage:master Feb 23, 2024
3 checks passed