
feat(helm): update rook-ceph group ( v1.14.10 → v1.15.0 ) (minor) #201

Merged · 1 commit merged into main on Sep 3, 2024

Conversation

@renovate renovate bot commented Aug 22, 2024

This PR contains the following updates:

Package             Update   Change
rook-ceph           minor    v1.14.10 -> v1.15.0
rook-ceph-cluster   minor    v1.14.10 -> v1.15.0

Release Notes

rook/rook (rook-ceph)

v1.15.0

Compare Source

Upgrade Guide

To upgrade from previous versions of Rook, see the Rook upgrade guide.

Breaking Changes
  • Minimum version of Kubernetes supported is increased to K8s v1.26.
  • During CephBlockPool updates, Rook will now return an error if an invalid device class is specified. Pools with invalid device classes may start failing until the correct device class is specified. For more details, see #14057.
  • Rook has deprecated CSI network "holder" pods. If there are pods named csi-*plugin-holder-* in the Rook operator namespace, see the detailed documentation to disable them. This deprecation process will be required before upgrading to the future Rook v1.16.
  • Ceph COSI driver images have been updated. This impacts existing COSI Buckets, BucketClaims, and BucketAccesses. Update existing clusters following the guide here.
  • CephObjectStore, CephObjectStoreUser, and OBC endpoint behavior has changed when CephObjectStore spec.hosting configurations are set. Use the new spec.hosting.advertiseEndpoint config to define required behavior as documented.
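The endpoint-related breaking change above can be illustrated with a minimal CephObjectStore sketch. The field names under `spec.hosting` follow the Rook v1.15 CRD as documented, but the store name, namespace, and DNS name here are hypothetical, not values from this PR:

```yaml
apiVersion: ceph.rook.io/v1
kind: CephObjectStore
metadata:
  name: my-store            # hypothetical store name
  namespace: storage
spec:
  gateway:
    port: 80
    instances: 1
  hosting:
    # New in v1.15: explicitly advertise the endpoint that
    # CephObjectStoreUser secrets and OBCs should reference
    # whenever spec.hosting is configured.
    advertiseEndpoint:
      dnsName: s3.example.com   # hypothetical external DNS name
      port: 443
      useTls: true
```

With `spec.hosting` set, the advertised endpoint is no longer implied; pinning it via `advertiseEndpoint` makes the behavior handed to users and bucket claims explicit.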
Features
  • Added support for Ceph Squid (v19), in addition to Reef (v18) and Quincy (v17). Quincy support will be removed in Rook v1.16.
  • Updated the Ceph-CSI driver to v3.12, including new RBD options, log rotation support, and updated sidecar images.
  • Allow updating the device class of OSDs, if allowDeviceClassUpdate: true is set in the CephCluster CR.
  • Allow updating the weight of an OSD, if allowOsdCrushWeightUpdate: true is set in the CephCluster CR.
  • Use fully-qualified image names (docker.io/rook/ceph) in operator manifests and helm charts.
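As a sketch of the two new opt-in flags from the feature list, both sit under `spec.storage` in the CephCluster CR. The cluster name, namespace, and Squid image tag below are assumptions for illustration, not values from this PR:

```yaml
apiVersion: ceph.rook.io/v1
kind: CephCluster
metadata:
  name: rook-ceph
  namespace: storage        # hypothetical namespace
spec:
  cephVersion:
    image: quay.io/ceph/ceph:v19.2.0   # Squid (v19); exact tag is an assumption
  storage:
    allowDeviceClassUpdate: true       # permit changing an existing OSD's device class
    allowOsdCrushWeightUpdate: true    # permit updating an OSD's CRUSH weight
```

Without these flags, Rook keeps its previous behavior and leaves existing OSD device classes and CRUSH weights unchanged.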
Experimental Features
  • CephObjectStore support for keystone authentication for S3 and Swift. See the Object store documentation to configure.
  • CSI operator: CSI settings are moving to CRs managed by a new operator. Once enabled, Rook will convert the settings previously defined in the operator configmap or env vars into the new CRs managed by the CSI operator. There are two steps to enable:

Configuration

📅 Schedule: Branch creation - "on friday and saturday" in timezone Europe/Prague, Automerge - At any time (no schedule defined).

🚦 Automerge: Disabled by config. Please merge this manually once you are satisfied.

Rebasing: Whenever this PR becomes conflicted, or you tick the rebase/retry checkbox.

🔕 Ignore: Close this PR and you won't be reminded about these updates again.


  • If you want to rebase/retry this PR, check this box

This PR was generated by Mend Renovate. View the repository job log.


github-actions bot commented Aug 22, 2024

--- HelmRelease: storage/rook-ceph-operator ConfigMap: storage/rook-ceph-operator-config

+++ HelmRelease: storage/rook-ceph-operator ConfigMap: storage/rook-ceph-operator-config

@@ -25,21 +25,21 @@

   CSI_ENABLE_VOLUME_GROUP_SNAPSHOT: 'true'
   CSI_PLUGIN_PRIORITY_CLASSNAME: system-node-critical
   CSI_PROVISIONER_PRIORITY_CLASSNAME: system-cluster-critical
   CSI_RBD_FSGROUPPOLICY: File
   CSI_CEPHFS_FSGROUPPOLICY: File
   CSI_NFS_FSGROUPPOLICY: File
-  ROOK_CSI_CEPH_IMAGE: quay.io/cephcsi/cephcsi:v3.11.0
-  ROOK_CSI_REGISTRAR_IMAGE: registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.10.1
-  ROOK_CSI_PROVISIONER_IMAGE: registry.k8s.io/sig-storage/csi-provisioner:v4.0.1
-  ROOK_CSI_SNAPSHOTTER_IMAGE: registry.k8s.io/sig-storage/csi-snapshotter:v7.0.2
-  ROOK_CSI_ATTACHER_IMAGE: registry.k8s.io/sig-storage/csi-attacher:v4.5.1
-  ROOK_CSI_RESIZER_IMAGE: registry.k8s.io/sig-storage/csi-resizer:v1.10.1
+  ROOK_CSI_CEPH_IMAGE: quay.io/cephcsi/cephcsi:v3.12.0
+  ROOK_CSI_REGISTRAR_IMAGE: registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.11.1
+  ROOK_CSI_PROVISIONER_IMAGE: registry.k8s.io/sig-storage/csi-provisioner:v5.0.1
+  ROOK_CSI_SNAPSHOTTER_IMAGE: registry.k8s.io/sig-storage/csi-snapshotter:v8.0.1
+  ROOK_CSI_ATTACHER_IMAGE: registry.k8s.io/sig-storage/csi-attacher:v4.6.1
+  ROOK_CSI_RESIZER_IMAGE: registry.k8s.io/sig-storage/csi-resizer:v1.11.1
   ROOK_CSI_IMAGE_PULL_POLICY: IfNotPresent
   CSI_ENABLE_CSIADDONS: 'false'
-  ROOK_CSIADDONS_IMAGE: quay.io/csiaddons/k8s-sidecar:v0.8.0
+  ROOK_CSIADDONS_IMAGE: quay.io/csiaddons/k8s-sidecar:v0.9.0
   CSI_ENABLE_TOPOLOGY: 'false'
   ROOK_CSI_ENABLE_NFS: 'false'
   CSI_ENABLE_LIVENESS: 'true'
   CSI_FORCE_CEPHFS_KERNEL_CLIENT: 'true'
   CSI_GRPC_TIMEOUT_SECONDS: '150'
   CSI_PROVISIONER_REPLICAS: '2'
--- HelmRelease: storage/rook-ceph-operator ClusterRole: storage/rook-ceph-system

+++ HelmRelease: storage/rook-ceph-operator ClusterRole: storage/rook-ceph-system

@@ -39,7 +39,51 @@

 - apiGroups:
   - apiextensions.k8s.io
   resources:
   - customresourcedefinitions
   verbs:
   - get
+- apiGroups:
+  - csi.ceph.io
+  resources:
+  - cephconnections
+  verbs:
+  - create
+  - delete
+  - get
+  - list
+  - update
+  - watch
+- apiGroups:
+  - csi.ceph.io
+  resources:
+  - clientprofiles
+  verbs:
+  - create
+  - delete
+  - get
+  - list
+  - update
+  - watch
+- apiGroups:
+  - csi.ceph.io
+  resources:
+  - operatorconfigs
+  verbs:
+  - create
+  - delete
+  - get
+  - list
+  - update
+  - watch
+- apiGroups:
+  - csi.ceph.io
+  resources:
+  - drivers
+  verbs:
+  - create
+  - delete
+  - get
+  - list
+  - update
+  - watch
 
--- HelmRelease: storage/rook-ceph-operator ClusterRole: storage/cephfs-csi-nodeplugin

+++ HelmRelease: storage/rook-ceph-operator ClusterRole: storage/cephfs-csi-nodeplugin

@@ -7,7 +7,31 @@

 - apiGroups:
   - ''
   resources:
   - nodes
   verbs:
   - get
+- apiGroups:
+  - ''
+  resources:
+  - secrets
+  verbs:
+  - get
+- apiGroups:
+  - ''
+  resources:
+  - configmaps
+  verbs:
+  - get
+- apiGroups:
+  - ''
+  resources:
+  - serviceaccounts
+  verbs:
+  - get
+- apiGroups:
+  - ''
+  resources:
+  - serviceaccounts/token
+  verbs:
+  - create
 
--- HelmRelease: storage/rook-ceph-operator ClusterRole: storage/cephfs-external-provisioner-runner

+++ HelmRelease: storage/rook-ceph-operator ClusterRole: storage/cephfs-external-provisioner-runner

@@ -11,13 +11,27 @@

   verbs:
   - get
   - list
 - apiGroups:
   - ''
   resources:
+  - configmaps
+  verbs:
+  - get
+- apiGroups:
+  - ''
+  resources:
   - nodes
+  verbs:
+  - get
+  - list
+  - watch
+- apiGroups:
+  - storage.k8s.io
+  resources:
+  - csinodes
   verbs:
   - get
   - list
   - watch
 - apiGroups:
   - ''
@@ -139,7 +153,19 @@

   - groupsnapshot.storage.k8s.io
   resources:
   - volumegroupsnapshotcontents/status
   verbs:
   - update
   - patch
+- apiGroups:
+  - ''
+  resources:
+  - serviceaccounts
+  verbs:
+  - get
+- apiGroups:
+  - ''
+  resources:
+  - serviceaccounts/token
+  verbs:
+  - create
 
--- HelmRelease: storage/rook-ceph-operator ClusterRole: storage/rbd-external-provisioner-runner

+++ HelmRelease: storage/rook-ceph-operator ClusterRole: storage/rbd-external-provisioner-runner

@@ -173,15 +173,7 @@

   resources:
   - nodes
   verbs:
   - get
   - list
   - watch
-- apiGroups:
-  - storage.k8s.io
-  resources:
-  - csinodes
-  verbs:
-  - get
-  - list
-  - watch
 
--- HelmRelease: storage/rook-ceph-operator Deployment: storage/rook-ceph-operator

+++ HelmRelease: storage/rook-ceph-operator Deployment: storage/rook-ceph-operator

@@ -26,13 +26,13 @@

       - effect: NoExecute
         key: node.kubernetes.io/unreachable
         operator: Exists
         tolerationSeconds: 5
       containers:
       - name: rook-ceph-operator
-        image: rook/ceph:v1.14.10
+        image: docker.io/rook/ceph:v1.15.0
         imagePullPolicy: IfNotPresent
         args:
         - ceph
         - operator
         securityContext:
           capabilities:


github-actions bot commented Aug 22, 2024

--- kubernetes/main/apps/storage/rook-ceph/app Kustomization: flux-system/rook-ceph HelmRelease: storage/rook-ceph-operator

+++ kubernetes/main/apps/storage/rook-ceph/app Kustomization: flux-system/rook-ceph HelmRelease: storage/rook-ceph-operator

@@ -13,13 +13,13 @@

     spec:
       chart: rook-ceph
       sourceRef:
         kind: HelmRepository
         name: rook-ceph
         namespace: flux-system
-      version: v1.14.10
+      version: v1.15.0
   dependsOn:
   - name: snapshot-controller
   install:
     remediation:
       retries: 3
   interval: 30m
--- kubernetes/main/apps/storage/rook-ceph/cluster Kustomization: flux-system/rook-ceph-cluster HelmRelease: storage/rook-ceph-cluster

+++ kubernetes/main/apps/storage/rook-ceph/cluster Kustomization: flux-system/rook-ceph-cluster HelmRelease: storage/rook-ceph-cluster

@@ -13,13 +13,13 @@

     spec:
       chart: rook-ceph-cluster
       sourceRef:
         kind: HelmRepository
         name: rook-ceph
         namespace: flux-system
-      version: v1.14.10
+      version: v1.15.0
   dependsOn:
   - name: multus
     namespace: kube-system
   - name: rook-ceph-operator
     namespace: storage
   install:

@renovate renovate bot changed the title feat(helm): update rook-ceph group ( v1.14.9 → v1.15.0 ) (minor) feat(helm): update rook-ceph group ( v1.14.10 → v1.15.0 ) (minor) Sep 3, 2024
@renovate renovate bot force-pushed the renovate/main-rook-ceph branch from 036e3a8 to 8e65811 on September 3, 2024 16:52
@prehor prehor merged commit 795a977 into main Sep 3, 2024
9 checks passed
@renovate renovate bot deleted the renovate/main-rook-ceph branch September 3, 2024 16:54