Clarify the Need for Admin Credentials in Ceph CSI CephFS #4962

Closed · emreberber opened this issue Nov 18, 2024 · 5 comments

emreberber commented Nov 18, 2024

The documentation states that admin credentials are required for provisioning new volumes in Ceph CSI CephFS, specifically mentioning the need for:

  • adminID: ID of an admin client
  • adminKey: Key of the admin client

However, the reason behind needing admin credentials is not explicitly stated. We need clarification on why a normal user cannot perform the same provisioning operations. Specifically, we need to understand:

1. What specific tasks require admin permissions?
   Is it related to creating subvolumes or managing metadata in CephFS?

2. What limitations exist for non-admin users?
   Are there specific permissions that a normal user lacks which prevent dynamic provisioning?

This clarification will help us understand the security and operational implications of using admin credentials and whether there's a workaround or alternative setup for non-admin users.

Please provide details or examples to illustrate why admin credentials are mandatory.
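
(For reference, adminID/adminKey are consumed from a Kubernetes Secret that the StorageClass references. A minimal sketch of such a Secret — the name, namespace, and user below are illustrative placeholders, not values from the docs:)

apiVersion: v1
kind: Secret
metadata:
  name: csi-cephfs-secret      # hypothetical; must match the secret names in the StorageClass
  namespace: csi-system        # hypothetical namespace
stringData:
  adminID: csi-cephfs          # any Ceph user with sufficient caps, not necessarily client.admin
  adminKey: <key from 'ceph auth get-key client.csi-cephfs'>   # placeholder, not a real key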

Madhu-1 (Collaborator) commented Nov 19, 2024

adminID and adminKey are just the parameter names; the user they carry does not have to be the admin user. I have opened #4935 to rename them to userId and userKey. The required caps for CephFS are documented at https://github.com/ceph/ceph-csi/blob/devel/docs/capabilities.md.
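
For illustration, a restricted (non-admin) provisioner user could be created along these lines — a sketch in which the user name client.cephfs-provisioner, the filesystem name cephfs, and the subvolume group path /volumes/csi are assumed example values:

# Caps scoped to a single filesystem; no client.admin involved.
# The resulting ID and key then go into the adminID/adminKey fields
# of the CSI secret, despite the parameter names.
ceph auth get-or-create client.cephfs-provisioner \
  mgr "allow rw" \
  osd "allow rw tag cephfs metadata=cephfs, allow rw tag cephfs data=cephfs" \
  mds "allow r path=/volumes, allow rws path=/volumes/csi" \
  mon "allow r"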

emreberber (Author) commented Nov 19, 2024

Thanks for your comment 🙏🏻

We created the account according to the instructions in this document, but we are getting the following error:

Warning  ProvisioningFailed    6s (x5 over 14s)  cephfs.csi.ceph.com_ceph-csi-cephfs-provisioner-x  failed to provision volume with StorageClass "csi-cephfs-sc": rpc error: code = Internal desc = rados: ret=-1, Operation not permitted
Environment: Ceph Nautilus, Kubernetes 1.26, Ceph CSI 3.11.0
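
(One way to narrow down an EPERM like this, as a sketch — assuming the provisioner user is client.csi-cephfs and the filesystem's metadata pool is cephfs_metadata:)

# Try listing objects in the metadata pool, where ceph-csi keeps its
# omap bookkeeping; "Operation not permitted" here would point at
# missing OSD caps for that pool.
rados -p cephfs_metadata --id csi-cephfs ls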

Madhu-1 (Collaborator) commented Nov 19, 2024

@emreberber, have you specified the right filesystem and CSI group name as per https://github.com/ceph/ceph-csi/blob/devel/docs/capabilities.md#create-user-for-cephfs? Can you please paste the ceph auth output of the user and the StorageClass you are using?

emreberber (Author) commented Nov 19, 2024

ceph auth get-or-create client.csi-cephfs \
  mgr "allow rw" \
  osd "allow rw tag cephfs metadata=cephfs, allow rw tag cephfs data=cephfs" \
  mds "allow r path=/volumes, allow rws path=/volumes/csi-test" \
  mon "allow r"

StorageClass

allowVolumeExpansion: true
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  annotations:
    meta.helm.sh/release-name: ceph-csi-cephfs
    meta.helm.sh/release-namespace: csi-system
  labels:
    app: ceph-csi-cephfs
    app.kubernetes.io/managed-by: Helm
    heritage: Helm
    release: ceph-csi-cephfs
  name: csi-cephfs-sc
parameters:
  clusterID: 61
  csi.storage.k8s.io/controller-expand-secret-name: csi-cephfs-secret
  csi.storage.k8s.io/controller-expand-secret-namespace: csi-system
  csi.storage.k8s.io/node-stage-secret-name: csi-cephfs-secret
  csi.storage.k8s.io/node-stage-secret-namespace: csi-system
  csi.storage.k8s.io/provisioner-secret-name: csi-cephfs-secret
  csi.storage.k8s.io/provisioner-secret-namespace: csi-system
  fsName: cephfs
provisioner: cephfs.csi.ceph.com
reclaimPolicy: Delete
volumeBindingMode: Immediate
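
(For completeness, the clusterID in the StorageClass must match an entry in the ceph-csi config map — a sketch of the corresponding ConfigMap, assuming the default name ceph-csi-config and with placeholder monitor addresses:)

apiVersion: v1
kind: ConfigMap
metadata:
  name: ceph-csi-config
data:
  config.json: |-
    [
      {
        "clusterID": "61",
        "monitors": ["<mon-a>:6789", "<mon-b>:6789"]
      }
    ]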

emreberber (Author) commented Nov 20, 2024

The issue was caused by the application metadata of the CephFS pools: the OSD caps we created ("allow rw tag cephfs metadata=cephfs, allow rw tag cephfs data=cephfs") only match pools whose application metadata actually contains those key/value pairs, and our pools had the cephfs application enabled without the metadata/data keys. We identified and resolved it as follows; I'll document the commands here in case the problem recurs for someone else.

# ceph osd pool ls detail --format=json | jq '.[] | select(.pool_name| startswith("cephfs")) | .pool_name, .application_metadata'
"cephfs_data"
{
  "cephfs": {}
}
"cephfs_metadata"
{
  "cephfs": {}
}

# ceph osd pool application set cephfs_data cephfs data cephfs
set application 'cephfs' key 'data' to 'cephfs' on pool 'cephfs_data'

# ceph osd pool application set cephfs_metadata cephfs metadata cephfs
set application 'cephfs' key 'metadata' to 'cephfs' on pool 'cephfs_metadata'

# ceph osd pool ls detail --format=json | jq '.[] | select(.pool_name| startswith("cephfs")) | .pool_name, .application_metadata'
"cephfs_data"
{
  "cephfs": {
    "data": "cephfs"
  }
}
"cephfs_metadata"
{
  "cephfs": {
    "metadata": "cephfs"
  }
}
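
(As a quick end-to-end check after tagging the pools, a PVC against the same StorageClass should now provision — a sketch with an arbitrary claim name and size:)

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: cephfs-test-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
  storageClassName: csi-cephfs-sc     # the StorageClass from this thread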

Thank you for your assistance with this! @Madhu-1
