Feature - Make credentials definable per Persistent Volume #136

Open
bartdenotte opened this issue Jan 29, 2024 · 9 comments

Labels: enhancement (New feature or request)


bartdenotte commented Jan 29, 2024

/feature

Is your feature request related to a problem? Please describe.
The Mountpoint CSI driver handles all Mountpoint S3 PVs within an EKS cluster.
When data is segmented for security reasons, using different buckets for multi-tenancy or cross-account buckets (e.g. BYOB), the current setup requires the CSI driver to be able to access all locations. This means the IAM policy must grant global access to all resources, and the CSI driver operates with more access than required (cf. the principle of least privilege) when handling a specific PV request.

Describe the solution you'd like in detail
Allow credentials (including temporary AWS credentials) to be defined per PV, or a namespace/secret reference per PV.
This makes it possible to define an IAM policy per PV and keep authorisations minimal, in line with the least-privilege principle.
These credentials would be read and propagated to the mount-s3 command.
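
For illustration, a minimal sketch of the secret-reference variant, assuming the driver honored the standard CSI nodePublishSecretRef field (it does not today, so the placement and names below are assumptions, not the driver's current API):

```yaml
# Hypothetical: per-PV credentials via the standard CSI secret reference.
# The Mountpoint CSI driver does not currently read this field.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: s3-pv-tenant-a
spec:
  capacity:
    storage: 1200Gi # ignored by the driver, but required by Kubernetes
  accessModes:
    - ReadWriteMany
  csi:
    driver: s3.csi.aws.com
    volumeHandle: s3-csi-driver-volume-tenant-a
    volumeAttributes:
      bucketName: tenant-a-bucket
    nodePublishSecretRef:       # standard CSI field for per-volume secrets
      name: tenant-a-s3-creds   # Secret holding this PV's AWS credentials
      namespace: tenant-a
```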

Describe alternatives you've considered
None

Additional context
I did a small POC where I modified the CSI driver (see the sketch below):

  • Added additional attributes (awsAccessKeyId, awsSecretAccessKey, awsSessionToken) in spec.csi.volumeAttributes on the PV definition
  • Read those additional attributes in pkg/driver/node.go and propagated them to the mount function (cf. d.Mounter.Mount) defined in pkg/driver/mount.go
  • When defined, these credentials are added to the env []string. Due to the AWS client's credential-resolution order, the credentials from the PV take precedence over the IRSA role attached to the CSI DaemonSet via its service account, and over the env vars defined on the CSI DaemonSet
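
A sketch of the PV shape this POC implies; the three credential attributes are the POC's own and are not part of the upstream driver (note also that volumeAttributes are stored in plaintext on the PV object):

```yaml
# POC-only sketch: AWS credentials passed as extra volumeAttributes.
# These attribute names exist only in the modified driver described above.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: s3-pv-poc
spec:
  capacity:
    storage: 1200Gi
  accessModes:
    - ReadWriteMany
  csi:
    driver: s3.csi.aws.com
    volumeHandle: s3-csi-driver-volume-poc
    volumeAttributes:
      bucketName: tenant-a-bucket
      awsAccessKeyId: AKIAEXAMPLE        # caution: volumeAttributes are
      awsSecretAccessKey: "<secret>"     # plaintext and visible to anyone
      awsSessionToken: "<token>"         # who can read PV objects
```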
jjkr (Contributor) commented Jan 30, 2024

Thank you for the request. Finer grained access control is something we are looking at, and we have another similar feature request open here: #111. We will share any updates here as we have them.


psavva commented Feb 29, 2024

I would like to make a suggestion...

What if you have multiple DaemonSets defined, one per bucket?
i.e. instead of hardcoding the secret to aws-secret, use aws-secret-<bucket-name> or similar, and deploy multiple DaemonSets, one for each bucket (a sketch follows below).

I think this may work, however, I have not tested it. @jjkr, any opinion on this option or workaround?
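
What this seems to describe, sketched as the credential env of one per-bucket DaemonSet copy (the key_id/access_key key names follow the driver's install docs; the rest is an untested assumption, and note jjkr's caveat below about multiple DaemonSets):

```yaml
# Untested sketch of the idea above: each DaemonSet copy points its
# credential env vars at a different per-bucket secret.
env:
  - name: AWS_ACCESS_KEY_ID
    valueFrom:
      secretKeyRef:
        name: aws-secret-tenant-a-bucket   # per-bucket secret name
        key: key_id
  - name: AWS_SECRET_ACCESS_KEY
    valueFrom:
      secretKeyRef:
        name: aws-secret-tenant-a-bucket
        key: access_key
```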

jjkr (Contributor) commented Apr 15, 2024

Currently we just use the global secret name to pull the short-lived credentials. Code changes would be needed to define a different secret name for each mount. I'm not seeing a potential workaround with multiple DaemonSets, but please let us know if you have something viable there.
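
For context, the single global secret in question looks roughly like this today (a sketch; the key names and the kube-system namespace follow the driver's install docs and Helm chart defaults, so treat them as assumptions):

```yaml
# Sketch of the one global secret the driver currently reads.
apiVersion: v1
kind: Secret
metadata:
  name: aws-secret        # default name expected by the driver
  namespace: kube-system
stringData:
  key_id: AKIAEXAMPLE     # becomes AWS_ACCESS_KEY_ID
  access_key: "<secret>"  # becomes AWS_SECRET_ACCESS_KEY
```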

We hope to implement this feature soon, but don't have specific timelines to share. Please give the issue a thumbs up if it is important to you; we do take that into account for prioritization.


psavva commented Apr 15, 2024

Secrets are bound to a namespace. Perhaps a workaround would be one bucket per namespace. Not practical, but I think it could work...


tonedefdev commented May 31, 2024

This is a much-needed feature for centralized EKS deployments that mount decentralized S3 buckets as pod volumes. Is there any work being done on this yet?


equusit commented Jun 24, 2024

> This is a much-needed feature for centralized EKS deployments that mount decentralized S3 buckets as pod volumes. Is there any work being done on this yet?

Same here. We manage many apps within the same clusters, with many S3 buckets, and having to grant permissions to all the buckets to a central role rather than using each application's own app-role is a hard sell.

markleary commented

This would be very useful to us with on-prem, non-AWS S3 implementations. @bartdenotte, any chance you could open a PR based on the POC you already made?

muddyfish (Contributor) commented

So if I understand the use case here, as compared to pod-level identity in #111, you want to be able to grant permissions to volumes rather than to pods, because it's more granular.

Is it also important that the credentials are defined by cluster administrators, as opposed to the pod-level approach?

Am I missing any other use cases here?

unexge (Contributor) commented Oct 24, 2024

Mountpoint CSI Driver v1.9.0 added support for Pod-level credentials.

Please let us know if there are use cases that aren't covered by Pod-level credentials and still require Volume-level credentials.
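
For readers landing here, a minimal sketch of opting a volume into Pod-level credentials (based on the driver's documentation for v1.9.0; verify the attribute names against the current docs):

```yaml
# Pod-level credentials (driver v1.9.0+): the volume opts in via
# authenticationSource, and credentials come from the consuming pod's
# service account (e.g. via IRSA) instead of the driver's own identity.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: s3-pv
spec:
  capacity:
    storage: 1200Gi
  accessModes:
    - ReadWriteMany
  csi:
    driver: s3.csi.aws.com
    volumeHandle: s3-csi-driver-volume
    volumeAttributes:
      bucketName: tenant-a-bucket
      authenticationSource: pod   # use the consuming pod's credentials
```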
