Feature - Make credentials definable per Persistent Volume #136
Comments
Thank you for the request. Finer-grained access control is something we are looking at, and we have another similar feature request open here: #111. We will share any updates here as we have them.
I would like to make a suggestion... What if you have multiple daemonsets defined, one per bucket? I think this may work, however, I have not tested it. @jjkr, any opinion on this option or workaround?
Currently we just use the global secret name to pull the short-lived credentials. Code changes would be needed to define a different secret name for each mount. I'm not seeing a potential workaround with multiple daemonsets, but please let us know if you have something viable there. We hope to implement this feature soon, but don't have specific timelines to share. Please give the issue a thumbs up if it is important to you; we do take that into account for prioritization.
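For context, the global secret mentioned above looks roughly like the sketch below. The `aws-secret` name and the `key_id`/`access_key` keys follow my reading of the driver's long-term credential install instructions; exact names may differ between versions.

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: aws-secret        # cluster-wide secret read by the CSI driver
  namespace: kube-system  # namespace where the driver pods run
stringData:
  key_id: AKIAEXAMPLEKEYID              # access key ID shared by every mount
  access_key: exampleSecretAccessKey    # secret access key shared by every mount
```

Every mount performed by the driver ends up using these same credentials, which is exactly the limitation this issue is about.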
Secrets are bound per namespace. Perhaps a workaround would be one bucket per namespace; not practical, but I think it could work...
This is a much needed feature for centralized EKS deployments to mount decentralized S3 buckets as pod volumes. Is there any work being done on this front yet?
Same here: we manage many apps within the same clusters, with many S3 buckets, and having to grant permissions to all the buckets to a central role rather than using the application's own app-role is a hard sell.
This would be very useful to us for use with on-prem, non-AWS S3 implementations. @bartdenotte any chance you could make a PR based on the POC you already made?
So if I understand the use case here, as compared to pod-level identity in #111, you want to be able to grant permissions to volumes rather than to pods because it's more granular. Is it also important that the credentials are defined by cluster administrators, as compared to the pod-level approach? Am I missing any other use cases here?
Mountpoint CSI Driver v1.9.0 added support for Pod-level credentials. Please let us know if there are use cases that aren't covered by Pod-level credentials and still require Volume-level credentials.
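For anyone finding this later, a rough sketch of what Pod-level credentials can look like: a PV that opts into the mounting pod's identity, plus a service account carrying an IRSA role scoped to a single bucket. The `authenticationSource: pod` attribute and the role annotation are based on my reading of the driver's documentation for v1.9.0+, and all names below are made up; please check the current docs before relying on this.

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: tenant-a-pv
spec:
  capacity:
    storage: 1200Gi
  accessModes:
    - ReadWriteMany
  csi:
    driver: s3.csi.aws.com
    volumeHandle: tenant-a-volume
    volumeAttributes:
      bucketName: tenant-a-bucket
      authenticationSource: pod   # use the mounting pod's identity instead of the driver's
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tenant-a-app
  namespace: tenant-a
  annotations:
    # IRSA role whose IAM policy covers tenant-a-bucket only
    eks.amazonaws.com/role-arn: arn:aws:iam::111122223333:role/tenant-a-s3-role
```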
/feature
Is your feature request related to a problem? Please describe.
The Mountpoint CSI driver handles all Mountpoint S3 PVs within an EKS cluster.
When data is segmented due to security concerns, using different buckets for multi-tenancy or cross-account buckets (e.g. BYOB), the current setup requires the CSI driver to be able to access all locations. The IAM policy therefore has to grant access to all resources, and the CSI driver operates with more access than required (cf. the least-privilege principle) when handling specific PV requests.
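To make this concrete, a typical static-provisioning PV for this driver looks roughly like the sketch below (all names are hypothetical). Every such PV in the cluster is mounted with the driver's single identity, so that identity's IAM policy has to cover every bucket.

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: tenant-a-pv
spec:
  capacity:
    storage: 1200Gi
  accessModes:
    - ReadWriteMany
  mountOptions:
    - region eu-west-1
  csi:
    driver: s3.csi.aws.com          # all PVs share this driver's credentials
    volumeHandle: tenant-a-volume
    volumeAttributes:
      bucketName: tenant-a-bucket   # the driver's identity must be allowed to access every such bucket
```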
Describe the solution you'd like in detail
Allow credentials (including temporary AWS credentials) to be defined per PV, or a reference to a namespace/secret per PV.
This allows an IAM policy to be defined per PV, minimising the authorisations in line with the least-privilege principle.
These credentials will be read and propagated to the mount-s3 command.
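One possible shape for this, sketched below, is the standard CSI `nodePublishSecretRef` field on the PV. The field exists in the Kubernetes API, but the driver honouring it is purely hypothetical here, and the secret name and namespace are made up for illustration.

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: tenant-b-pv
spec:
  capacity:
    storage: 1200Gi
  accessModes:
    - ReadWriteMany
  csi:
    driver: s3.csi.aws.com
    volumeHandle: tenant-b-volume
    volumeAttributes:
      bucketName: tenant-b-bucket
    nodePublishSecretRef:         # hypothetical: per-PV credentials instead of the global secret
      name: tenant-b-s3-creds     # secret scoped to this bucket only
      namespace: tenant-b
```

The driver would then read the referenced secret when mounting this volume and pass the credentials to the mount-s3 process for that volume only.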
Describe alternatives you've considered
None
Additional context
I did a small POC where I modified the CSI driver.