NFS storage concept #842
Conversation
Skipping CI for Draft Pull Request.
Suggestions for the NFS concept
I quickly tested Gardener's reconciliation behavior by placing an Azure storage account in a VNet, and the standard reconciliation of clusters works without any errors.
This gives us a prerequisite: we need to clean up the storage account and the private endpoint in the shoot network before we delete the cluster. If we agree to this, we can merge this PR.
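The cleanup ordering discussed here (remove the storage account and the private endpoint in the shoot network before the shoot itself is deleted) maps naturally onto a finalizer on the GardenerCluster resource. Below is a minimal sketch, assuming a controller-runtime based controller in the KCP; the GroupVersionKind, finalizer name, and `Cleanup` hook are illustrative assumptions, not the actual implementation.

```go
// Minimal sketch (not the actual KCP controller): a controller-runtime
// reconciler that watches GardenerCluster and blocks shoot deletion with a
// finalizer until the NFS storage instance and its private endpoint are gone.
package controller

import (
	"context"

	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	"k8s.io/apimachinery/pkg/runtime/schema"
	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/client"
	"sigs.k8s.io/controller-runtime/pkg/controller/controllerutil"
)

// Hypothetical finalizer name used only for this sketch.
const cleanupFinalizer = "example.kyma-project.io/nfs-storage-cleanup"

// Assumed GVK of the GardenerCluster CRD; adjust to the real group/version.
var gardenerClusterGVK = schema.GroupVersionKind{
	Group: "infrastructuremanager.kyma-project.io", Version: "v1", Kind: "GardenerCluster",
}

type StorageCleanupReconciler struct {
	client.Client
	// Cleanup removes the cloud-side storage account / Filestore instance and
	// the private endpoint created in the shoot network (hypothetical hook).
	Cleanup func(ctx context.Context, clusterName string) error
}

func (r *StorageCleanupReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
	cluster := &unstructured.Unstructured{}
	cluster.SetGroupVersionKind(gardenerClusterGVK)
	if err := r.Get(ctx, req.NamespacedName, cluster); err != nil {
		return ctrl.Result{}, client.IgnoreNotFound(err)
	}

	if cluster.GetDeletionTimestamp().IsZero() {
		// Cluster is alive: ensure the finalizer so deletion waits for cleanup.
		if controllerutil.AddFinalizer(cluster, cleanupFinalizer) {
			return ctrl.Result{}, r.Update(ctx, cluster)
		}
		return ctrl.Result{}, nil
	}

	// Cluster is being deleted: remove storage resources first, then the finalizer.
	if controllerutil.ContainsFinalizer(cluster, cleanupFinalizer) {
		if err := r.Cleanup(ctx, cluster.GetName()); err != nil {
			return ctrl.Result{}, err // requeue until the cloud resources are gone
		}
		controllerutil.RemoveFinalizer(cluster, cleanupFinalizer)
		if err := r.Update(ctx, cluster); err != nil {
			return ctrl.Result{}, err
		}
	}
	return ctrl.Result{}, nil
}
```

With this pattern, cluster deletion is held back until the cloud-side storage resources are confirmed gone, so the shoot network can be torn down cleanly afterwards.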
Yes - we can assume that our controllers will listen to the GardenerCluster resource, and we will delete all the resources created by our controllers. This is already mentioned in the document:
/lgtm
**Cons:**
- if clusters are deleted and created with different shoot name we can face [issues](https://cloud.google.com/filestore/docs/create-instance-issues#system_limit_for_internal_resources_has_been_reached_error_when_creating_an_instance) with private network quota for Filestore (it is not a common usage pattern)
- storage instance can block cluster deprovisioning (not a big deal as storage controller in the KCP should clean it up anyway)
Suggested change: replace
- storage instance can block cluster deprovisioning (not a big deal as storage controller in the KCP should clean it up anyway)
with
- storage instances and private endpoints in the shoot subnet can block cluster de-provisioning (not a big deal as storage controller in the KCP should clean them up anyway)
Description
Changes proposed in this pull request: