Description
To support #1211, we need to adapt the current module deletion logic in the declarative library to wait until all managed resources are deleted before deleting the remaining module resources and the Manifest itself.
We should rely on the newly introduced associatedResources field in the Manifest spec: if the Manifest is marked for deletion, we list all resources defined in associatedResources in the SKR cluster and wait for their deletion before proceeding.
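A minimal sketch of what this check could look like with controller-runtime, assuming associatedResources carries a list of GroupVersionKinds; the helper name and signature are hypothetical, not the actual lifecycle-manager API:

```go
package declarative

import (
	"context"

	"k8s.io/apimachinery/pkg/api/meta"
	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// associatedResourcesDeleted lists every GVK named in the Manifest's
// associatedResources field on the SKR cluster and reports whether all
// instances (CRs) are gone. Hypothetical helper, not the real API.
func associatedResourcesDeleted(ctx context.Context, skrClient client.Client,
	associatedResources []schema.GroupVersionKind,
) (bool, error) {
	for _, gvk := range associatedResources {
		crList := &unstructured.UnstructuredList{}
		crList.SetGroupVersionKind(gvk.GroupVersion().WithKind(gvk.Kind + "List"))
		if err := skrClient.List(ctx, crList); err != nil {
			if meta.IsNoMatchError(err) {
				// The CRD itself is already gone, so no CRs can exist.
				continue
			}
			return false, err
		}
		if len(crList.Items) > 0 {
			// At least one CR still exists; deletion must keep blocking.
			return false, nil
		}
	}
	return true, nil
}
```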
Reasons
To support the new blocking deletion mode and unmanaged modules
Acceptance Criteria
Implement changes required in the declarative reconciler
If the module is marked for deletion, wait for the deletion of the associatedResources before deleting the module resources and the Manifest. That is, for all CRDs listed in associatedResources, check that no CRs exist for those CRDs; the CRDs themselves may still exist (see the sketch after this list).
As long as CRs still exist, the Manifest CR must remain in "Deleting" State
Implement E2E tests
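Continuing the sketch above, the deletion branch of the reconciler could use such a check roughly as follows; Reconciler, Manifest, StateDeleting, the cleanup helper, and the finalizer name are all assumed stand-ins, not the actual declarative library types:

```go
// handleDeletion is a hypothetical deletion branch: while associated CRs
// remain, the Manifest stays in "Deleting" and we requeue; once they are
// gone, the remaining module resources and the finalizer are removed.
func (r *Reconciler) handleDeletion(ctx context.Context, manifest *Manifest) (ctrl.Result, error) {
	done, err := associatedResourcesDeleted(ctx, r.skrClient, manifest.Spec.AssociatedResources)
	if err != nil {
		return ctrl.Result{}, err
	}
	if !done {
		manifest.Status.State = StateDeleting // block until all CRs are deleted
		if err := r.Status().Update(ctx, manifest); err != nil {
			return ctrl.Result{}, err
		}
		return ctrl.Result{RequeueAfter: defaultRequeueInterval}, nil
	}
	// All associated CRs are gone: delete the other module resources first,
	// then drop the finalizer so the Manifest itself can be garbage collected.
	if err := r.deleteModuleResources(ctx, manifest); err != nil {
		return ctrl.Result{}, err
	}
	controllerutil.RemoveFinalizer(manifest, declarativeFinalizer)
	return ctrl.Result{}, r.Update(ctx, manifest)
}
```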
Feature Testing
End-to-End tests, Unit tests, Integration tests
Testing approach
E2E Test:
Given SKR Cluster
When Kyma Module is enabled on SKR Kyma CR with Managed set to true
Then SKR Kyma CR is in Ready State
And Manifest CR is in Ready State
When Kyma Module is disabled from Kyma CR
Then all module resources from the synced field in the Manifest are still in the SKR Cluster
And Manifest CR is in a "Deleting" State
When associatedResources are manually deleted from SKR Cluster
Then all other module resources are removed from SKR Cluster
And Manifest is removed from KCP Cluster
And Module is no longer in Kyma CR Status
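The scenario above could be skeletonised with Ginkgo/Gomega roughly as follows; all helper functions (EnableModule, ManifestState, and so on), the clients, and the timeouts are placeholders for whatever the E2E suite actually provides:

```go
var _ = Describe("Blocking deletion of a managed module", Ordered, func() {
	It("brings Kyma CR and Manifest CR to Ready after enabling the module", func() {
		Expect(EnableModule(ctx, skrClient, moduleName)).To(Succeed())
		Eventually(KymaState).WithArguments(ctx, skrClient).
			WithTimeout(timeout).WithPolling(interval).Should(Equal("Ready"))
		Eventually(ManifestState).WithArguments(ctx, kcpClient, moduleName).
			WithTimeout(timeout).WithPolling(interval).Should(Equal("Ready"))
	})

	It("keeps the Manifest CR in Deleting while associated resources exist", func() {
		Expect(DisableModule(ctx, skrClient, moduleName)).To(Succeed())
		Eventually(ManifestState).WithArguments(ctx, kcpClient, moduleName).
			WithTimeout(timeout).WithPolling(interval).Should(Equal("Deleting"))
		// The module resources listed in the Manifest's synced field must
		// still be present in the SKR cluster at this point.
		Consistently(SyncedResourcesExist).WithArguments(ctx, skrClient, moduleName).
			WithTimeout(timeout).WithPolling(interval).Should(BeTrue())
	})

	It("finishes deletion once the associated resources are removed", func() {
		Expect(DeleteAssociatedResources(ctx, skrClient, moduleName)).To(Succeed())
		Eventually(ManifestExists).WithArguments(ctx, kcpClient, moduleName).
			WithTimeout(timeout).WithPolling(interval).Should(BeFalse())
		Eventually(ModuleInKymaStatus).WithArguments(ctx, skrClient, moduleName).
			WithTimeout(timeout).WithPolling(interval).Should(BeFalse())
	})
})
```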
We should rely on the newly introduced managedResources field in the Manifest spec: if the Manifest is marked for deletion, we list all resources defined in managedResources in the SKR cluster and wait for their deletion before proceeding.
@kyma-project/jellyfish Sorry to open this up again... but do we have any written statement that the managedResources across all modules are mutually exclusive? Going through kyma-project/community#870 and https://raw.githubusercontent.com/kyma-project/community-modules/main/model.json didn't make this obvious to me. Just wanting to be sure, since if they are not mutually exclusive, this would not be considered in the blocking deletion implementation.
Decision: we take it as a given that the list is mutually exclusive.