This repository has been archived by the owner on Mar 4, 2024. It is now read-only.

EVEREST-838 Workaround waiting for DBs to be deleted #311

Merged
merged 4 commits into main from EVEREST-838-fix-uninstall-command on Feb 21, 2024

Conversation

recharte
Collaborator

Workaround waiting for DBs to be deleted

Problem:
EVEREST-838

When deleting a DBC CR, the everest operator doesn't wait for the DB operator's CRs to be deleted. Thus, as soon as we delete the DBC CRs, these cease to exist in the cluster and the polling below will return immediately. If we don't wait for the DB operators to process the deletion of the CRs, we may end up deleting the namespaces before the DB operators have a chance to delete the resources they manage, leaving the namespaces in an endless Terminating state waiting for finalizers to be removed.
The everest operator should have a Deleting status that waits for the DB operators to delete their DB CRs before removing the corresponding DBC CR. Until this is implemented, we work around it by sleeping for two minutes, giving the DB operators a chance to delete the resources they manage before we delete the namespaces.
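
For illustration, here is a minimal sketch in Go of the uninstall flow this workaround describes. The `client` interface and helper names (`deleteAllDatabaseClusters`, `waitForDatabaseClustersGone`, `deleteNamespaces`) are hypothetical, not the actual everest CLI API; the point is the fixed two-minute sleep between the DBC CRs disappearing and the namespaces being deleted.

```go
// Hypothetical sketch of the uninstall flow described above; the helper
// names are illustrative, not the real everest CLI API.
package uninstall

import (
	"context"
	"time"
)

// client abstracts the Kubernetes operations the CLI performs; the method
// names here are made up for the sketch.
type client interface {
	deleteAllDatabaseClusters(ctx context.Context) error
	waitForDatabaseClustersGone(ctx context.Context) error
	deleteNamespaces(ctx context.Context) error
}

// dbOperatorCleanupDelay gives the DB operators time to clean up the
// resources they manage after their CRs are deleted. Workaround until the
// everest operator gets a proper Deleting status.
const dbOperatorCleanupDelay = 2 * time.Minute

func uninstall(ctx context.Context, c client) error {
	// Delete the DBC CRs in every managed namespace.
	if err := c.deleteAllDatabaseClusters(ctx); err != nil {
		return err
	}

	// The everest operator removes the DBC CRs without waiting for the DB
	// operators' own CRs, so this poll returns almost immediately.
	if err := c.waitForDatabaseClustersGone(ctx); err != nil {
		return err
	}

	// Workaround: give the DB operators a chance to finish their cleanup
	// before deleting the namespaces; otherwise the namespaces can get
	// stuck in Terminating, waiting on finalizers.
	time.Sleep(dbOperatorCleanupDelay)

	return c.deleteNamespaces(ctx)
}
```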

CHECKLIST

Jira

  • Is the Jira ticket created and referenced properly?

Tests

  • [ ] Is an Integration test/test case added for the new feature/change?
  • [ ] Are unit tests added where appropriate?

With the recent changes that install the monitoring stack by default, the install command now takes longer, so we increase the test timeout to 10 minutes to avoid intermittent CI failures.
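
As a rough illustration only (assuming, purely for the sketch, that the install test shells out to the CLI from a Go test; the actual cli-tests runner, binary name, and flags may differ), the timeout bump could look like this:

```go
package installtest

import (
	"context"
	"os/exec"
	"testing"
	"time"
)

// With the monitoring stack now installed by default, the install step takes
// longer; a 10-minute budget avoids intermittent CI timeouts.
const installTimeout = 10 * time.Minute

func TestInstall(t *testing.T) {
	ctx, cancel := context.WithTimeout(context.Background(), installTimeout)
	defer cancel()

	// Hypothetical invocation; the real tests may drive the binary with
	// different flags, kubeconfig, and namespaces.
	cmd := exec.CommandContext(ctx, "everestctl", "install")
	if out, err := cmd.CombinedOutput(); err != nil {
		t.Fatalf("install failed: %v\n%s", err, out)
	}
}
```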
@recharte recharte marked this pull request as ready for review February 20, 2024 21:43
@recharte recharte requested a review from a user February 20, 2024 21:43
@recharte recharte enabled auto-merge (squash) February 20, 2024 21:44
@recharte recharte merged commit 6aba595 into main Feb 21, 2024
10 checks passed
@recharte recharte deleted the EVEREST-838-fix-uninstall-command branch February 21, 2024 08:12
recharte added a commit that referenced this pull request Feb 21, 2024
* EVEREST-838 workaround waiting for DBs to be deleted

* EVEREST-838 Update everest-operator go mod

* EVEREST-838 increase tests timeout to 10 min

* EVEREST-838 fix cli-tests with new uninstall command