
Prepare a plan for migration from current GardenerCluster CR to new Kubeconfig CRs #187

Closed · 2 tasks · Tracked by #112 ...
Disper opened this issue Apr 22, 2024 · 4 comments

Disper commented Apr 22, 2024

Description
Right now, the GardenerCluster CR is a single Custom Resource used for kubeconfig management.
The current idea for the new architecture uses two Custom Resources:

  • One responsible for kubeconfig management (the current GardenerCluster CR, probably with a better name)
  • One responsible for containing the data used for provisioning a cluster (WIP name: ProvisioningRequest CR)

This task is about figuring out the best way to proceed.

Questions to consider:

  • Should we name the new version v2 or v2alpha?
  • How does kubebuilder support migration between different versions? (See the sketch below.)
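
On the kubebuilder question: a CRD can be served at several versions at once, with exactly one version marked as the storage version. Below is a minimal sketch of what that looks like in kubebuilder, assuming a hypothetical package layout, type name, and field set (not the real KIM API):

```go
// api/v1alpha1/gardenercluster_types.go (hypothetical layout)
package v1alpha1

import metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"

// GardenerClusterSpec is an assumed shape of the kubeconfig-management fields.
type GardenerClusterSpec struct {
	Shoot string `json:"shoot,omitempty"`
}

// GardenerCluster at v1alpha1 is the storage version in this sketch; a second
// version (v1beta1 or v2alpha1) would live in its own package, also be served,
// and be converted to/from this one by a conversion webhook.
// +kubebuilder:object:root=true
// +kubebuilder:storageversion
type GardenerCluster struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`
	Spec              GardenerClusterSpec `json:"spec,omitempty"`
}
```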

Attachments


Disper commented Apr 22, 2024

Idea 1 - GardenerCluster CR v1alpha1 responsible for Kubeconfig Management and v1beta1 responsible for provisioning


Pros

  • Naming would be quickly improved (the current GardenerCluster name is confusing)

Cons

  • Since custom resource objects need the ability to be served at both versions, that means they will sometimes be served in a different version than the one stored. To make this possible, the custom resource objects must sometimes be converted between the version they are stored at and the version they are served at. If the conversion involves schema changes and requires custom logic, a conversion webhook should be used.

    • It's not clear how a conversion webhook could convert back and forth between a v1alpha1 responsible for kubeconfig management and a v1beta1 responsible for provisioning (see the sketch after this list). But if it were possible, it could become a major benefit.
  • The current GardenerCluster CR is responsible for kubeconfig management. Having it as v1alpha1 and having a totally different set of functionalities (provisioning without kubeconfig management) in v2alpha1/v1alpha2 does not make sense.
  • Implementing changes in existing GardenerClusterController would be difficult.
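
For context on the conversion concern above: multi-version CRDs in kubebuilder/controller-runtime use a hub-and-spoke model, where the storage version is the hub and every other served version implements ConvertTo/ConvertFrom against it. A rough sketch of the spoke side, assuming hypothetical type and field names and module path; the fields that exist only in the provisioning-oriented version have nowhere to go in the hub, which is exactly where the back-and-forth conversion gets awkward:

```go
// api/v1beta1/gardenercluster_conversion.go (hypothetical layout)
package v1beta1

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/controller-runtime/pkg/conversion"

	v1alpha1 "example.com/infrastructure-manager/api/v1alpha1" // assumed module path
)

// GardenerClusterSpec at v1beta1 is an assumed shape that adds provisioning data.
type GardenerClusterSpec struct {
	Shoot            string `json:"shoot,omitempty"`
	ProvisioningData string `json:"provisioningData,omitempty"`
}

// +kubebuilder:object:root=true
type GardenerCluster struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`
	Spec              GardenerClusterSpec `json:"spec,omitempty"`
}

// ConvertTo converts this v1beta1 (spoke) object into the v1alpha1 hub.
// ProvisioningData has no counterpart in v1alpha1, so it would have to be
// stashed in an annotation or dropped - the lossy part of the round trip.
func (src *GardenerCluster) ConvertTo(dstRaw conversion.Hub) error {
	dst := dstRaw.(*v1alpha1.GardenerCluster)
	dst.ObjectMeta = src.ObjectMeta
	dst.Spec.Shoot = src.Spec.Shoot
	return nil
}

// ConvertFrom populates this v1beta1 (spoke) object from the v1alpha1 hub.
func (dst *GardenerCluster) ConvertFrom(srcRaw conversion.Hub) error {
	src := srcRaw.(*v1alpha1.GardenerCluster)
	dst.ObjectMeta = src.ObjectMeta
	dst.Spec.Shoot = src.Spec.Shoot
	return nil
}

// In the v1alpha1 package, the hub version only needs the marker method:
//
//	func (*GardenerCluster) Hub() {}
```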


Disper commented Apr 23, 2024

Idea 2 - New GardenerClusterProvisioningRequest CR

Draft of a workplan:

  1. Create GardenerClusterProvisioningRequest CR and implement provisioning functionalities in the corresponding controller.
  2. Instead of calling the Provisioner, KEB switches to creating GardenerClusterProvisioningRequest CRs. GardenerCluster CR creation can temporarily stay in KEB; see the plan below.

And the optional movement of GardenerCluster CR creation to KIM

  1. Create a GardenerClusterKubeconfig CR and a corresponding controller based on the existing GardenerCluster CR. This duplication is meant to ensure a smooth migration: until the CRs are migrated, two controllers would ensure that kubeconfigs are rotated (either the new GardenerClusterKubeconfig or the old GardenerCluster should exist).
  2. Move GardenerClusterKubeconfig creation to KIM
    1. KEB no longer creates GardenerCluster CRs
      a. A Ready status on the provisioning CR means that both the cluster and the kubeconfig secret have been created
    2. KIM starts to create GardenerClusterKubeconfig CRs
    3. Both steps should be done in a single PR to management-plane-config (switching different feature flags in both KIM and KEB)
  3. Old GardenerCluster CRs clean-up
    1. GardenerClusterController should no longer react to GardenerCluster CR removals (to avoid secret deletion during migration)
    2. Run a simple migration bash script that will create GardenerClusterKubeconfig CRs based on the GardenerCluster CRs and delete the old ones (see the sketch after this list).
    3. Delete GardenerClusterController and the GardenerCluster CRD
    4. Clean up orphaned secrets (if any) that could be left if a cluster was deleted after secrets removal was turned off in GardenerClusterController and before the migration was run. There is already an internal diagnostic tool that should make that easy.
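
The migration in step 3.2 is proposed as a bash script; purely as an illustration (and to keep one language across the sketches in this issue), here is a rough Go equivalent. The group/version/kind names, the namespace, and the blind spec copy are assumptions, not the actual API:

```go
// migrate_gardenerclusters.go - hypothetical one-shot migration helper.
package main

import (
	"context"
	"fmt"

	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"sigs.k8s.io/controller-runtime/pkg/client"
	"sigs.k8s.io/controller-runtime/pkg/client/config"
)

func main() {
	ctx := context.Background()

	c, err := client.New(config.GetConfigOrDie(), client.Options{})
	if err != nil {
		panic(err)
	}

	// Group/version/kind and namespace are assumptions for this sketch.
	oldListGVK := schema.GroupVersionKind{Group: "infrastructuremanager.kyma-project.io", Version: "v1", Kind: "GardenerClusterList"}
	newGVK := schema.GroupVersionKind{Group: "infrastructuremanager.kyma-project.io", Version: "v1", Kind: "GardenerClusterKubeconfig"}

	oldList := &unstructured.UnstructuredList{}
	oldList.SetGroupVersionKind(oldListGVK)
	if err := c.List(ctx, oldList, client.InNamespace("kcp-system")); err != nil {
		panic(err)
	}

	for i := range oldList.Items {
		old := oldList.Items[i]

		// Build the new CR, carrying over metadata and spec as-is.
		migrated := &unstructured.Unstructured{}
		migrated.SetGroupVersionKind(newGVK)
		migrated.SetName(old.GetName())
		migrated.SetNamespace(old.GetNamespace())
		migrated.SetLabels(old.GetLabels())
		migrated.SetAnnotations(old.GetAnnotations())
		if spec, found, _ := unstructured.NestedMap(old.Object, "spec"); found {
			_ = unstructured.SetNestedMap(migrated.Object, spec, "spec")
		}
		if err := c.Create(ctx, migrated); err != nil {
			panic(err)
		}

		// Delete the old CR only after the replacement exists. The controller
		// must already ignore these deletions so the kubeconfig secret survives.
		if err := c.Delete(ctx, &old); err != nil {
			panic(err)
		}
		fmt.Printf("migrated %s/%s\n", old.GetNamespace(), old.GetName())
	}
}
```

A kubectl loop in bash would do the same job; the important property either way is creating the new CR before deleting the old one.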

Pros

  • Renaming the GardenerCluster CR to a more suitable name is optional and can be done later (when moving this CR's creation logic to KIM). That means we can fully implement the provisioning functionalities while working on a single CR.

Cons

  • While renaming is optional, the name might be confusing until it is changed; however, we're close enough to our stakeholders to explain the difference.
  • While both GardenerClusterKubeconfig and GardenerCluster exist, any urgent bug fix would require changes in two places (rather unlikely, as the solution has been stable for a few months now).

Disper self-assigned this Apr 23, 2024
Disper changed the title from "Prepare a plan for migration from current GardenerCluster CR to new CRs" to "Prepare a plan for migration from current GardenerCluster CR to new Kubeconfig CRs" on Apr 24, 2024

akgalwas commented Apr 24, 2024

Idea 3 - Combination of Ideas 1 and 2

Draft of a workplan:

  1. Prepare GardenerCluster v2 CR and implement provisioning functionalities in the corresponding controller.
  2. Instead of calling the Provisioner, KEB switches to creating GardenerCluster v2 CRs. GardenerCluster CR creation can temporarily stay in KEB; see the plan below.

And the optional movement of GardenerCluster CR creation to KIM

  1. Create a GardenerClusterKubeconfig CR and a corresponding controller based on the existing GardenerCluster CR. This duplication is meant to ensure a smooth migration: until the CRs are migrated, two controllers would ensure that kubeconfigs are rotated (either the new GardenerClusterKubeconfig or the old GardenerCluster should exist).
  2. Move GardenerClusterKubeconfig creation to KIM
    1. KEB no longer creates GardenerCluster v1 CRs
    2. KIM starts to create GardenerClusterKubeconfig CRs
    3. Both steps should be done in a single PR to management-plane-config (switching different feature flags in both KIM and KEB)
  3. Old GardenerCluster CRs clean-up
    1. GardenerClusterController should no longer react to GardenerCluster v1 CR removals (to avoid secret deletion during migration)
    2. Run a simple migration bash script that will create GardenerClusterKubeconfig CRs based on the GardenerCluster v1 CRs and delete the old ones.
    3. Delete GardenerClusterController and the GardenerCluster v1 CRD
    4. Clean up orphaned secrets (if any) that could be left if a cluster was deleted after secrets removal was turned off in GardenerClusterController and before the migration was run. There is already an internal diagnostic tool that should make that easy.

Pros

  • Renaming of GardenerCluster CR is not needed

Cons

  • While both GardenerClusterKubeconfig and GardenerCluster v1 exist, any urgent bug fix would require changes in two places (rather unlikely, as the solution has been stable for a few months now).
  • There could still be some confusion, as at some point we will have two versions of the GardenerCluster resource.


Disper commented May 2, 2024

Closing this, as we have some ideas prepared. Idea 2 seems the most plausible at the moment, but that might change as we learn more during implementation.
