Future API group conflict with upstream CAPI #1780

Open
schlakob opened this issue Apr 23, 2024 · 5 comments
Labels
lifecycle/rotten Denotes an issue or PR that has aged beyond stale.

Comments

@schlakob
Hi,

we are currently using the machine-controller as a KubeOne addon, in its default configuration.

I was wondering why the CAPI CRDs (machinedeployments, machinesets, machines) are in the cluster.k8s.io API group and not in the upstream CAPI cluster.x-k8s.io API group. I noticed that CAPI switched from *.k8s.io to *.x-k8s.io a while ago.

Are there plans to use the same API group as upstream CAPI in the future?

This is important for us because we are considering deploying CAPI in the KubeOne cluster running the machine-controller, so we would like to ensure that no CRD conflicts occur.
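
For context, a minimal sketch of the group difference in question (the upstream version string is an assumption for illustration, not taken from this thread):

```go
package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/runtime/schema"
)

func main() {
	// machine-controller serves MachineDeployment and friends under the old
	// Cluster API group (v1alpha1, as noted in the reply below), while
	// upstream CAPI moved to the cluster.x-k8s.io group.
	mc := schema.GroupVersionResource{
		Group:    "cluster.k8s.io",
		Version:  "v1alpha1",
		Resource: "machinedeployments",
	}
	capi := schema.GroupVersionResource{
		Group:    "cluster.x-k8s.io",
		Version:  "v1beta1", // illustrative; depends on the CAPI release installed
		Resource: "machinedeployments",
	}
	fmt.Println(mc.String(), "vs", capi.String())
}
```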

@kron4eg
Member

kron4eg commented Apr 23, 2024

Hello. Because up until now we have preferred to stay on the older CAPI (v1alpha1) version. Rumor has it we may want to update, but that's not currently on the table.

@schlakob
Author

This would then result in conflicts when running CAPI and the machine-controller in the same cluster. Is there, or will there be, a way to override the API group for the machine-controller?

@embik
Member

embik commented Jun 27, 2024

If the Kubermatic stack ever upgrades to newer CAPI versions and adopts newer API groups, it would probably integrate into CAPI as a set of providers. I don't think this would really create a conflict since CAPI is pluggable by design.
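
A minimal sketch (not from this thread, just an illustration) of why the CRDs themselves would not clash: CustomResourceDefinition object names are `<plural>.<group>`, so both stacks can register their definitions side by side even though the kinds share names:

```go
package main

import "fmt"

func main() {
	// CRD object names are "<plural>.<group>", so the two API groups
	// produce distinct CustomResourceDefinitions for the same kinds.
	resources := []string{"machinedeployments", "machinesets", "machines"}
	for _, group := range []string{"cluster.k8s.io", "cluster.x-k8s.io"} {
		for _, plural := range resources {
			fmt.Printf("%s.%s\n", plural, group)
		}
	}
}
```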

@kubermatic-bot
Contributor

Issues go stale after 90d of inactivity.
After a further 30 days, they will turn rotten.
Mark the issue as fresh with /remove-lifecycle stale.

If this issue is safe to close now please do so with /close.

/lifecycle stale

@kubermatic-bot kubermatic-bot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Sep 25, 2024
@kubermatic-bot
Contributor

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

/lifecycle rotten

@kubermatic-bot kubermatic-bot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Oct 26, 2024