✨ Support DockerCluster without load balancer #11413
base: main
Conversation
Hi @phoban01. Thanks for your PR. I'm waiting for a kubernetes-sigs member to verify that this patch is reasonable to test. Once the patch is verified, the new status will be reflected by the ok-to-test label.
/area provider/infrastructure-docker
I'm not sure it's best to add this to the Docker provider without even having e2e coverage to support it.
If we do want this, I guess there will be no guarantee that other changes won't break it again; it would only be supported on a best-effort basis.
But I'd like to hear from other maintainers on whether we want to merge this.
CAPD is written to be a reference implementation for testing CAPI, and this enlarges its scope to support other use cases (I'm guessing this is ultimately for testing Kamaji, which might be okay).
Please keep in mind that CAPD is only for testing.
@@ -149,6 +152,13 @@ func patchDockerCluster(ctx context.Context, patchHelper *patch.Helper, dockerCl
}

func (r *DockerClusterReconciler) reconcileNormal(ctx context.Context, dockerCluster *infrav1.DockerCluster, externalLoadBalancer *docker.LoadBalancer) error {
	if dockerCluster.Spec.LoadBalancer.Disable {
		// Mark the dockerCluster ready
		dockerCluster.Status.Ready = true
So not using a load balancer leads to never setting .status.ready to true?
That would break the CAPI contract for infra clusters!
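For context, the CAPI contract for infrastructure clusters expects the provider to populate spec.controlPlaneEndpoint and only then report status.ready = true. The snippet below is a minimal, hypothetical sketch of how a no-load-balancer path could still honor that contract by requiring an externally supplied endpoint; the helper name and error message are invented for illustration (assuming the controller's existing infrav1 and errors imports), and this is not the code proposed in the PR:

```go
// Hypothetical sketch: when CAPD does not manage a load balancer, the control
// plane endpoint has to come from somewhere else (e.g. a hosted control plane
// provider) before the infra cluster may be marked ready.
func reconcileWithoutLoadBalancer(dockerCluster *infrav1.DockerCluster) error {
	if dockerCluster.Spec.ControlPlaneEndpoint.Host == "" {
		// Without an endpoint, flipping status.ready would violate the contract.
		return errors.New("controlPlaneEndpoint must be set externally when the load balancer is disabled")
	}

	// Endpoint is known, so the cluster can be reported as ready.
	dockerCluster.Status.Ready = true
	return nil
}
```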
	// Disable allows skipping the creation of the cluster load balancer.
	Disable bool `json:"disable,omitempty"`
Not completely happy with the wording ("disable"); should it be "disabled"? The field does not describe an action.
Also, please use the godoc to document when this should be used, and maybe add a disclaimer similar to what we did for CustomHAProxyConfigTemplateRef.
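A possible shape for the renamed field with fuller godoc, purely as a sketch; the wording and the CustomHAProxyConfigTemplateRef-style disclaimer are suggestions, not agreed text:

```go
	// Disabled indicates that CAPD should not create or manage the cluster load
	// balancer container. This is only meant for setups where the control plane
	// endpoint is provided by something else, e.g. a hosted control plane provider.
	//
	// NOTE: similar to CustomHAProxyConfigTemplateRef, when this is set you are
	// responsible for providing a working control plane endpoint; support is
	// best-effort only.
	// +optional
	Disabled bool `json:"disabled,omitempty"`
```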
}
if err := externalLoadBalancer.UpdateConfiguration(ctx, unsafeLoadBalancerConfigTemplate); err != nil {
	return ctrl.Result{}, errors.Wrap(err, "failed to update DockerCluster.loadbalancer configuration")
if !dockerCluster.Spec.LoadBalancer.Disable {
Should not be needed; control plane machines without the load balancer should not be supported, IMHO.
}
if err := externalLoadBalancer.UpdateConfiguration(ctx, unsafeLoadBalancerConfigTemplate); err != nil {
	return errors.Wrap(err, "failed to update DockerCluster.loadbalancer configuration")
if !dockerCluster.Spec.LoadBalancer.Disable {
Same here: control plane machines with the LB disabled should not be supported.
FYI, I'm thinking about refactoring CAPD so that it has the notion of a backend, where a backend could be docker, the in-memory provider, or eventually kubemark (at this moment I have chatted about this idea with a few contributors, but I have not had time to focus on it or bring it up in the community meeting yet 😅). Given that this PR and the idea above are going to touch the same part of the code base, if you are willing to collaborate on this I would prefer to extend the idea above by allowing "no backend" for the DockerCluster rather than introducing ad hoc flags.
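To illustrate that idea, a backend abstraction might look roughly like the interface below; the type and method names are invented for this sketch and do not exist in the code base:

```go
// ClusterBackend is a hypothetical abstraction: the DockerCluster reconciler
// would delegate infrastructure operations to a pluggable backend (docker,
// in-memory, eventually kubemark, or a "none" backend) instead of branching
// on ad hoc flags such as LoadBalancer.Disable.
type ClusterBackend interface {
	// ReconcileNormal creates or updates backend infrastructure, e.g. the
	// load balancer container for the docker backend; a no-op for "none".
	ReconcileNormal(ctx context.Context, dockerCluster *infrav1.DockerCluster) error

	// ReconcileDelete tears the backend infrastructure down again.
	ReconcileDelete(ctx context.Context, dockerCluster *infrav1.DockerCluster) error
}
```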
PR needs rebase.
What this PR does / why we need it:
In certain scenarios it may be desirable to deploy CAPD machines without a load balancer.
For example, in the case of a hosted control plane such as Kamaji, the load balancer role is fulfilled by the control plane provider (at least indirectly).
This change introduces a new field to the DockerCluster API that, when true, will skip the steps involved in creating and managing the load balancer container. This change is opt-in and has no impact otherwise.
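As a minimal sketch of the opt-in (assuming the field name introduced by this PR, Spec.LoadBalancer.Disable, and the usual infrav1/metav1 imports), a DockerCluster that skips the managed load balancer might be constructed like this; the control plane endpoint then has to be provided externally:

```go
dockerCluster := &infrav1.DockerCluster{
	ObjectMeta: metav1.ObjectMeta{
		Name:      "my-cluster", // hypothetical name, for illustration only
		Namespace: "default",
	},
	Spec: infrav1.DockerClusterSpec{
		LoadBalancer: infrav1.DockerLoadBalancer{
			Disable: true, // skip creating and managing the haproxy container
		},
	},
}
```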
/area provider/infrastructure-docker