Support scaling the entire database #1051
Conversation
// Indicates if a pod belongs to the main cluster. It is mainly used as a selector in
// VAS to filter out sandbox pods.
IsSandboxLabel = "vertica.com/is-sandbox"
IsSandboxFalse = "false"
Is this label only used by the VAS? If we do sandboxing for a subcluster, should we change this flag to true?
You are correct; we need to change this to true for sandboxes.
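For context, here is a minimal sketch of how a label selector built from these constants could filter out sandbox pods. It assumes k8s.io/apimachinery; the helper name is hypothetical and not the operator's actual API:

```go
package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/labels"
)

const (
	// Constants mirrored from the diff above.
	IsSandboxLabel = "vertica.com/is-sandbox"
	IsSandboxFalse = "false"
)

// nonSandboxSelector (hypothetical helper) matches only pods that
// belong to the main cluster, i.e. pods labeled is-sandbox=false.
func nonSandboxSelector() labels.Selector {
	return labels.SelectorFromSet(labels.Set{IsSandboxLabel: IsSandboxFalse})
}

func main() {
	// Prints: vertica.com/is-sandbox=false
	fmt.Println(nonSandboxSelector().String())
}
```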
@@ -0,0 +1,5 @@
+kind: Fixed
+body: Routing traffic to a sandbox pod after restart
This is for your previous PR?
Yes. I forgot to add a changie entry.
@@ -78,7 +78,7 @@ func (s *ScaledownReconciler) Reconcile(ctx context.Context, req *ctrl.Request)
 		s.Log.Info("Metric's value is lower than the scale-down threshold.", "metric", mStatus.name)
 		newMinReplicas = *s.Vas.Spec.CustomAutoscaler.MinReplicas
 	} else {
-		newMinReplicas = s.Vas.Spec.TargetSize
+		newMinReplicas = s.Vas.Status.CurrentSize
What's this change for?
currentSize represents the number of existing pods/replicas, while targetSize is the desired state. In this case we want minReplicas to equal the current number of replicas so we do not trigger a scale-down; currentSize is the more accurate choice.
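A toy sketch of that distinction (all names illustrative, not the operator's code): with 6 existing replicas and a target of 8 still being reconciled, pinning minReplicas to Status.CurrentSize freezes the count at its real value instead of at a size that has not been reached yet.

```go
package main

import "fmt"

// chooseMinReplicas mirrors the branch in the diff above: when the
// metric is below the scale-down threshold, shrink toward the
// configured minimum; otherwise pin minReplicas to the current size
// so no scale-down is triggered.
func chooseMinReplicas(belowThreshold bool, configuredMin, currentSize int32) int32 {
	if belowThreshold {
		return configuredMin
	}
	return currentSize
}

func main() {
	fmt.Println(chooseMinReplicas(true, 1, 6))  // 1: scale-down allowed
	fmt.Println(chooseMinReplicas(false, 1, 6)) // 6: hold at current size
}
```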
// We will prevent removing a primary if it would lead to a k-safety
// rule violation.
if primaryCountAfterScaling < minHosts {
	continue
Log a message here
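Something along these lines would satisfy the comment; this is a sketch only, assuming the reconciler exposes a logr.Logger as s.Log the way the ScaledownReconciler above does, and the message wording is hypothetical:

```go
if primaryCountAfterScaling < minHosts {
	// Requested log line; field names are taken from the snippet above.
	s.Log.Info("Skipping primary scale-down to avoid a k-safety violation",
		"primaryCountAfterScaling", primaryCountAfterScaling,
		"minHosts", minHosts)
	continue
}
```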
apiVersion: v1
kind: Service
metadata:
  name: v-autoscale-db-by-subcluster-as
If we are using the template, will all the new subclusters be put behind one service?
Only if you set the serviceName in the template. If you don't, each subcluster will have its own service, named after the subcluster.
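To illustrate, a hypothetical template fragment (field names follow the subcluster spec; the values are made up, not taken from this PR). With serviceName set, every subcluster stamped from the template sits behind that one service; omit it and each new subcluster gets a service named after itself:

```yaml
# Illustrative only.
spec:
  template:
    name: as
    size: 3
    # With this set, all template-created subclusters share one service:
    serviceName: v-autoscale-db-by-subcluster-as
    # Omit serviceName and each subcluster gets its own service,
    # named after the subcluster.
```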
apiVersion: kuttl.dev/v1beta1
kind: TestStep
commands:
  - script: kubectl scale -n $NAMESPACE verticaautoscaler/v-autoscale-db --replicas 0
Since ksafety is not 0, the primary subcluster will not be removed even if we set "--replicas 0", right? And if we don't violate the ksafety check, will scale-down remove the existing subclusters? For example, suppose we have pri1 and sec1, both created manually by the customer, and we set "--replicas 0": (a) if ksafety is 0, will both subclusters be removed? (b) if ksafety is not 0, will sec1 be removed?
First of all, kubectl scale does manual scaling; it is here just for testing, to trigger the scaling logic.
That being said, if targetSize is lower than currentSize, then yes, we will remove sec1 (provided -1*(targetSize-currentSize) is <= sec1's size).
Scaling to zero will not be possible, because it would violate k-safety and because minReplicas has a minimum value of 1.
The goal of autoscaling is to keep only the resources we need.
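To make the arithmetic concrete, a toy walk-through of the scale-down math described above (purely illustrative numbers and names):

```go
package main

import "fmt"

func main() {
	// pri1 (3 pods) + sec1 (3 pods) = currentSize of 6; the user
	// scales to a target of 4. -1*(targetSize-currentSize) = 2 pods
	// must be removed, and since 2 <= sec1's size they come out of
	// sec1. A target of 0 is rejected earlier: minReplicas has a
	// floor of 1, and removing pri1 would violate k-safety.
	currentSize, targetSize := int32(6), int32(4)
	sec1Size := int32(3)
	toRemove := -1 * (targetSize - currentSize)
	if toRemove <= sec1Size {
		fmt.Printf("remove %d pods from sec1\n", toRemove) // remove 2 pods from sec1
	}
}
```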
So far, we have been scaling based on a service name. Now the entire database can be considered for scaling: if serviceName is empty, all the pods are selected, and when scaling is needed every subcluster is considered.
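As a sketch of what that looks like in practice (the apiVersion and field values here are my assumptions about the VerticaAutoscaler CRD, not copied from this PR):

```yaml
# Illustrative VerticaAutoscaler with no serviceName: the pod selector
# covers the whole database, so every subcluster is considered when
# scaling is needed.
apiVersion: vertica.com/v1beta1
kind: VerticaAutoscaler
metadata:
  name: v-autoscale-db
spec:
  verticaDBName: v-autoscale-db
  scalingGranularity: Pod
  # serviceName omitted on purpose: all pods are selected.
```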