add support for draining single nodes #843

Closed

Conversation

SchSeba
Collaborator

@SchSeba SchSeba commented Feb 10, 2025

With this change, when a single node needs to be drained to apply a configuration change, the operator removes the pods that use SR-IOV devices; when a reboot is needed instead, it simply reboots the node without a separate drain.

This is an improvement over the skipDrain option in the SriovOperatorConfig: with skipDrain, when only a drain is needed to apply the configuration change, pods that use SR-IOV keep running and lose their VFs at runtime.


Signed-off-by: Sebastian Sch <[email protected]>
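
For illustration, a minimal Go sketch of the decision flow described above. This is not the operator's actual drain-controller code; the function and parameter names (decideSingleNodeAction, rebootRequired, skipDrainOnReboot as a plain bool argument) are hypothetical:

```go
package main

import "fmt"

// decideSingleNodeAction is a hypothetical helper sketching the behaviour
// described in this PR for a single node: if the pending change requires a
// reboot and skip-drain-on-reboot is enabled, reboot directly without a
// separate drain (the reboot restarts every pod anyway); otherwise drain,
// so pods using SR-IOV VFs are evicted rather than losing their VFs while
// still running, as happens with the existing skipDrain option.
func decideSingleNodeAction(rebootRequired, skipDrainOnReboot bool) string {
	if rebootRequired && skipDrainOnReboot {
		return "reboot without draining"
	}
	return "drain pods using SR-IOV devices"
}

func main() {
	fmt.Println(decideSingleNodeAction(true, true))  // reboot without draining
	fmt.Println(decideSingleNodeAction(false, true)) // drain pods using SR-IOV devices
}
```

In the PR itself this decision is wired through the drain controller and gated by the SkipDrainOnReboot field shown in the diff below.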

Thanks for your PR.
To run the vendor CIs, maintainers can use one of:

  • /test-all: To run all tests for all vendors.
  • /test-e2e-all: To run all E2E tests for all vendors.
  • /test-e2e-nvidia-all: To run all E2E tests for the NVIDIA vendor.

To skip the vendor CIs, maintainers can use one of:

  • /skip-all: To skip all tests for all vendors.
  • /skip-e2e-all: To skip all E2E tests for all vendors.
  • /skip-e2e-nvidia-all: To skip all E2E tests for the NVIDIA vendor.

Best regards.

@coveralls

Pull Request Test Coverage Report for Build 13242265777

Details

  • 12 of 23 (52.17%) changed or added relevant lines in 1 file are covered.
  • No unchanged relevant lines lost coverage.
  • Overall coverage increased (+0.08%) to 47.374%

Changes Missing Coverage:

  • controllers/drain_controller_helper.go: 12 of 23 changed/added lines covered (52.17%)

Totals Coverage Status:

  • Change from base Build 13109399258: +0.08%
  • Covered Lines: 7270
  • Relevant Lines: 15346

💛 - Coveralls

@@ -22,6 +22,13 @@ type SriovNetworkPoolConfigSpec struct {
// even if maxUnavailable is greater than one.
MaxUnavailable *intstr.IntOrString `json:"maxUnavailable,omitempty"`

// SkipDrainOnReboot defines if the operator should make a full no drain before reboot
Contributor


What does "full no drain" mean?

@@ -172,6 +178,7 @@ func (dr *DrainReconcile) tryDrainNode(ctx context.Context, node *corev1.Node) (
}
reqLogger.Info("Max node allowed to be draining at the same time", "MaxParallelNodeConfiguration", maxUnv)
reqLogger.Info("Count of draining", "drainingNodes", current)
reqLogger.Info("Check Skip Drain on boot config", "SkipDrainOnReboot", nodePool.Spec.SkipDrainOnReboot)
Collaborator


skip drain on reboot ?

// This is useful for single node deployments instead of using the skipDrain from the operatorConfig.
// This will allow the operator to drain pods using sriov devices when drain is needed on single node
// instead of just removing the VF's from running pods that is the case if skipDrain is used.
SkipDrainOnReboot bool `json:"skipDrainOnReboot,omitempty"`
Collaborator


to consider:

implicitly skip the drain if a reboot is requested on a single-node cluster.
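
For reference, a minimal sketch of how the proposed field would be set alongside the existing MaxUnavailable field when building the spec in Go. The import path for the API package is an assumption based on the operator's repository layout and may differ:

```go
package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/util/intstr"

	// Import path assumed from the operator's repository layout; it may differ.
	sriovnetworkv1 "github.com/k8snetworkplumbingwg/sriov-network-operator/api/v1"
)

func main() {
	maxUnavailable := intstr.FromInt(1)

	// Pool spec using the field proposed in this PR: drains stay limited to
	// one node at a time, and the drain is skipped when the pending change
	// requires a reboot.
	spec := sriovnetworkv1.SriovNetworkPoolConfigSpec{
		MaxUnavailable:    &maxUnavailable,
		SkipDrainOnReboot: true,
	}
	fmt.Printf("%+v\n", spec)
}
```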

@SchSeba
Collaborator Author

SchSeba commented Feb 19, 2025

Deprecated in favor of #850.

@SchSeba SchSeba closed this Feb 19, 2025