
add affinity to jobTemplate #2753

Conversation

@takuyaymd (Contributor) commented on Nov 25, 2024

Description

Add affinity, nodeSelector, and tolerations to the jobTemplate in manager/index (corrector, creator, saver).
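
For example, with this change the scheduling constraints for the index jobs could be set in values.yaml roughly as in the hypothetical snippet below; the label key, taint, and weight are illustrative placeholders, not defaults shipped with the chart (the same fields apply to creator and saver):

manager:
  index:
    corrector:
      nodeSelector:
        disktype: ssd              # illustrative node label
      tolerations:
        - key: dedicated           # illustrative taint key
          operator: Equal
          value: index-jobs
          effect: NoSchedule
      affinity:
        nodeAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
            - weight: 100
              preference:
                matchExpressions:
                  - key: storage
                    operator: In
                    values:
                      - ssd

Rendering the chart with helm template should then show these fields on the generated Job specs for the corrector, creator, and saver.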

Related Issue

Versions

  • Vald Version: v1.7.14
  • Go Version: v1.23.3
  • Rust Version: v1.82.0
  • Docker Version: v27.3.1
  • Kubernetes Version: v1.31.2
  • Helm Version: v3.16.2
  • NGT Version: v2.3.4
  • Faiss Version: v1.9.0

Checklist

Special notes for your reviewer

Summary by CodeRabbit

  • New Features
    • Enhanced configuration options for the vald chart, including new fields for nodeSelector, tolerations, and affinity in various sections, allowing for improved pod scheduling and resource management.
    • Expanded schema definitions in the Custom Resource Definition (CRD) to include affinity, nodeSelector, and tolerations properties for better deployment flexibility.
    • Added detailed configurations for Kubernetes scheduling and affinity in the values.schema.json file, improving resource allocation management.
  • Bug Fixes
    • Refined definitions for containerPorts and servicePorts to ensure correct value references.
  • Style
    • Improved formatting and indentation for better readability in template files.

@coderabbitai bot (Contributor) commented on Nov 25, 2024

📝 Walkthrough

The changes in this pull request involve updates to the Helm chart for `vald`, specifically in the `_helpers.tpl` template file, the `values.yaml` configuration file, the `valdrelease.yaml` Custom Resource Definition (CRD), and the `values.schema.json` file. The `_helpers.tpl` file has been modified to enhance job scheduling capabilities by adding `affinity`, `nodeSelector`, and `tolerations` fields to the `vald.index_job` definition. The `values.yaml` file has been updated to include these fields in the sections for `manager.index.corrector`, `manager.index.creator`, and `manager.index.saver`, providing greater control over pod scheduling and resource allocation. The CRD and schema files have also been updated to reflect these new configuration options.

Changes

| File Path                                   | Change Summary                                                                                          |
|---------------------------------------------|---------------------------------------------------------------------------------------------------------|
| charts/vald/templates/_helpers.tpl          | Updated `vald.index_job` to include `affinity`, `nodeSelector`, and `tolerations` fields. Retained existing configurations. Minor formatting adjustments made. |
| charts/vald/values.yaml                     | Added `nodeSelector`, `tolerations`, and `affinity` fields to `manager.index.corrector`, `manager.index.creator`, and `manager.index.saver` sections for improved configurability. |
| charts/vald-helm-operator/crds/valdrelease.yaml | Enhanced CRD schema by adding `affinity`, `nodeSelector`, and `tolerations` properties across `corrector`, `creator`, `saver`, and `manager` sections. Updated existing properties to preserve unknown fields. |
| charts/vald/values.schema.json              | Introduced `affinity`, `nodeSelector`, and `tolerations` properties under `corrector`, `creator`, and `saver` objects for detailed scheduling configurations. |


@coderabbitai bot (Contributor) left a comment

Actionable comments posted: 1

🧹 Outside diff range and nitpick comments (2)
charts/vald/values.yaml (2)

3356-3379: LGTM! Well-structured affinity configuration for index corrector.

The affinity configuration follows Kubernetes best practices with support for nodeAffinity, podAffinity and podAntiAffinity scheduling rules.

Consider adding YAML comments with examples of common affinity rules, such as:

# Example node affinity to prefer nodes with SSD:
nodeAffinity:
  preferredDuringSchedulingIgnoredDuringExecution:
    - weight: 100
      preference:
        matchExpressions:
          - key: storage
            operator: In 
            values:
              - ssd

3618-3641: LGTM! Well-structured affinity configuration for index saver.

The affinity configuration matches the other sections, providing a consistent interface for scheduling control across all index management jobs.

The consistent affinity configuration across all three job types (corrector, creator, saver) allows for flexible deployment strategies (see the sketch after this list):

  • Co-locate related jobs using podAffinity
  • Spread jobs across nodes using podAntiAffinity
  • Target specific node pools using nodeAffinity
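
As a sketch of the last strategy, a required nodeAffinity rule pinning these jobs to a dedicated node pool might look like the following (the node-pool label key and value are illustrative, not part of the chart defaults):

nodeAffinity:
  requiredDuringSchedulingIgnoredDuringExecution:
    nodeSelectorTerms:
      - matchExpressions:
          - key: node-pool        # illustrative label key
            operator: In
            values:
              - index-jobs        # illustrative node pool name
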
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL

📥 Commits

Reviewing files that changed from the base of the PR and between 3d51a12 and ed3238b.

📒 Files selected for processing (2)
  • charts/vald/templates/_helpers.tpl (2 hunks)
  • charts/vald/values.yaml (3 hunks)
🔇 Additional comments (2)
charts/vald/values.yaml (1)

3490-3513: LGTM! Well-structured affinity configuration for index creator.

The affinity configuration matches the corrector section, maintaining consistency across job templates.

charts/vald/templates/_helpers.tpl (1)

933-940: Confirmed correct conditional inclusion of nodeSelector and tolerations

The nodeSelector and tolerations fields are properly wrapped in conditional statements and correctly integrated into the job specification, enhancing configurability.
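
Concretely, the conditional wrapping presumably looks something like the sketch below; the use of toYaml is an assumption rather than the verbatim template, the point being that each field is emitted only when the corresponding value is set:

      {{- if .Job.nodeSelector }}
      nodeSelector:
        {{- toYaml .Job.nodeSelector | nindent 8 }}
      {{- end }}
      {{- if .Job.tolerations }}
      tolerations:
        {{- toYaml .Job.tolerations | nindent 8 }}
      {{- end }}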

charts/vald/templates/_helpers.tpl (review comment resolved)
@coderabbitai bot (Contributor) left a comment

Actionable comments posted: 0

🧹 Outside diff range and nitpick comments (1)
charts/vald/values.yaml (1)

Line range hint 3430-3719: Consider adding default anti-affinity rules for index management jobs

While the current configuration provides flexibility, consider adding default podAntiAffinity rules to spread index management jobs across nodes for better reliability. This would help prevent multiple resource-intensive index operations from running on the same node.

Example configuration to add to each job:

 podAntiAffinity:
   preferredDuringSchedulingIgnoredDuringExecution:
+    - weight: 100
+      podAffinityTerm:
+        topologyKey: kubernetes.io/hostname
+        labelSelector:
+          matchExpressions:
+            - key: app
+              operator: In
+              values:
+                - vald-index-job
   requiredDuringSchedulingIgnoredDuringExecution: []
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL

📥 Commits

Reviewing files that changed from the base of the PR and between ed3238b and b2bce16.

📒 Files selected for processing (2)
  • charts/vald/templates/_helpers.tpl (2 hunks)
  • charts/vald/values.yaml (3 hunks)
🔇 Additional comments (5)
charts/vald/templates/_helpers.tpl (2)

934-941: LGTM! Proper conditional inclusion of nodeSelector and tolerations

The implementation correctly handles the conditional inclusion of nodeSelector and tolerations, following Helm template best practices.


899-900: ⚠️ Potential issue

Make affinity inclusion conditional

The affinity configuration should be wrapped in a conditional block to prevent potential issues when .Job.affinity is undefined, similar to how nodeSelector and tolerations are handled.

Apply this diff to make the affinity inclusion conditional:

-      affinity:
-        {{- include "vald.affinity" .Job.affinity | nindent 8 }}
+      {{- if .Job.affinity }}
+      affinity:
+        {{- include "vald.affinity" .Job.affinity | nindent 8 }}
+      {{- end }}
charts/vald/values.yaml (3)

3430-3453: LGTM: Corrector job scheduling configuration follows Kubernetes standards

The affinity, nodeSelector and tolerations configuration for the index corrector job follows standard Kubernetes scheduling patterns and includes:

  • Empty nodeSelector map for node selection flexibility
  • Empty tolerations array for pod scheduling flexibility
  • Standard affinity structure with nodeAffinity, podAffinity and podAntiAffinity

3566-3589: LGTM: Creator job scheduling configuration matches corrector pattern

The configuration structure for the index creator job maintains consistency with the corrector job configuration, using the same patterns for nodeSelector, tolerations and affinity settings.


3696-3719: LGTM: Saver job scheduling configuration maintains consistency

The configuration for the index saver job follows the same pattern as corrector and creator jobs, ensuring consistent scheduling configuration across all index management jobs.

coderabbitai bot previously approved these changes on Nov 25, 2024.

@coderabbitai bot (Contributor) left a comment

Actionable comments posted: 1

🧹 Outside diff range and nitpick comments (3)
charts/vald/values.schema.json (2)

14074-14128: LGTM! Affinity configurations are well-structured.

The affinity schema implementation across all components (corrector, creator, saver) correctly follows Kubernetes API conventions and includes all necessary fields for node affinity, pod affinity, and pod anti-affinity.

Consider enhancing the schema by adding examples in the descriptions for common affinity rules. This would help users understand how to properly configure these fields.

Also applies to: 16471-16525, 22246-22300


15230-15233: Enhance nodeSelector description for better clarity.

While the nodeSelector schema is correctly implemented, the description could be more informative.

Consider updating the description to better explain the expected format:

-  "description": "node selector"
+  "description": "nodeSelector is the simplest recommended form of node selection constraint. nodeSelector is a field of PodSpec that specifies a map of key-value pairs. For the pod to be eligible to run on a node, the node must have each of the indicated key-value pairs as labels"

Also applies to: 17280-17283, 23051-23054

charts/vald-helm-operator/crds/valdrelease.yaml (1)

Line range hint 8409-14284: Consider adding schema validation for common fields

While the current implementation with x-kubernetes-preserve-unknown-fields provides flexibility, consider adding explicit schema validation for commonly used fields like:

  • operator: Equal/Exists for tolerations
  • effect: NoSchedule/PreferNoSchedule/NoExecute for tolerations
  • weight: number range for preferred scheduling

This would help catch configuration errors earlier.
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL

📥 Commits

Reviewing files that changed from the base of the PR and between 5a1da48 and fc467f9.

📒 Files selected for processing (2)
  • charts/vald-helm-operator/crds/valdrelease.yaml (9 hunks)
  • charts/vald/values.schema.json (9 hunks)
🔇 Additional comments (4)
charts/vald/values.schema.json (1)

Line range hint 14074-24277: Verify schema consistency with Kubernetes API.

The schema changes consistently implement Kubernetes scheduling configurations across all components. Let's verify the schema against the Kubernetes API specification.

✅ Verification successful

Schema changes align with Kubernetes API specifications

The schema implementation for scheduling configurations (affinity, nodeSelector, tolerations) follows Kubernetes API conventions across all components:

  • Correct type definitions: object for affinity/nodeSelector, array of objects for tolerations
  • Standard structure for node/pod affinity with proper nesting
  • Consistent implementation across gateway, agent, discoverer and manager components
  • Proper schema annotations and documentation
🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Verify schema consistency with Kubernetes API spec

# Find and check other Helm charts in the repository for similar implementations
fd -e yaml -e json . | xargs rg -l "affinity|nodeSelector|tolerations"

# Check for any existing validation rules or examples
fd -e yaml -e json . | xargs rg -A 5 "schema.*affinity|schema.*nodeSelector|schema.*tolerations"

Length of output: 46555

charts/vald-helm-operator/crds/valdrelease.yaml (3)

8409-8453: LGTM! Affinity schema implementation follows Kubernetes standards

The affinity schema implementation for the corrector and saver components is well-structured and follows Kubernetes pod scheduling patterns. The schema includes:

  • Complete nodeAffinity, podAffinity, and podAntiAffinity fields
  • Proper nesting of required/preferred scheduling rules
  • Correct use of x-kubernetes-preserve-unknown-fields for flexibility

Also applies to: 13121-13165
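
As a rough illustration of that shape (OpenAPI v3 style; not the verbatim valdrelease.yaml content), such a property can be declared along these lines:

affinity:
  type: object
  properties:
    nodeAffinity:
      type: object
      properties:
        preferredDuringSchedulingIgnoredDuringExecution:
          type: array
          items:
            type: object
            x-kubernetes-preserve-unknown-fields: true
        requiredDuringSchedulingIgnoredDuringExecution:
          type: object
          x-kubernetes-preserve-unknown-fields: true
    podAffinity:
      type: object
      x-kubernetes-preserve-unknown-fields: true
    podAntiAffinity:
      type: object
      x-kubernetes-preserve-unknown-fields: true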


9085-9087: LGTM! Consistent nodeSelector implementation across components

The nodeSelector schema is consistently implemented across corrector, creator, and saver components, properly using x-kubernetes-preserve-unknown-fields to allow arbitrary node labels.

Also applies to: 10258-10260, 13599-13601


9765-9769: LGTM! Tolerations schema matches Kubernetes spec

The tolerations schema is correctly implemented across components:

  • Uses array type for multiple tolerations
  • Preserves unknown fields to support all toleration parameters
  • Maintains consistency across corrector, creator and saver components

Also applies to: 10939-10943, 14280-14284

charts/vald/values.schema.json (review comment resolved)
@takuyaymd force-pushed the feature/manager-index/add-affinities-to-job branch from fc467f9 to d9a7f34 on December 2, 2024 07:57
@coderabbitai bot (Contributor) left a comment

Actionable comments posted: 1

🧹 Outside diff range and nitpick comments (1)
charts/vald/values.schema.json (1)

15230-15233: Add pattern validation for nodeSelector labels

The nodeSelector schema could be enhanced with pattern validation to ensure valid Kubernetes label formats.

Consider updating the schema to validate label keys and values:

 "nodeSelector": {
   "type": "object",
   "description": "node selector",
+  "additionalProperties": {
+    "type": "string",
+    "pattern": "^[a-zA-Z0-9][-._a-zA-Z0-9]*$",
+    "maxLength": 63
+  },
+  "propertyNames": {
+    "pattern": "^([a-z0-9]-?)*[a-z0-9]$/",
+    "maxLength": 253
+  }
 }

Also applies to: 17280-17283, 23051-23054

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL

📥 Commits

Reviewing files that changed from the base of the PR and between fc467f9 and d9a7f34.

📒 Files selected for processing (2)
  • charts/vald-helm-operator/crds/valdrelease.yaml (9 hunks)
  • charts/vald/values.schema.json (9 hunks)
🔇 Additional comments (4)
charts/vald/values.schema.json (1)

16452-16456: Define detailed schema for toleration objects

The current tolerations schema needs to be more specific to match Kubernetes API conventions.

This issue was previously identified in a past review. Please refer to the existing comment for the detailed schema suggestion that includes specific fields for tolerations such as key, operator, value, effect, and tolerationSeconds.

Also applies to: 18502-18506, 24273-24277

charts/vald-helm-operator/crds/valdrelease.yaml (3)

8409-8453: LGTM! Well-structured scheduling controls added

The addition of affinity configurations to the index manager components (corrector, creator, and saver) is well implemented. The schema properly follows Kubernetes scheduling patterns and provides comprehensive control over pod placement through:

  • Node affinity for node-level scheduling constraints
  • Pod affinity/anti-affinity for pod-level placement rules
  • Both preferred and required scheduling policies

This enhancement will give users better control over workload placement and resource utilization.

Also applies to: 9778-9822, 13121-13165


9085-9087: LGTM! Complete scheduling configuration

The nodeSelector fields are properly defined alongside affinity, providing a simpler alternative for basic node selection requirements. The schema correctly preserves unknown fields to maintain flexibility in node label selections.

Also applies to: 10258-10260, 13599-13601
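
For illustration, assuming the ValdRelease spec mirrors the chart's values layout, a user of the operator could then set basic node selection with a hypothetical fragment such as:

spec:
  manager:
    index:
      corrector:
        nodeSelector:
          disktype: ssd        # placeholder node label; arbitrary labels are preserved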


9765-9769: LGTM! Tolerations properly integrated

The tolerations arrays are well-structured and properly integrated into each component's configuration. The schema correctly allows for standard Kubernetes toleration specifications while preserving unknown fields for flexibility.

Also applies to: 10939-10943, 14280-14284
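
As a sketch, a toleration entry on one of these jobs could take the standard Kubernetes form below (the dedicated taint key and value are illustrative):

tolerations:
  - key: dedicated             # illustrative taint key
    operator: Equal
    value: index-jobs          # illustrative taint value
    effect: NoSchedule
  - key: node.kubernetes.io/not-ready
    operator: Exists
    effect: NoExecute
    tolerationSeconds: 300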

charts/vald/values.schema.json (review comment resolved)
@takuyaymd force-pushed the feature/manager-index/add-affinities-to-job branch from 78f32a8 to 9583df7 on December 2, 2024 09:00
@CLAassistant commented on Dec 2, 2024

CLA assistant check
Thank you for your submission! We really appreciate it. Like many open source projects, we ask that you all sign our Contributor License Agreement before we can accept your contribution.
10 out of 11 committers have signed the CLA.

✅ takuyaymd
✅ vankichi
✅ kpango
✅ kmrmt
✅ vdaas-ci
✅ hlts2
✅ highpon
✅ cuishuang
✅ datelier
✅ iammytoo
❌ smorihira
You have signed the CLA already but the status is still pending? Let us recheck it.

@coderabbitai bot (Contributor) left a comment

Actionable comments posted: 0

🧹 Outside diff range and nitpick comments (5)
charts/vald/values.schema.json (1)

15230-15233: Enhance nodeSelector schema validation

While the nodeSelector schema is structurally correct, it could benefit from more specific validation for Kubernetes label key/value pairs.

Consider updating the schema to include validation for label format:

 "nodeSelector": {
   "type": "object",
-  "description": "node selector"
+  "description": "nodeSelector is the node labels for pod assignment",
+  "patternProperties": {
+    "^[a-z0-9A-Z][a-z0-9A-Z._/-]*[a-z0-9A-Z]$": {
+      "type": "string",
+      "pattern": "^[a-z0-9A-Z][a-z0-9A-Z._/-]*[a-z0-9A-Z]$",
+      "maxLength": 63
+    }
+  },
+  "additionalProperties": false
 }

Also applies to: 17280-17283, 23051-23054

charts/vald/values.yaml (3)

3430-3453: Well-structured affinity configuration for index corrector job

The affinity configuration is properly structured with support for nodeAffinity, podAffinity and podAntiAffinity. This provides full flexibility for controlling pod scheduling.

Consider adding a default podAntiAffinity rule to spread corrector job pods across nodes:

 podAntiAffinity:
   preferredDuringSchedulingIgnoredDuringExecution:
+    - weight: 100
+      podAffinityTerm:
+        topologyKey: kubernetes.io/hostname
+        labelSelector:
+          matchExpressions:
+            - key: app
+              operator: In 
+              values:
+                - vald-index-correction
   requiredDuringSchedulingIgnoredDuringExecution: []

3566-3589: Well-structured affinity configuration for index creator job

The affinity configuration is properly structured with support for nodeAffinity, podAffinity and podAntiAffinity. This provides full flexibility for controlling pod scheduling.

Consider adding a default podAntiAffinity rule to spread creator job pods across nodes:

 podAntiAffinity:
   preferredDuringSchedulingIgnoredDuringExecution:
+    - weight: 100
+      podAffinityTerm:
+        topologyKey: kubernetes.io/hostname
+        labelSelector:
+          matchExpressions:
+            - key: app
+              operator: In
+              values:
+                - vald-index-creation
   requiredDuringSchedulingIgnoredDuringExecution: []

3696-3719: Well-structured affinity configuration for index saver job

The affinity configuration is properly structured with support for nodeAffinity, podAffinity and podAntiAffinity. This provides full flexibility for controlling pod scheduling.

Consider adding a default podAntiAffinity rule to spread saver job pods across nodes:

 podAntiAffinity:
   preferredDuringSchedulingIgnoredDuringExecution:
+    - weight: 100
+      podAffinityTerm:
+        topologyKey: kubernetes.io/hostname
+        labelSelector:
+          matchExpressions:
+            - key: app
+              operator: In
+              values:
+                - vald-index-save
   requiredDuringSchedulingIgnoredDuringExecution: []
charts/vald-helm-operator/crds/valdrelease.yaml (1)

8409-8453: Consider adding scheduling examples in documentation

While the schema is well-defined, it would be helpful to provide examples of common affinity configurations in the documentation to guide users in effectively utilizing these new scheduling capabilities.

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL

📥 Commits

Reviewing files that changed from the base of the PR and between d9a7f34 and 55f62a5.

📒 Files selected for processing (4)
  • charts/vald-helm-operator/crds/valdrelease.yaml (9 hunks)
  • charts/vald/templates/_helpers.tpl (2 hunks)
  • charts/vald/values.schema.json (9 hunks)
  • charts/vald/values.yaml (3 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
  • charts/vald/templates/_helpers.tpl
🔇 Additional comments (4)
charts/vald/values.schema.json (2)

14074-14128: 🛠️ Refactor suggestion

Enhance affinity schema validation

The affinity schema structure is correct but could benefit from more specific validation rules to match Kubernetes API conventions.

As mentioned in the previous review, consider updating the schema to include detailed field definitions for affinity terms. For example:

 "preferredDuringSchedulingIgnoredDuringExecution": {
   "type": "array",
   "description": "node affinity preferred scheduling terms",
-  "items": { "type": "object" }
+  "items": {
+    "type": "object",
+    "required": ["weight", "preference"],
+    "properties": {
+      "weight": {
+        "type": "integer",
+        "minimum": 1,
+        "description": "Weight associated with matching the corresponding nodeSelectorTerm"
+      },
+      "preference": {
+        "type": "object",
+        "properties": {
+          "matchExpressions": {
+            "type": "array",
+            "items": {
+              "type": "object",
+              "required": ["key", "operator"],
+              "properties": {
+                "key": { "type": "string" },
+                "operator": {
+                  "type": "string",
+                  "enum": ["In", "NotIn", "Exists", "DoesNotExist", "Gt", "Lt"]
+                },
+                "values": {
+                  "type": "array",
+                  "items": { "type": "string" }
+                }
+              }
+            }
+          }
+        }
+      }
+    }
+  }

Also applies to: 16471-16525, 22246-22300


16452-16456: 🛠️ Refactor suggestion

Enhance tolerations schema validation

The tolerations schema structure is correct but could benefit from more specific validation rules to match Kubernetes API conventions.

As mentioned in the previous review, consider updating the schema to include specific fields for toleration objects:

 "tolerations": {
   "type": "array",
   "description": "tolerations",
-  "items": { "type": "object" }
+  "items": {
+    "type": "object",
+    "properties": {
+      "key": {
+        "type": "string",
+        "description": "The taint key that the toleration applies to"
+      },
+      "operator": {
+        "type": "string",
+        "description": "Operator represents a key's relationship to the value",
+        "enum": ["Equal", "Exists"]
+      },
+      "value": {
+        "type": "string",
+        "description": "Value is the taint value the toleration matches to"
+      },
+      "effect": {
+        "type": "string",
+        "description": "Effect indicates the taint effect to match",
+        "enum": ["NoSchedule", "PreferNoSchedule", "NoExecute"]
+      },
+      "tolerationSeconds": {
+        "type": "integer",
+        "minimum": 0,
+        "description": "TolerationSeconds represents the period of time the toleration tolerates the taint"
+      }
+    }
+  }
 }

Also applies to: 18502-18506, 24273-24277

charts/vald-helm-operator/crds/valdrelease.yaml (2)

8409-8453: LGTM! Well-structured addition of Kubernetes scheduling fields

The addition of affinity fields (nodeAffinity, podAffinity, podAntiAffinity) to the corrector, creator, and saver components follows Kubernetes scheduling API standards and best practices. The implementation:

  • Maintains consistency across all components
  • Supports both required and preferred scheduling rules
  • Preserves unknown fields for extensibility

Also applies to: 9778-9822, 13121-13165


8409-8453: Verify schema validation with example CRs

Let's verify the schema works correctly with example CRs.

Also applies to: 9778-9822, 13121-13165

✅ Verification successful

Schema validation for affinity configuration is correct

The affinity schema in the CRD matches the structure used in the actual Kubernetes deployments. The schema correctly defines all the required fields:

  • nodeAffinity with preferredDuringSchedulingIgnoredDuringExecution
  • podAffinity with both preferred and required scheduling rules
  • podAntiAffinity with both preferred and required scheduling rules

The example deployments demonstrate valid usage of this schema, particularly showing proper nesting of affinity terms, label selectors, and topology keys.

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Check for example CRs using these new fields
# Test: Look for example CR files that might need updating
fd -e yaml -e yml . | grep -v charts | xargs grep -l "kind: ValdRelease"

Length of output: 114


Script:

#!/bin/bash
# Let's check the content of the example CR to see if it contains affinity configurations
cat k8s/operator/helm/crds/valdrelease.yaml

Length of output: 65611


Script:

#!/bin/bash
# Let's check for example CRs in examples directory
fd -e yaml -e yml . examples/ | xargs grep -l "kind: ValdRelease" || true

Length of output: 171


Script:

#!/bin/bash
# Let's check for example CRs in the repository root
fd -e yaml -e yml . | grep -v "charts\|test" | xargs grep -l "affinity:" || true

Length of output: 441


Script:

#!/bin/bash
# Let's examine the affinity configuration in some of these files to verify schema compatibility
cat k8s/agent/statefulset.yaml k8s/gateway/gateway/lb/deployment.yaml | grep -A 20 "affinity:"

Length of output: 1690

takuyaymd and others added 4 commits December 2, 2024 18:06
* 🐛 ✅ Fix checkFun condition

Signed-off-by: vankichi <[email protected]>

* ♻️ Fix by feedback

Signed-off-by: vankichi <[email protected]>

* ♻️ Fix by feedback

Signed-off-by: vankichi <[email protected]>

---------

Signed-off-by: vankichi <[email protected]>
* chore: add an actions/gen folder to hack

* feat: add a generator of docker image building workflows

---------

Co-authored-by: Hiroto Funakoshi <[email protected]>
* fix: bugfix flush logic

Signed-off-by: hlts2 <[email protected]>

* fix: nil check for flushing

Signed-off-by: hlts2 <[email protected]>

* fix: add flush check logic

Signed-off-by: hlts2 <[email protected]>

* fix: nil check bug

Signed-off-by: hlts2 <[email protected]>

* fix: add nil check

Signed-off-by: hlts2 <[email protected]>

* fix: return err when the flush process is executing

Signed-off-by: hlts2 <[email protected]>

* fix: add error check for flushing

Signed-off-by: hlts2 <[email protected]>

* fix: error message

Signed-off-by: hlts2 <[email protected]>

* fix: disable kvs and vqueue initialization

Signed-off-by: hlts2 <[email protected]>

* fix: disable commentout

Signed-off-by: hlts2 <[email protected]>

* fix: disable kvs and vq

Signed-off-by: hlts2 <[email protected]>

* fix: nil set to kvs and vq

Signed-off-by: hlts2 <[email protected]>

* fix: copy ngt service object for flushing

Signed-off-by: hlts2 <[email protected]>

* fix: deleted unnecessary nil check

Signed-off-by: hlts2 <[email protected]>

* fix: variable name

Signed-off-by: hlts2 <[email protected]>

---------

Signed-off-by: hlts2 <[email protected]>
Co-authored-by: Yusuke Kato <[email protected]>
vdaas-ci and others added 25 commits December 2, 2024 18:06
* 🐳 add example-client docker image

Signed-off-by: vankichi <[email protected]>

* 🐛 fix build error

Signed-off-by: vankichi <[email protected]>

* ♻️ fix

Signed-off-by: vankichi <[email protected]>

* ♻️ fix

Signed-off-by: vankichi <[email protected]>

* ♻️ fix

Signed-off-by: vankichi <[email protected]>

---------

Signed-off-by: vankichi <[email protected]>
Signed-off-by: vdaas-ci <[email protected]>
Co-authored-by: Kosuke Morimoto <[email protected]>
* 💚 Add auto deps version update workflow

Signed-off-by: vankichi <[email protected]>

* 💚 Update make commands

Signed-off-by: vankichi <[email protected]>

* 💚 Fix

Signed-off-by: vankichi <[email protected]>

* 💚 Add make permission

Signed-off-by: vankichi <[email protected]>

* 💚 Add labels

Signed-off-by: vankichi <[email protected]>

* 💚 Fix

Signed-off-by: vankichi <[email protected]>

---------

Signed-off-by: vankichi <[email protected]>
* 🐛 Fix bind DOCKER_OPTS option

Signed-off-by: vankichi <[email protected]>

* Fix

---------

Signed-off-by: vankichi <[email protected]>
* feat: Implement delete expired index job

* fix: Replace agent Remove RPC to Vald Remove RPC

* fix: service name

* fix: typo

* fix: log method

* fix: variable name

* fix: use internal package

* fix: Change struct field name
* docs: update README.md [skip ci]

* docs: update .all-contributorsrc [skip ci]

* style: format code with Gofumpt and Prettier

This commit fixes the style issues introduced in fe459ca according to the output
from Gofumpt and Prettier.

Details: vdaas#2721

---------

Co-authored-by: allcontributors[bot] <46447321+allcontributors[bot]@users.noreply.github.com>
Co-authored-by: Kosuke Morimoto <[email protected]>
Co-authored-by: deepsource-autofix[bot] <62050782+deepsource-autofix[bot]@users.noreply.github.com>
Co-authored-by: Yusuke Kato <[email protected]>
Signed-off-by: Yusuke Kato <[email protected]>
Co-authored-by: Kiichiro YUKAWA <[email protected]>
* 🐛 Fix installation command for arm64

Signed-off-by: vankichi <[email protected]>

* 🐛 Fix typo

Signed-off-by: vankichi <[email protected]>

* 🐛 Fix

Signed-off-by: vankichi <[email protected]>

---------

Signed-off-by: vankichi <[email protected]>
* refactor dockerfiles and update gitattributes

Signed-off-by: kpango <[email protected]>

* Update .gitattributes

Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
Signed-off-by: Yusuke Kato <[email protected]>

---------

Signed-off-by: kpango <[email protected]>
Signed-off-by: Yusuke Kato <[email protected]>
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
* ♻️ update ubuntu version for devcontainer

Signed-off-by: vankichi <[email protected]>

* 💚 add platform opts

Signed-off-by: vankichi <[email protected]>

* ♻️ Remove unnecessary installation

Signed-off-by: vankichi <[email protected]>

* ♻️ Change install helm command

Signed-off-by: vankichi <[email protected]>

* ♻️ Fix

Signed-off-by: vankichi <[email protected]>

* ♻️ Fix

Signed-off-by: vankichi <[email protected]>

* ♻️ Fix

Signed-off-by: vankichi <[email protected]>

* 💚 remove platform opts

Signed-off-by: vankichi <[email protected]>

---------

Signed-off-by: vankichi <[email protected]>
Co-authored-by: Yusuke Kato <[email protected]>
add a condition

Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
Signed-off-by: takuya <[email protected]>
@takuyaymd force-pushed the feature/manager-index/add-affinities-to-job branch from 55f62a5 to 95412f9 on December 2, 2024 09:06
@coderabbitai bot (Contributor) left a comment

Actionable comments posted: 0

🧹 Outside diff range and nitpick comments (1)
charts/vald-helm-operator/crds/valdrelease.yaml (1)

8409-8453: Consider adding schema examples in documentation.

While the schema implementation is solid, consider adding examples in the documentation to help users understand how to effectively use these new affinity configurations for different scheduling scenarios.

Also applies to: 9778-9822, 13121-13165

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL

📥 Commits

Reviewing files that changed from the base of the PR and between 55f62a5 and db48cd5.

📒 Files selected for processing (4)
  • charts/vald-helm-operator/crds/valdrelease.yaml (9 hunks)
  • charts/vald/templates/_helpers.tpl (2 hunks)
  • charts/vald/values.schema.json (9 hunks)
  • charts/vald/values.yaml (3 hunks)
🚧 Files skipped from review as they are similar to previous changes (2)
  • charts/vald/values.schema.json
  • charts/vald/values.yaml
🔇 Additional comments (4)
charts/vald/templates/_helpers.tpl (3)

936-939: LGTM: Properly conditional nodeSelector block

The nodeSelector block is correctly implemented with conditional inclusion.


940-943: LGTM: Properly conditional tolerations block

The tolerations block is correctly implemented with conditional inclusion.


899-902: ⚠️ Potential issue

Make affinity inclusion conditional

The affinity block should be conditionally included to prevent potential issues when .Job.affinity is undefined.

Apply this diff to make the affinity inclusion conditional:

      {{- if .Job.affinity }}
      affinity:
        {{- include "vald.affinity" .Job.affinity | nindent 8 }}
      {{- end }}
charts/vald-helm-operator/crds/valdrelease.yaml (1)

8409-8453: LGTM! Well-structured affinity configuration implementation.

The affinity schema implementation across corrector, creator, and saver components follows Kubernetes best practices and provides comprehensive scheduling control:

  • Proper support for nodeAffinity, podAffinity, and podAntiAffinity
  • Consistent structure across all components
  • Follows standard Kubernetes affinity configuration patterns

Also applies to: 9778-9822, 13121-13165

@takuyaymd closed this on Dec 2, 2024
@takuyaymd deleted the feature/manager-index/add-affinities-to-job branch on December 3, 2024 05:08