Schedule the Pods on the same node (each time) #2374

Open
michaelfresco opened this issue Dec 22, 2024 · 0 comments


Problem description

I'm trying to predetermine the names of the pods and the nodes they are
scheduled on. My node names are: deb-1, deb-2, deb-3, deb-4, deb-5.
I want a predictable structure, yet the pods land on a different node
with each deployment.

Actual:

myminio-pool-0-0 @ deb-3
myminio-pool-0-1 @ deb-5
myminio-pool-0-2 @ deb-4
myminio-pool-0-3 @ deb-1
myminio-pool-0-4 @ deb-2

Desired:

myminio-pool-0-0 @ deb-1
myminio-pool-0-1 @ deb-2
myminio-pool-0-2 @ deb-3
myminio-pool-0-3 @ deb-4
myminio-pool-0-4 @ deb-5

Managing the data becomes much easier if the pods (along with PVCs and PVs)
are scheduled on the same node each time you create the tenant.
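
The name-to-node mapping above can be checked after each deployment with a plain listing; the NODE column shows where each pod landed:

    kubectl -n minio-tenant get pods -o wide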

Describe the solution you'd like

A nodeSelector template would be ideal:

    nodeSelector:
      name: deb-{{ .Instance }}
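
A workaround that seems possible with the current API (a sketch, not a documented Operator pattern; the pool and node names are taken from this issue) is to split the tenant into one single-server pool per node and pin each pool with a plain nodeSelector. Since MinIO generally wants at least four volumes per erasure set, volumesPerServer is raised to 4 here, which changes the drive layout:

    pools:
    - name: pool-0
      servers: 1
      volumesPerServer: 4
      nodeSelector:
        kubernetes.io/hostname: deb-1
    - name: pool-1
      servers: 1
      volumesPerServer: 4
      nodeSelector:
        kubernetes.io/hostname: deb-2
    # pool-2 through pool-4 repeat the pattern for deb-3, deb-4, deb-5

The pod names then encode the node directly (myminio-pool-0-0 always runs on deb-1), at the cost of more verbose pool definitions.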

Describe alternatives you've considered

I have consulted the following pages:

Additional context

I started with:

kubectl kustomize https://github.com/minio/operator/examples/kustomization/base/ > tenant-base.yaml

My tenant section looks like this:

apiVersion: minio.min.io/v2
kind: Tenant
metadata:
  annotations:
    prometheus.io/path: /minio/v2/metrics/cluster
    prometheus.io/port: "9000"
    prometheus.io/scrape: "true"
  labels:
    app: minio
  name: myminio
  namespace: minio-tenant
spec:
  certConfig: {}
  configuration:
    name: storage-configuration
  env: []
  externalCaCertSecret: []
  externalCertSecret: []
  externalClientCertSecrets: []
  features:
    bucketDNS: false
    domains: {}
  image: quay.io/minio/minio:RELEASE.2024-10-02T17-50-41Z
  imagePullSecret: {}
  mountPath: /export
  podManagementPolicy: Parallel
  pools:
  - affinity:
      # Restrict scheduling to the node set (deb-1 .. deb-5)
      nodeAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
          nodeSelectorTerms:
          - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
              - deb-1
              - deb-2
              - deb-3
              - deb-4
              - deb-5
      podAffinity: {}
      # Hard requirement: no two pods with the same label on the same node
      podAntiAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchLabels:
                app: minio
            topologyKey: "kubernetes.io/hostname"
            # The topologyKey defines the domain within which the anti-affinity
            # rule applies; with "kubernetes.io/hostname" it is enforced per node.
    containerSecurityContext:
      allowPrivilegeEscalation: false
      capabilities:
        drop:
        - ALL
      runAsGroup: 1000
      runAsNonRoot: true
      runAsUser: 1000
      seccompProfile:
        type: RuntimeDefault
    name: pool-0
    nodeSelector:
      pool: zero
    resources: {}
    securityContext:
      fsGroup: 1000
      fsGroupChangePolicy: OnRootMismatch
      runAsGroup: 1000
      runAsNonRoot: true
      runAsUser: 1000
    servers: 5
    tolerations: []
    topologySpreadConstraints: []
    volumeClaimTemplate:
      apiVersion: v1
      kind: persistentvolumeclaims
      metadata: {}
      spec:
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 8000Mi
        storageClassName: ssd-local-path-delete
      status: {}
    volumesPerServer: 1
  priorityClassName: ""
  requestAutoCert: true
  serviceAccountName: ""
  serviceMetadata:
    consoleServiceAnnotations: {}
    consoleServiceLabels: {}
    minioServiceAnnotations: {}
    minioServiceLabels: {}
  subPath: ""
  users:
  - name: storage-user
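
Since the volumeClaimTemplate uses a local storage class, another angle (a sketch, assuming statically pre-provisioned volumes; the PV name and path below are hypothetical) is to create one local PersistentVolume per node. Once a PVC binds to a local PV, the scheduler can only place the corresponding pod on that PV's node, so the pod/PVC/PV trio stays together:

    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: myminio-deb-1-0          # hypothetical: one PV per node/ordinal
    spec:
      capacity:
        storage: 8000Mi
      accessModes:
      - ReadWriteOnce
      persistentVolumeReclaimPolicy: Retain
      storageClassName: ssd-local-path-delete
      local:
        path: /mnt/ssd/myminio       # hypothetical mount point on deb-1
      nodeAffinity:
        required:
          nodeSelectorTerms:
          - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
              - deb-1

This does not by itself guarantee which ordinal binds to which node on the very first creation, but once bound the mapping stays stable for as long as the PVs are retained.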