Annotations failed #34

Closed
ialejandro opened this issue Aug 26, 2024 · 4 comments
Labels: bug Something isn't working

@ialejandro (Member)

I still have the error with this version 😿, but my PR #32 has fixed it, btw.

My annotations:

connectors:
- name: splunk
  enabled: true
  replicas: 1
  podAnnotations:
    vault.security.banzaicloud.io/vault-addr: "https://vault.test:8200"
    vault.security.banzaicloud.io/vault-path: "kubernetes_itsec"
    vault.security.banzaicloud.io/vault-role: "opencti"
    vault.security.banzaicloud.io/vault-skip-verify: "true"
  image:
    repository: opencti/connector-splunk
    tag: "6.2.15"
    ...
Error: Failed to render chart: exit status 1: Error: unable to build kubernetes objects from release manifest: error validating "": error validating data: [
  ValidationError(Deployment.spec.template.metadata.annotations.Capabilities): invalid type for io.k8s.apimachinery.pkg.apis.meta.v1.ObjectMeta.annotations: got "map", expected "string",
  ValidationError(Deployment.spec.template.metadata.annotations.Chart): invalid type for io.k8s.apimachinery.pkg.apis.meta.v1.ObjectMeta.annotations: got "map", expected "string",
  ValidationError(Deployment.spec.template.metadata.annotations.Files): invalid type for io.k8s.apimachinery.pkg.apis.meta.v1.ObjectMeta.annotations: got "map", expected "string",
  ValidationError(Deployment.spec.template.metadata.annotations.Release): invalid type for io.k8s.apimachinery.pkg.apis.meta.v1.ObjectMeta.annotations: got "map", expected "string",
  ValidationError(Deployment.spec.template.metadata.annotations.Subcharts): invalid type for io.k8s.apimachinery.pkg.apis.meta.v1.ObjectMeta.annotations: got "map", expected "string",
  ValidationError(Deployment.spec.template.metadata.annotations.Template): invalid type for io.k8s.apimachinery.pkg.apis.meta.v1.ObjectMeta.annotations: got "map", expected "string",
  ValidationError(Deployment.spec.template.metadata.annotations.Values): invalid type for io.k8s.apimachinery.pkg.apis.meta.v1.ObjectMeta.annotations: got "map", expected "string"
]

Originally posted by @r4zr1 in #33 (comment)
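
For context: Capabilities, Chart, Files, Release, Subcharts, Template, and Values are the built-in top-level keys of Helm's root template context, so an error like this usually means the deployment template serialized the whole context into spec.template.metadata.annotations instead of just the user-supplied map (annotation values must be strings). A minimal sketch of that failure mode, as a hypothetical template rather than this chart's actual code:

# Broken: dumps the entire root context (.Capabilities, .Chart, .Files,
# .Release, .Subcharts, .Template, .Values) into the annotations map.
template:
  metadata:
    annotations:
      {{- toYaml . | nindent 8 }}

# Fixed: scope the serialization to the user-supplied annotations only.
template:
  metadata:
    annotations:
      {{- toYaml .Values.podAnnotations | nindent 8 }}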

@ialejandro (Member, Author)

That's weird, @r4zr1. Can you paste the full values file here? Obfuscate any sensitive data; I'd like to test with your values and add different values to the CI/CD testing and linter steps, as sketched below.
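
For reference, the usual chart-testing way to cover a case like this is to commit the reporter's values as an extra ci/ values file, which ct then lints and renders automatically on every pull request. A minimal sketch, with hypothetical file names:

# chart-testing picks up every charts/opencti/ci/*-values.yaml it finds
cp issue-34-values.yaml charts/opencti/ci/vault-annotations-values.yaml
ct lint --charts charts/opencti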

@ialejandro ialejandro added the bug Something isn't working label Aug 26, 2024
@r4zr1 commented Aug 27, 2024

@ialejandro,

Here it is:

replicaCount: 1
fullnameOverride: opencti

image:
  repository: opencti/platform
  tag: "6.2.15"
  pullPolicy: IfNotPresent
serviceAccount:
  create: true
  name: opencti
  automountServiceAccountToken: true

podAnnotations:
  vault.security.banzaicloud.io/vault-addr: https://vault.test.env:8200
  vault.security.banzaicloud.io/vault-path: kubernetes_test2-rke-test-env
  vault.security.banzaicloud.io/vault-role: opencti
  vault.security.banzaicloud.io/vault-skip-verify: "true"

ingress:
  enabled: true
  className: "nginx"
  hosts:
    - host: opencti.test.env
      paths:
        - path: /
          pathType: Prefix
  tls:
    - secretName: ingress-test-tls
      hosts:
        - opencti.test.env

env:
  APP__TELEMETRY__METRICS__ENABLED: true
  APP__GRAPHQL__PLAYGROUND__ENABLED: false
  APP__GRAPHQL__PLAYGROUND__FORCE_DISABLED_INTROSPECTION: false
  APP__ADMIN__EMAIL: vault:test/data/soc/opencti#APP__ADMIN__EMAIL
  APP__ADMIN__PASSWORD: vault:test/data/soc/opencti#APP__ADMIN__PASSWORD
  APP__ADMIN__TOKEN: vault:test/data/soc/opencti#APP__ADMIN__TOKEN
  APP__HEALTH_ACCESS_KEY: 11111111-1111-1111-1111-111111111
  APP__APP_LOGS__LOGS_LEVEL: error
  NODE_TLS_REJECT_UNAUTHORIZED: 0
  APP__BASE_PATH: "/"
  ELASTICSEARCH__ENGINE_SELECTOR: elk
  ELASTICSEARCH__URL: http://opencti-elasticsearch:9200
  MINIO__ENDPOINT: ny-minio-04.test.env
  MINIO__PORT: 443
  MINIO__USE_SSL: true
  MINIO__ACCESS_KEY: vault:test/data/soc/opencti#MINIO__ACCESS_KEY
  MINIO__SECRET_KEY: vault:test/data/soc/opencti#MINIO__SECRET_KEY
  MINIO__BUCKET_NAME: opencti-test
  MINIO__BUCKET_REGION: ams-1
  RABBITMQ__HOSTNAME: opencti-rabbitmq
  RABBITMQ__PORT_MANAGEMENT: 15672
  RABBITMQ__PORT: 5672
  RABBITMQ__USERNAME: user
  RABBITMQ__PASSWORD: vault:test/data/soc/opencti#RABBITMQ__PASSWORD
  REDIS__HOSTNAME: opencti-redis-master
  REDIS__MODE: single
  REDIS__PORT: 6379
  PROVIDERS__OPENID__STRATEGY: OpenIDConnectStrategy
  PROVIDERS__OPENID__CONFIG__LABEL: "Login with Okta"
  PROVIDERS__OPENID__CONFIG__ISSUER: https://test.okta.com
  PROVIDERS__OPENID__CONFIG__CLIENT_ID: vault:test/data/soc/opencti#PROVIDERS__OPENID__CONFIG__CLIENT_ID
  PROVIDERS__OPENID__CONFIG__CLIENT_SECRET: vault:test/data/soc/opencti#PROVIDERS__OPENID__CONFIG__CLIENT_SECRET
  PROVIDERS__OPENID__CONFIG__REDIRECT_URIS: "[\"https://opencti.test.env/auth/oic/callback\"]"
  PROVIDERS__OPENID__CONFIG__LOGOUT_REMOTE: false
  PROVIDERS__OPENID__CONFIG__GROUPS_MANAGEMENT__GROUPS_SCOPE: groups

testConnection: false

serviceMonitor:
  enabled: false

autoscaling:
  enabled: false

worker:
  enabled: true
  autoscaling:
    enabled: false
  podAnnotations:
    vault.security.banzaicloud.io/vault-addr: https://vault.test.env:8200
    vault.security.banzaicloud.io/vault-path: kubernetes_test2-rke-test-env
    vault.security.banzaicloud.io/vault-role: soc-opencti
    vault.security.banzaicloud.io/vault-skip-verify: "true"
  image:
    repository: opencti/worker
    tag: "6.2.15"
    pullPolicy: IfNotPresent

elasticsearch:
  fullnameOverride: opencti-elasticsearch
  master:
    persistence:
      enabled: true
      size: 16Gi
    resources:
      limits:
        cpu: 1
        memory: 2Gi
      requests:
        cpu: 500m
        memory: 1Gi
  data:
    persistence:
      enabled: true
      size: 32Gi
    resources:
      limits:
        cpu: 1
        memory: 2Gi
      requests:
        cpu: 500m
        memory: 1Gi
    heapSize: 512m
  extraConfig:
    node:
      store:
        allow_mmap: false

minio:
  enabled: false
  fullnameOverride: opencti-minio

rabbitmq:
  enabled: true
  replicaCount: 1
  resources:
    limits:
      cpu: 1
      memory: 2Gi
    requests:
      cpu: 500m
      memory: 1Gi
  clustering:
    enabled: false
  fullnameOverride: opencti-rabbitmq
  persistence:
    enabled: true
    size: 8Gi
  auth:
    existingPasswordSecret: opencti-rabbitmq-secret
    existingErlangSecret: opencti-rabbitmq-secret

redis:
  master:
    persistence:
      enabled: true
      size: 32Gi
    resources:
      limits:
        cpu: 1
        memory: 1Gi
      requests:
        cpu: 500m
        memory: 1Gi
  fullnameOverride: opencti-redis

readyChecker:
  enabled: true
  retries: 30
  timeout: 5
  services:
  - name: elasticsearch
    port: 9200
  - name: rabbitmq
    port: 5672
  - name: redis-master
    port: 6379

connectors:
- name: splunk
  enabled: true
  replicas: 1
  podAnnotations:
    vault.security.banzaicloud.io/vault-addr: "https://vault.test.env:8200"
    vault.security.banzaicloud.io/vault-path: "kubernetes_test2-rke-test-env"
    vault.security.banzaicloud.io/vault-role: "opencti"
    vault.security.banzaicloud.io/vault-skip-verify: "true"
  image:
    repository: opencti/connector-splunk
    tag: "6.2.15"
  env:
    SPLUNK_URL: https://siem-api.test.io
    SPLUNK_TOKEN: vault:test/data/soc/opencti#SPLUNK_TOKEN
    SPLUNK_OWNER: nobody
    SPLUNK_SSL_VERIFY: true
    SPLUNK_APP: search
    SPLUNK_KV_STORE_NAME: opencti
    SPLUNK_IGNORE_TYPES: "attack-pattern,campaign,course-of-action,data-component,data-source,external-reference,identity,intrusion-set,kill-chain-phase,label,location,malware,marking-definition,relationship,threat-actor,tool,vocabulary,vulnerability"
    CONNECTOR_ID: 11111111-1111-1111-1111-111111111
    CONNECTOR_LIVE_STREAM_ID: live
    CONNECTOR_LIVE_STREAM_LISTEN_DELETE: true
    CONNECTOR_LIVE_STREAM_NO_DEPENDENCIES: true
    CONNECTOR_NAME: "OpenCTI Splunk Connector"
    CONNECTOR_SCOPE: splunk
    CONNECTOR_CONFIDENCE_LEVEL: 2
    CONNECTOR_LOG_LEVEL: error
    CONNECTOR_TYPE: STREAM
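
With this values file, the failure should be reproducible locally before touching a cluster; a minimal sketch, assuming the chart is checked out at ./charts/opencti:

# render only (catches template syntax errors):
helm template opencti ./charts/opencti -f values.yaml > /dev/null

# validate the rendered manifest against the cluster's OpenAPI schema;
# this is the step that produces the "unable to build kubernetes objects" error:
helm install opencti ./charts/opencti -f values.yaml --dry-run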

@ialejandro (Member, Author)

Hmm, @r4zr1, I ran the render again with the new Helm chart version:

---
# Source: opencti/charts/elasticsearch/templates/data/networkpolicy.yaml
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: opencti-elasticsearch-data
  namespace: "default"
  labels:
    app.kubernetes.io/instance: opencti
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: elasticsearch
    app.kubernetes.io/version: 8.15.0
    helm.sh/chart: elasticsearch-21.3.8
    app.kubernetes.io/component: data
spec:
  podSelector:
    matchLabels:
      app.kubernetes.io/instance: opencti
      app.kubernetes.io/name: elasticsearch
      app.kubernetes.io/component: data
  policyTypes:
    - Ingress
    - Egress
  egress:
    - {}
  ingress:
    - ports:
        - port: 9200
        - port: 9300
---
# Source: opencti/charts/elasticsearch/templates/master/networkpolicy.yaml
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: opencti-elasticsearch-master
  namespace: "default"
  labels:
    app.kubernetes.io/instance: opencti
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: elasticsearch
    app.kubernetes.io/version: 8.15.0
    helm.sh/chart: elasticsearch-21.3.8
    app.kubernetes.io/component: master
spec:
  podSelector:
    matchLabels:
      app.kubernetes.io/instance: opencti
      app.kubernetes.io/name: elasticsearch
      app.kubernetes.io/component: master
  policyTypes:
    - Ingress
    - Egress
  egress:
    - {}
  ingress:
    - ports:
        - port: 9200
        - port: 9300
---
# Source: opencti/charts/rabbitmq/templates/networkpolicy.yaml
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: opencti-rabbitmq
  namespace: "default"
  labels:
    app.kubernetes.io/instance: opencti
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: rabbitmq
    app.kubernetes.io/version: 3.13.6
    helm.sh/chart: rabbitmq-14.6.6
spec:
  podSelector:
    matchLabels:
      app.kubernetes.io/instance: opencti
      app.kubernetes.io/name: rabbitmq
  policyTypes:
    - Ingress
    - Egress
  egress:
    - {}
  ingress:
    # Allow inbound connections to RabbitMQ
    - ports:
        - port: 4369
        - port: 5672
        - port: 5671
        - port: 25672
        - port: 15672
---
# Source: opencti/charts/redis/templates/networkpolicy.yaml
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: opencti-redis
  namespace: "default"
  labels:
    app.kubernetes.io/instance: opencti
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: redis
    app.kubernetes.io/version: 7.2.5
    helm.sh/chart: redis-19.6.4
spec:
  podSelector:
    matchLabels:
      app.kubernetes.io/instance: opencti
      app.kubernetes.io/name: redis
  policyTypes:
    - Ingress
    - Egress
  egress:
    - {}
  ingress:
    # Allow inbound connections
    - ports:
        - port: 6379
---
# Source: opencti/charts/elasticsearch/templates/data/pdb.yaml
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: opencti-elasticsearch-data
  namespace: "default"
  labels:
    app.kubernetes.io/instance: opencti
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: elasticsearch
    app.kubernetes.io/version: 8.15.0
    helm.sh/chart: elasticsearch-21.3.8
    app.kubernetes.io/component: data
spec:
  maxUnavailable: 1
  selector:
    matchLabels:
      app.kubernetes.io/instance: opencti
      app.kubernetes.io/name: elasticsearch
      app.kubernetes.io/component: data
---
# Source: opencti/charts/elasticsearch/templates/master/pdb.yaml
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: opencti-elasticsearch-master
  namespace: "default"
  labels:
    app.kubernetes.io/instance: opencti
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: elasticsearch
    app.kubernetes.io/version: 8.15.0
    helm.sh/chart: elasticsearch-21.3.8
    app.kubernetes.io/component: master
spec:
  maxUnavailable: 1
  selector:
    matchLabels:
      app.kubernetes.io/instance: opencti
      app.kubernetes.io/name: elasticsearch
      app.kubernetes.io/component: master
---
# Source: opencti/charts/rabbitmq/templates/pdb.yaml
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: opencti-rabbitmq
  namespace: "default"
  labels:
    app.kubernetes.io/instance: opencti
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: rabbitmq
    app.kubernetes.io/version: 3.13.6
    helm.sh/chart: rabbitmq-14.6.6
spec:
  maxUnavailable: 1
  selector:
    matchLabels:
      app.kubernetes.io/instance: opencti
      app.kubernetes.io/name: rabbitmq
---
# Source: opencti/charts/redis/templates/master/pdb.yaml
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: opencti-redis-master
  namespace: "default"
  labels:
    app.kubernetes.io/instance: opencti
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: redis
    app.kubernetes.io/version: 7.2.5
    helm.sh/chart: redis-19.6.4
    app.kubernetes.io/component: master
spec:
  maxUnavailable: 1
  selector:
    matchLabels:
      app.kubernetes.io/instance: opencti
      app.kubernetes.io/name: redis
      app.kubernetes.io/component: master
---
# Source: opencti/charts/elasticsearch/templates/data/serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: opencti-elasticsearch-data
  namespace: "default"
  labels:
    app.kubernetes.io/instance: opencti
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: elasticsearch
    app.kubernetes.io/version: 8.15.0
    helm.sh/chart: elasticsearch-21.3.8
    app.kubernetes.io/component: data
automountServiceAccountToken: false
---
# Source: opencti/charts/elasticsearch/templates/master/serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: opencti-elasticsearch-master
  namespace: "default"
  labels:
    app.kubernetes.io/instance: opencti
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: elasticsearch
    app.kubernetes.io/version: 8.15.0
    helm.sh/chart: elasticsearch-21.3.8
    app.kubernetes.io/component: master
automountServiceAccountToken: false
---
# Source: opencti/charts/rabbitmq/templates/serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: opencti-rabbitmq
  namespace: "default"
  labels:
    app.kubernetes.io/instance: opencti
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: rabbitmq
    app.kubernetes.io/version: 3.13.6
    helm.sh/chart: rabbitmq-14.6.6
automountServiceAccountToken: false
secrets:
  - name: opencti-rabbitmq-secret
---
# Source: opencti/charts/redis/templates/master/serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
automountServiceAccountToken: false
metadata:
  name: opencti-redis-master
  namespace: "default"
  labels:
    app.kubernetes.io/instance: opencti
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: redis
    app.kubernetes.io/version: 7.2.5
    helm.sh/chart: redis-19.6.4
---
# Source: opencti/templates/serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: opencti
  labels:
    helm.sh/chart: opencti-1.2.18
    app.kubernetes.io/name: opencti
    app.kubernetes.io/instance: opencti
    app.kubernetes.io/version: "6.2.15"
    app.kubernetes.io/managed-by: Helm
automountServiceAccountToken: true
---
# Source: opencti/charts/rabbitmq/templates/config-secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: opencti-rabbitmq-config
  namespace: "default"
  labels:
    app.kubernetes.io/instance: opencti
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: rabbitmq
    app.kubernetes.io/version: 3.13.6
    helm.sh/chart: rabbitmq-14.6.6
type: Opaque
data:
  rabbitmq.conf: |-
    IyMgVXNlcm5hbWUgYW5kIHBhc3N3b3JkCmRlZmF1bHRfdXNlciA9IHVzZXIKCiMgcXVldWUgbWFzdGVyIGxvY2F0b3IKcXVldWVfbWFzdGVyX2xvY2F0b3IgPSBtaW4tbWFzdGVycwojIGVuYWJsZSBsb29wYmFjayB1c2VyCmxvb3BiYWNrX3VzZXJzLnVzZXIgPSBmYWxzZQojZGVmYXVsdF92aG9zdCA9IGRlZmF1bHQtdmhvc3QKI2Rpc2tfZnJlZV9saW1pdC5hYnNvbHV0ZSA9IDUwTUIKIyMgUHJvbWV0aGV1cyBtZXRyaWNzCiMjCnByb21ldGhldXMudGNwLnBvcnQgPSA5NDE5
---
# Source: opencti/charts/elasticsearch/templates/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: opencti-elasticsearch
  namespace: "default"
  labels:
    app.kubernetes.io/instance: opencti
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: elasticsearch
    app.kubernetes.io/version: 8.15.0
    helm.sh/chart: elasticsearch-21.3.8
data:
  my_elasticsearch.yml: |-
    node:
      store:
        allow_mmap: false
---
# Source: opencti/charts/redis/templates/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: opencti-redis-configuration
  namespace: "default"
  labels:
    app.kubernetes.io/instance: opencti
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: redis
    app.kubernetes.io/version: 7.2.5
    helm.sh/chart: redis-19.6.4
data:
  redis.conf: |-
    # User-supplied common configuration:
    # Enable AOF https://redis.io/topics/persistence#append-only-file
    appendonly yes
    # Disable RDB persistence, AOF persistence already enabled.
    save ""
    # End of common configuration
  master.conf: |-
    dir /data
    # User-supplied master configuration:
    rename-command FLUSHDB ""
    rename-command FLUSHALL ""
    # End of master configuration
  replica.conf: |-
    dir /data
    # User-supplied replica configuration:
    rename-command FLUSHDB ""
    rename-command FLUSHALL ""
    # End of replica configuration
---
# Source: opencti/charts/redis/templates/health-configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: opencti-redis-health
  namespace: "default"
  labels:
    app.kubernetes.io/instance: opencti
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: redis
    app.kubernetes.io/version: 7.2.5
    helm.sh/chart: redis-19.6.4
data:
  ping_readiness_local.sh: |-
    #!/bin/bash

    [[ -f $REDIS_PASSWORD_FILE ]] && export REDIS_PASSWORD="$(< "${REDIS_PASSWORD_FILE}")"
    [[ -n "$REDIS_PASSWORD" ]] && export REDISCLI_AUTH="$REDIS_PASSWORD"
    response=$(
      timeout -s 15 $1 \
      redis-cli \
        -h localhost \
        -p $REDIS_PORT \
        ping
    )
    if [ "$?" -eq "124" ]; then
      echo "Timed out"
      exit 1
    fi
    if [ "$response" != "PONG" ]; then
      echo "$response"
      exit 1
    fi
  ping_liveness_local.sh: |-
    #!/bin/bash

    [[ -f $REDIS_PASSWORD_FILE ]] && export REDIS_PASSWORD="$(< "${REDIS_PASSWORD_FILE}")"
    [[ -n "$REDIS_PASSWORD" ]] && export REDISCLI_AUTH="$REDIS_PASSWORD"
    response=$(
      timeout -s 15 $1 \
      redis-cli \
        -h localhost \
        -p $REDIS_PORT \
        ping
    )
    if [ "$?" -eq "124" ]; then
      echo "Timed out"
      exit 1
    fi
    responseFirstWord=$(echo $response | head -n1 | awk '{print $1;}')
    if [ "$response" != "PONG" ] && [ "$responseFirstWord" != "LOADING" ] && [ "$responseFirstWord" != "MASTERDOWN" ]; then
      echo "$response"
      exit 1
    fi
  ping_readiness_master.sh: |-
    #!/bin/bash

    [[ -f $REDIS_MASTER_PASSWORD_FILE ]] && export REDIS_MASTER_PASSWORD="$(< "${REDIS_MASTER_PASSWORD_FILE}")"
    [[ -n "$REDIS_MASTER_PASSWORD" ]] && export REDISCLI_AUTH="$REDIS_MASTER_PASSWORD"
    response=$(
      timeout -s 15 $1 \
      redis-cli \
        -h $REDIS_MASTER_HOST \
        -p $REDIS_MASTER_PORT_NUMBER \
        ping
    )
    if [ "$?" -eq "124" ]; then
      echo "Timed out"
      exit 1
    fi
    if [ "$response" != "PONG" ]; then
      echo "$response"
      exit 1
    fi
  ping_liveness_master.sh: |-
    #!/bin/bash

    [[ -f $REDIS_MASTER_PASSWORD_FILE ]] && export REDIS_MASTER_PASSWORD="$(< "${REDIS_MASTER_PASSWORD_FILE}")"
    [[ -n "$REDIS_MASTER_PASSWORD" ]] && export REDISCLI_AUTH="$REDIS_MASTER_PASSWORD"
    response=$(
      timeout -s 15 $1 \
      redis-cli \
        -h $REDIS_MASTER_HOST \
        -p $REDIS_MASTER_PORT_NUMBER \
        ping
    )
    if [ "$?" -eq "124" ]; then
      echo "Timed out"
      exit 1
    fi
    responseFirstWord=$(echo $response | head -n1 | awk '{print $1;}')
    if [ "$response" != "PONG" ] && [ "$responseFirstWord" != "LOADING" ]; then
      echo "$response"
      exit 1
    fi
  ping_readiness_local_and_master.sh: |-
    script_dir="$(dirname "$0")"
    exit_status=0
    "$script_dir/ping_readiness_local.sh" $1 || exit_status=$?
    "$script_dir/ping_readiness_master.sh" $1 || exit_status=$?
    exit $exit_status
  ping_liveness_local_and_master.sh: |-
    script_dir="$(dirname "$0")"
    exit_status=0
    "$script_dir/ping_liveness_local.sh" $1 || exit_status=$?
    "$script_dir/ping_liveness_master.sh" $1 || exit_status=$?
    exit $exit_status
---
# Source: opencti/charts/redis/templates/scripts-configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: opencti-redis-scripts
  namespace: "default"
  labels:
    app.kubernetes.io/instance: opencti
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: redis
    app.kubernetes.io/version: 7.2.5
    helm.sh/chart: redis-19.6.4
data:
  start-master.sh: |
    #!/bin/bash

    [[ -f $REDIS_PASSWORD_FILE ]] && export REDIS_PASSWORD="$(< "${REDIS_PASSWORD_FILE}")"
    if [[ -f /opt/bitnami/redis/mounted-etc/master.conf ]];then
        cp /opt/bitnami/redis/mounted-etc/master.conf /opt/bitnami/redis/etc/master.conf
    fi
    if [[ -f /opt/bitnami/redis/mounted-etc/redis.conf ]];then
        cp /opt/bitnami/redis/mounted-etc/redis.conf /opt/bitnami/redis/etc/redis.conf
    fi
    ARGS=("--port" "${REDIS_PORT}")
    ARGS+=("--protected-mode" "no")
    ARGS+=("--include" "/opt/bitnami/redis/etc/redis.conf")
    ARGS+=("--include" "/opt/bitnami/redis/etc/master.conf")
    exec redis-server "${ARGS[@]}"
---
# Source: opencti/charts/rabbitmq/templates/role.yaml
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: opencti-rabbitmq-endpoint-reader
  namespace: "default"
  labels:
    app.kubernetes.io/instance: opencti
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: rabbitmq
    app.kubernetes.io/version: 3.13.6
    helm.sh/chart: rabbitmq-14.6.6
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create"]
---
# Source: opencti/charts/rabbitmq/templates/rolebinding.yaml
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: opencti-rabbitmq-endpoint-reader
  namespace: "default"
  labels:
    app.kubernetes.io/instance: opencti
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: rabbitmq
    app.kubernetes.io/version: 3.13.6
    helm.sh/chart: rabbitmq-14.6.6
subjects:
  - kind: ServiceAccount
    name: opencti-rabbitmq
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: opencti-rabbitmq-endpoint-reader
---
# Source: opencti/charts/elasticsearch/templates/data/svc-headless.yaml
apiVersion: v1
kind: Service
metadata:
  name: opencti-elasticsearch-data-hl
  namespace: "default"
  labels:
    app.kubernetes.io/instance: opencti
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: elasticsearch
    app.kubernetes.io/version: 8.15.0
    helm.sh/chart: elasticsearch-21.3.8
    app.kubernetes.io/component: data
spec:
  type: ClusterIP
  clusterIP: None
  publishNotReadyAddresses: true
  ports:
    - name: tcp-rest-api
      port: 9200
      targetPort: rest-api
    - name: tcp-transport
      port: 9300
      targetPort: transport
  selector:
    app.kubernetes.io/instance: opencti
    app.kubernetes.io/name: elasticsearch
    app.kubernetes.io/component: data
---
# Source: opencti/charts/elasticsearch/templates/master/svc-headless.yaml
apiVersion: v1
kind: Service
metadata:
  name: opencti-elasticsearch-master-hl
  namespace: "default"
  labels:
    app.kubernetes.io/instance: opencti
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: elasticsearch
    app.kubernetes.io/version: 8.15.0
    helm.sh/chart: elasticsearch-21.3.8
    app.kubernetes.io/component: master
spec:
  type: ClusterIP
  clusterIP: None
  publishNotReadyAddresses: true
  ports:
    - name: tcp-rest-api
      port: 9200
      targetPort: rest-api
    - name: tcp-transport
      port: 9300
      targetPort: transport
  selector:
    app.kubernetes.io/instance: opencti
    app.kubernetes.io/name: elasticsearch
    app.kubernetes.io/component: master
---
# Source: opencti/charts/elasticsearch/templates/service.yaml
apiVersion: v1
kind: Service
metadata:
  name: opencti-elasticsearch
  namespace: "default"
  labels:
    app.kubernetes.io/instance: opencti
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: elasticsearch
    app.kubernetes.io/version: 8.15.0
    helm.sh/chart: elasticsearch-21.3.8
    app.kubernetes.io/component: master
spec:
  type: ClusterIP
  sessionAffinity: None
  ports:
    - name: tcp-rest-api
      port: 9200
      targetPort: rest-api
      nodePort: null
    - name: tcp-transport
      port: 9300
      nodePort: null
  selector:
    app.kubernetes.io/instance: opencti
    app.kubernetes.io/name: elasticsearch
    app.kubernetes.io/component: master
---
# Source: opencti/charts/rabbitmq/templates/svc-headless.yaml
apiVersion: v1
kind: Service
metadata:
  name: opencti-rabbitmq-headless
  namespace: "default"
  labels:
    app.kubernetes.io/instance: opencti
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: rabbitmq
    app.kubernetes.io/version: 3.13.6
    helm.sh/chart: rabbitmq-14.6.6
spec:
  clusterIP: None
  ports:
    - name: epmd
      port: 4369
      targetPort: epmd
    - name: amqp
      port: 5672
      targetPort: amqp
    - name: dist
      port: 25672
      targetPort: dist
    - name: http-stats
      port: 15672
      targetPort: stats
  selector:
    app.kubernetes.io/instance: opencti
    app.kubernetes.io/name: rabbitmq
  publishNotReadyAddresses: true
---
# Source: opencti/charts/rabbitmq/templates/svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: opencti-rabbitmq
  namespace: "default"
  labels:
    app.kubernetes.io/instance: opencti
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: rabbitmq
    app.kubernetes.io/version: 3.13.6
    helm.sh/chart: rabbitmq-14.6.6
spec:
  type: ClusterIP
  sessionAffinity: None
  ports:
    - name: amqp
      port: 5672
      targetPort: amqp
      nodePort: null
    - name: epmd
      port: 4369
      targetPort: epmd
      nodePort: null
    - name: dist
      port: 25672
      targetPort: dist
      nodePort: null
    - name: http-stats
      port: 15672
      targetPort: stats
      nodePort: null
  selector:
    app.kubernetes.io/instance: opencti
    app.kubernetes.io/name: rabbitmq
---
# Source: opencti/charts/redis/templates/headless-svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: opencti-redis-headless
  namespace: "default"
  labels:
    app.kubernetes.io/instance: opencti
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: redis
    app.kubernetes.io/version: 7.2.5
    helm.sh/chart: redis-19.6.4
spec:
  type: ClusterIP
  clusterIP: None
  ports:
    - name: tcp-redis
      port: 6379
      targetPort: redis
  selector:
    app.kubernetes.io/instance: opencti
    app.kubernetes.io/name: redis
---
# Source: opencti/charts/redis/templates/master/service.yaml
apiVersion: v1
kind: Service
metadata:
  name: opencti-redis-master
  namespace: "default"
  labels:
    app.kubernetes.io/instance: opencti
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: redis
    app.kubernetes.io/version: 7.2.5
    helm.sh/chart: redis-19.6.4
    app.kubernetes.io/component: master
spec:
  type: ClusterIP
  internalTrafficPolicy: Cluster
  sessionAffinity: None
  ports:
    - name: tcp-redis
      port: 6379
      targetPort: redis
      nodePort: null
  selector:
    app.kubernetes.io/instance: opencti
    app.kubernetes.io/name: redis
    app.kubernetes.io/component: master
---
# Source: opencti/templates/server/service.yaml
apiVersion: v1
kind: Service
metadata:
  name: opencti-server
  labels:
    opencti.component: server
    helm.sh/chart: opencti-1.2.18
    app.kubernetes.io/name: opencti
    app.kubernetes.io/instance: opencti
    app.kubernetes.io/version: "6.2.15"
    app.kubernetes.io/managed-by: Helm
spec:
  type: ClusterIP
  ports:
    - port: 80
      targetPort: 4000
      protocol: TCP
      name: http
    - name: metrics
      port: 14269
      targetPort: 14269
      protocol: TCP
  selector:
    opencti.component: server
    app.kubernetes.io/name: opencti
    app.kubernetes.io/instance: opencti
---
# Source: opencti/templates/worker/service.yaml
apiVersion: v1
kind: Service
metadata:
  name: opencti-worker
  labels:
    opencti.component: worker
    helm.sh/chart: opencti-1.2.18
    app.kubernetes.io/name: opencti
    app.kubernetes.io/instance: opencti
    app.kubernetes.io/version: "6.2.15"
    app.kubernetes.io/managed-by: Helm
spec:
  type: ClusterIP
  ports:
    - name: metrics
      port: 14269
      targetPort: 14269
      protocol: TCP
  selector:
    opencti.component: worker
    app.kubernetes.io/name: opencti
    app.kubernetes.io/instance: opencti
---
# Source: opencti/templates/connector/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: splunk-connector-opencti
  labels:
    opencti.connector: splunk
    helm.sh/chart: opencti-1.2.18
    app.kubernetes.io/name: opencti
    app.kubernetes.io/instance: opencti
    app.kubernetes.io/version: "6.2.15"
    app.kubernetes.io/managed-by: Helm
spec:
  replicas: 1
  selector:
    matchLabels:
      opencti.connector: splunk
      app.kubernetes.io/name: opencti
      app.kubernetes.io/instance: opencti
  template:
    metadata:
      annotations:
        vault.security.banzaicloud.io/vault-addr: https://vault.test.env:8200
        vault.security.banzaicloud.io/vault-path: kubernetes_test2-rke-test-env
        vault.security.banzaicloud.io/vault-role: opencti
        vault.security.banzaicloud.io/vault-skip-verify: "true"
      labels:
        opencti.connector: splunk
        app.kubernetes.io/name: opencti
        app.kubernetes.io/instance: opencti
    spec:
      securityContext:
        null
      containers:
        - name: splunk-connector
          securityContext:
            null
          image: "opencti/connector-splunk:6.2.15"
          imagePullPolicy: IfNotPresent
          env:
          # Variables from secrets have precedence

          # Special handling for OPENCTI_URL which is constructed from other values
          - name: OPENCTI_URL
            value: "http://opencti-server:80"

          # Special handling for OPENCTI_TOKEN which is constructed from other values
          - name: OPENCTI_TOKEN
            value: "vault:test/data/soc/opencti#APP__ADMIN__TOKEN"

          # Add Variables in plain text if they were not already added from secrets
          - name: CONNECTOR_CONFIDENCE_LEVEL
            value: "2"
          - name: CONNECTOR_ID
            value: "11111111-1111-1111-1111-111111111"
          - name: CONNECTOR_LIVE_STREAM_ID
            value: "live"
          - name: CONNECTOR_LIVE_STREAM_LISTEN_DELETE
            value: "true"
          - name: CONNECTOR_LIVE_STREAM_NO_DEPENDENCIES
            value: "true"
          - name: CONNECTOR_LOG_LEVEL
            value: "error"
          - name: CONNECTOR_NAME
            value: "OpenCTI Splunk Connector"
          - name: CONNECTOR_SCOPE
            value: "splunk"
          - name: CONNECTOR_TYPE
            value: "STREAM"
          - name: SPLUNK_APP
            value: "search"
          - name: SPLUNK_IGNORE_TYPES
            value: "attack-pattern,campaign,course-of-action,data-component,data-source,external-reference,identity,intrusion-set,kill-chain-phase,label,location,malware,marking-definition,relationship,threat-actor,tool,vocabulary,vulnerability"
          - name: SPLUNK_KV_STORE_NAME
            value: "opencti"
          - name: SPLUNK_OWNER
            value: "nobody"
          - name: SPLUNK_SSL_VERIFY
            value: "true"
          - name: SPLUNK_TOKEN
            value: "vault:test/data/soc/opencti#SPLUNK_TOKEN"
          - name: SPLUNK_URL
            value: "https://siem-api.test.io"

          resources:
            null
---
# Source: opencti/templates/server/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: opencti-server
  labels:
    opencti.component: server
    helm.sh/chart: opencti-1.2.18
    app.kubernetes.io/name: opencti
    app.kubernetes.io/instance: opencti
    app.kubernetes.io/version: "6.2.15"
    app.kubernetes.io/managed-by: Helm
spec:
  replicas: 1
  selector:
    matchLabels:
      opencti.component: server
      app.kubernetes.io/name: opencti
      app.kubernetes.io/instance: opencti
  template:
    metadata:
      annotations:
        vault.security.banzaicloud.io/vault-addr: https://vault.test.env:8200
        vault.security.banzaicloud.io/vault-path: kubernetes_test2-rke-test-env
        vault.security.banzaicloud.io/vault-role: opencti
        vault.security.banzaicloud.io/vault-skip-verify: "true"
      labels:
        opencti.component: server
        app.kubernetes.io/name: opencti
        app.kubernetes.io/instance: opencti
    spec:
      serviceAccountName: opencti
      securityContext:
        null
      initContainers:
      - name: ready-checker-elasticsearch
        image: busybox
        command:
          - 'sh'
          - '-c'
          - 'RETRY=0; until [ $RETRY -eq 30 ]; do nc -zv opencti-elasticsearch 9200 && break; echo "[$RETRY/30] waiting service opencti-elasticsearch:9200 is ready"; sleep 5; RETRY=$(($RETRY + 1)); done'
      - name: ready-checker-rabbitmq
        image: busybox
        command:
          - 'sh'
          - '-c'
          - 'RETRY=0; until [ $RETRY -eq 30 ]; do nc -zv opencti-rabbitmq 5672 && break; echo "[$RETRY/30] waiting service opencti-rabbitmq:5672 is ready"; sleep 5; RETRY=$(($RETRY + 1)); done'
      - name: ready-checker-redis-master
        image: busybox
        command:
          - 'sh'
          - '-c'
          - 'RETRY=0; until [ $RETRY -eq 30 ]; do nc -zv opencti-redis-master 6379 && break; echo "[$RETRY/30] waiting service opencti-redis-master:6379 is ready"; sleep 5; RETRY=$(($RETRY + 1)); done'
      containers:
        - name: opencti-server
          securityContext:
            null
          image: "opencti/platform:6.2.15"
          imagePullPolicy: IfNotPresent
          ports:
            - name: http
              containerPort: 4000
              protocol: TCP
          livenessProbe:
            httpGet:
              path: "/health?health_access_key=11111111-1111-1111-1111-111111111"
              port: 4000
            failureThreshold: 3
            initialDelaySeconds: 180
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 5
          readinessProbe:
            httpGet:
              path: "/health?health_access_key=11111111-1111-1111-1111-111111111"
              port: 4000
            failureThreshold: 3
            initialDelaySeconds: 10
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 1
          startupProbe:
            httpGet:
              path: "/health?health_access_key=11111111-1111-1111-1111-111111111"
              port: 4000
            failureThreshold: 30
            initialDelaySeconds: 180
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 5
          env:
          - name: NODE_OPTIONS
            value: --max-old-space-size=8096
          - name: PROVIDERS__LOCAL__STRATEGY
            value: LocalStrategy

          # Variables from secrets have precedence

          # Add Variables in plain text if they were not already added from secrets
          - name: APP__ADMIN__EMAIL
            value: "vault:test/data/soc/opencti#APP__ADMIN__EMAIL"
          - name: APP__ADMIN__PASSWORD
            value: "vault:test/data/soc/opencti#APP__ADMIN__PASSWORD"
          - name: APP__ADMIN__TOKEN
            value: "vault:test/data/soc/opencti#APP__ADMIN__TOKEN"
          - name: APP__APP_LOGS__LOGS_LEVEL
            value: "error"
          - name: APP__BASE_PATH
            value: "/"
          - name: APP__GRAPHQL__PLAYGROUND__ENABLED
            value: "false"
          - name: APP__GRAPHQL__PLAYGROUND__FORCE_DISABLED_INTROSPECTION
            value: "false"
          - name: APP__HEALTH_ACCESS_KEY
            value: "11111111-1111-1111-1111-111111111"
          - name: APP__TELEMETRY__METRICS__ENABLED
            value: "true"
          - name: ELASTICSEARCH__ENGINE_SELECTOR
            value: "elk"
          - name: ELASTICSEARCH__URL
            value: "http://opencti-elasticsearch:9200"
          - name: MINIO__ACCESS_KEY
            value: "vault:test/data/soc/opencti#MINIO__ACCESS_KEY"
          - name: MINIO__BUCKET_NAME
            value: "opencti-test"
          - name: MINIO__BUCKET_REGION
            value: "ams-1"
          - name: MINIO__ENDPOINT
            value: "ny-minio-04.test.env"
          - name: MINIO__PORT
            value: "443"
          - name: MINIO__SECRET_KEY
            value: "vault:test/data/soc/opencti#MINIO__SECRET_KEY"
          - name: MINIO__USE_SSL
            value: "true"
          - name: NODE_TLS_REJECT_UNAUTHORIZED
            value: "0"
          - name: PROVIDERS__OPENID__CONFIG__CLIENT_ID
            value: "vault:test/data/soc/opencti#PROVIDERS__OPENID__CONFIG__CLIENT_ID"
          - name: PROVIDERS__OPENID__CONFIG__CLIENT_SECRET
            value: "vault:test/data/soc/opencti#PROVIDERS__OPENID__CONFIG__CLIENT_SECRET"
          - name: PROVIDERS__OPENID__CONFIG__GROUPS_MANAGEMENT__GROUPS_SCOPE
            value: "groups"
          - name: PROVIDERS__OPENID__CONFIG__ISSUER
            value: "https://test.okta.com"
          - name: PROVIDERS__OPENID__CONFIG__LABEL
            value: "Login with Okta"
          - name: PROVIDERS__OPENID__CONFIG__LOGOUT_REMOTE
            value: "false"
          - name: PROVIDERS__OPENID__CONFIG__REDIRECT_URIS
            value: "[\"https://opencti.test.env/auth/oic/callback\"]"
          - name: PROVIDERS__OPENID__STRATEGY
            value: "OpenIDConnectStrategy"
          - name: RABBITMQ__HOSTNAME
            value: "opencti-rabbitmq"
          - name: RABBITMQ__PASSWORD
            value: "vault:test/data/soc/opencti#RABBITMQ__PASSWORD"
          - name: RABBITMQ__PORT
            value: "5672"
          - name: RABBITMQ__PORT_MANAGEMENT
            value: "15672"
          - name: RABBITMQ__USERNAME
            value: "user"
          - name: REDIS__HOSTNAME
            value: "opencti-redis-master"
          - name: REDIS__MODE
            value: "single"
          - name: REDIS__PORT
            value: "6379"

          resources:
            {}
---
# Source: opencti/templates/worker/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: opencti-worker
  labels:
    opencti.component: worker
    helm.sh/chart: opencti-1.2.18
    app.kubernetes.io/name: opencti
    app.kubernetes.io/instance: opencti
    app.kubernetes.io/version: "6.2.15"
    app.kubernetes.io/managed-by: Helm
spec:
  replicas: 1
  selector:
    matchLabels:
      opencti.component: worker
      app.kubernetes.io/name: opencti
      app.kubernetes.io/instance: opencti
  template:
    metadata:
      annotations:
        vault.security.banzaicloud.io/vault-addr: https://vault.test.env:8200
        vault.security.banzaicloud.io/vault-path: kubernetes_test2-rke-test-env
        vault.security.banzaicloud.io/vault-role: soc-opencti
        vault.security.banzaicloud.io/vault-skip-verify: "true"
      labels:
        opencti.component: worker
        app.kubernetes.io/name: opencti
        app.kubernetes.io/instance: opencti
    spec:
      serviceAccountName: opencti
      securityContext:
        null
      initContainers:
      - name: ready-checker-server
        image: busybox
        command:
          - 'sh'
          - '-c'
          - 'RETRY=0; until [ $RETRY -eq 30 ]; do nc -zv opencti-server 80 && break; echo "[$RETRY/30] waiting service opencti-server:80 is ready"; sleep 5; RETRY=$(($RETRY + 1)); done'
      containers:
        - name: opencti-worker
          securityContext:
            null
          image: "opencti/worker:6.2.15"
          imagePullPolicy: IfNotPresent
          ports:
            - name: metrics
              containerPort: 14269
              protocol: TCP
          env:
          # Variables from secrets have precedence

          # Special handling for OPENCTI_URL which is constructed from other values
          - name: OPENCTI_URL
            value: "http://opencti-server:80"

          # Special handling for OPENCTI_TOKEN which is constructed from other values
          - name: OPENCTI_TOKEN
            value: "vault:test/data/soc/opencti#APP__ADMIN__TOKEN"

          # Add Variables in plain text from .Values.worker.env if they were not already added from secrets
          - name: WORKER_LOG_LEVEL
            value: "info"
          - name: WORKER_TELEMETRY_ENABLED
            value: "true"

          resources:
            {}
---
# Source: opencti/charts/elasticsearch/templates/data/statefulset.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: opencti-elasticsearch-data
  namespace: "default"
  labels:
    app.kubernetes.io/instance: opencti
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: elasticsearch
    app.kubernetes.io/version: 8.15.0
    helm.sh/chart: elasticsearch-21.3.8
    app.kubernetes.io/component: data
    ## Istio Labels: https://istio.io/docs/ops/deployment/requirements/
    app: data
spec:
  replicas: 1
  podManagementPolicy: Parallel
  selector:
    matchLabels:
      app.kubernetes.io/instance: opencti
      app.kubernetes.io/name: elasticsearch
      app.kubernetes.io/component: data
  serviceName: opencti-elasticsearch-data-hl
  updateStrategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        app.kubernetes.io/instance: opencti
        app.kubernetes.io/managed-by: Helm
        app.kubernetes.io/name: elasticsearch
        app.kubernetes.io/version: 8.15.0
        helm.sh/chart: elasticsearch-21.3.8
        app.kubernetes.io/component: data
        ## Istio Labels: https://istio.io/docs/ops/deployment/requirements/
        app: data
      annotations:
    spec:
      serviceAccountName: opencti-elasticsearch-data
      
      automountServiceAccountToken: false
      affinity:
        podAffinity:
          
        podAntiAffinity:
          
        nodeAffinity:
          
      securityContext:
        fsGroup: 1001
        fsGroupChangePolicy: Always
        supplementalGroups: []
        sysctls: []
      containers:
        - name: elasticsearch
          image: docker.io/bitnami/elasticsearch:8.15.0-debian-12-r1
          imagePullPolicy: "IfNotPresent"
          securityContext:
            allowPrivilegeEscalation: false
            capabilities:
              drop:
              - ALL
            privileged: false
            readOnlyRootFilesystem: true
            runAsGroup: 1001
            runAsNonRoot: true
            runAsUser: 1001
            seLinuxOptions: {}
            seccompProfile:
              type: RuntimeDefault
          env:
            - name: BITNAMI_DEBUG
              value: "false"
            - name: MY_POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: ELASTICSEARCH_IS_DEDICATED_NODE
              value: "yes"
            - name: ELASTICSEARCH_NODE_ROLES
              value: "data"
            - name: ELASTICSEARCH_TRANSPORT_PORT_NUMBER
              value: "9300"
            - name: ELASTICSEARCH_HTTP_PORT_NUMBER
              value: "9200"
            - name: ELASTICSEARCH_CLUSTER_NAME
              value: "elastic"
            - name: ELASTICSEARCH_CLUSTER_HOSTS
              value: "opencti-elasticsearch-master-hl.default.svc.cluster.local,opencti-elasticsearch-data-hl.default.svc.cluster.local,"
            - name: ELASTICSEARCH_TOTAL_NODES
              value: "2"
            - name: ELASTICSEARCH_CLUSTER_MASTER_HOSTS
              value: opencti-elasticsearch-master-0 
            - name: ELASTICSEARCH_MINIMUM_MASTER_NODES
              value: "1"
            - name: ELASTICSEARCH_ADVERTISED_HOSTNAME
              value: "$(MY_POD_NAME).opencti-elasticsearch-data-hl.default.svc.cluster.local"
            - name: ELASTICSEARCH_HEAP_SIZE
              value: "512m"
            - name: ES_JAVA_OPTS
              value: -Xms512M -Xmx512M
          ports:
            - name: rest-api
              containerPort: 9200
            - name: transport
              containerPort: 9300
          livenessProbe:
            failureThreshold: 5
            initialDelaySeconds: 180
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 5
            tcpSocket:
              port: rest-api
          readinessProbe:
            failureThreshold: 5
            initialDelaySeconds: 90
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 5
            exec:
              command:
                - /opt/bitnami/scripts/elasticsearch/healthcheck.sh
          resources:
            limits:
              cpu: 1
              memory: 2Gi
            requests:
              cpu: 500m
              memory: 1Gi
          volumeMounts:
            - name: empty-dir
              mountPath: /tmp
              subPath: tmp-dir
            - name: empty-dir
              mountPath: /opt/bitnami/elasticsearch/config
              subPath: app-conf-dir
            - name: empty-dir
              mountPath: /opt/bitnami/elasticsearch/tmp
              subPath: app-tmp-dir
            - name: empty-dir
              mountPath: /opt/bitnami/elasticsearch/logs
              subPath: app-logs-dir
            - name: empty-dir
              mountPath: /opt/bitnami/elasticsearch/plugins
              subPath: app-plugins-dir
            - name: data
              mountPath: /bitnami/elasticsearch/data
            - mountPath: /opt/bitnami/elasticsearch/config/my_elasticsearch.yml
              name: config
              subPath: my_elasticsearch.yml
      volumes:
        - name: empty-dir
          emptyDir: {}
        - name: config
          configMap:
            name: opencti-elasticsearch
  volumeClaimTemplates:
    - metadata:
        name: "data"
      spec:
        accessModes:
          - "ReadWriteOnce"
        resources:
          requests:
            storage: "32Gi"
---
# Source: opencti/charts/elasticsearch/templates/master/statefulset.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: opencti-elasticsearch-master
  namespace: "default"
  labels:
    app.kubernetes.io/instance: opencti
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: elasticsearch
    app.kubernetes.io/version: 8.15.0
    helm.sh/chart: elasticsearch-21.3.8
    app.kubernetes.io/component: master
    ## Istio Labels: https://istio.io/docs/ops/deployment/requirements/
    app: master
spec:
  replicas: 1
  podManagementPolicy: Parallel
  selector:
    matchLabels:
      app.kubernetes.io/instance: opencti
      app.kubernetes.io/name: elasticsearch
      app.kubernetes.io/component: master
  serviceName: opencti-elasticsearch-master-hl
  updateStrategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        app.kubernetes.io/instance: opencti
        app.kubernetes.io/managed-by: Helm
        app.kubernetes.io/name: elasticsearch
        app.kubernetes.io/version: 8.15.0
        helm.sh/chart: elasticsearch-21.3.8
        app.kubernetes.io/component: master
        ## Istio Labels: https://istio.io/docs/ops/deployment/requirements/
        app: master
      annotations:
    spec:
      serviceAccountName: opencti-elasticsearch-master
      
      automountServiceAccountToken: false
      affinity:
        podAffinity:
          
        podAntiAffinity:
          
        nodeAffinity:
          
      securityContext:
        fsGroup: 1001
        fsGroupChangePolicy: Always
        supplementalGroups: []
        sysctls: []
      containers:
        - name: elasticsearch
          image: docker.io/bitnami/elasticsearch:8.15.0-debian-12-r1
          imagePullPolicy: "IfNotPresent"
          securityContext:
            allowPrivilegeEscalation: false
            capabilities:
              drop:
              - ALL
            privileged: false
            readOnlyRootFilesystem: true
            runAsGroup: 1001
            runAsNonRoot: true
            runAsUser: 1001
            seLinuxOptions: {}
            seccompProfile:
              type: RuntimeDefault
          env:
            - name: BITNAMI_DEBUG
              value: "false"
            - name: MY_POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: ELASTICSEARCH_IS_DEDICATED_NODE
              value: "yes"
            - name: ELASTICSEARCH_NODE_ROLES
              value: "master"
            - name: ELASTICSEARCH_TRANSPORT_PORT_NUMBER
              value: "9300"
            - name: ELASTICSEARCH_HTTP_PORT_NUMBER
              value: "9200"
            - name: ELASTICSEARCH_CLUSTER_NAME
              value: "elastic"
            
            - name: ELASTICSEARCH_CLUSTER_HOSTS
              value: "opencti-elasticsearch-master-hl.default.svc.cluster.local,opencti-elasticsearch-data-hl.default.svc.cluster.local,"
            - name: ELASTICSEARCH_TOTAL_NODES
              value: "2"
            - name: ELASTICSEARCH_CLUSTER_MASTER_HOSTS
              value: opencti-elasticsearch-master-0 
            - name: ELASTICSEARCH_MINIMUM_MASTER_NODES
              value: "1"
            - name: ELASTICSEARCH_ADVERTISED_HOSTNAME
              value: "$(MY_POD_NAME).opencti-elasticsearch-master-hl.default.svc.cluster.local"
            - name: ELASTICSEARCH_HEAP_SIZE
              value: "128m"
            - name: ES_JAVA_OPTS
              value: -Xms512M -Xmx512M
          ports:
            - name: rest-api
              containerPort: 9200
            - name: transport
              containerPort: 9300
          livenessProbe:
            failureThreshold: 5
            initialDelaySeconds: 180
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 5
            tcpSocket:
              port: rest-api
          readinessProbe:
            failureThreshold: 5
            initialDelaySeconds: 90
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 5
            exec:
              command:
                - /opt/bitnami/scripts/elasticsearch/healthcheck.sh
          resources:
            limits:
              cpu: 1
              memory: 2Gi
            requests:
              cpu: 500m
              memory: 1Gi
          volumeMounts:
            - name: empty-dir
              mountPath: /tmp
              subPath: tmp-dir
            - name: empty-dir
              mountPath: /opt/bitnami/elasticsearch/config
              subPath: app-conf-dir
            - name: empty-dir
              mountPath: /opt/bitnami/elasticsearch/tmp
              subPath: app-tmp-dir
            - name: empty-dir
              mountPath: /opt/bitnami/elasticsearch/logs
              subPath: app-logs-dir
            - name: empty-dir
              mountPath: /opt/bitnami/elasticsearch/plugins
              subPath: app-plugins-dir
            - name: data
              mountPath: /bitnami/elasticsearch/data
            - mountPath: /opt/bitnami/elasticsearch/config/my_elasticsearch.yml
              name: config
              subPath: my_elasticsearch.yml
      volumes:
        - name: empty-dir
          emptyDir: {}
        - name: config
          configMap:
            name: opencti-elasticsearch
  volumeClaimTemplates:
    - metadata:
        name: "data"
      spec:
        accessModes:
          - "ReadWriteOnce"
        resources:
          requests:
            storage: "16Gi"
---
# Source: opencti/charts/rabbitmq/templates/statefulset.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: opencti-rabbitmq
  namespace: "default"
  labels:
    app.kubernetes.io/instance: opencti
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: rabbitmq
    app.kubernetes.io/version: 3.13.6
    helm.sh/chart: rabbitmq-14.6.6
spec:
  serviceName: opencti-rabbitmq-headless
  podManagementPolicy: OrderedReady
  replicas: 1
  updateStrategy:
    type: RollingUpdate
  selector:
    matchLabels:
      app.kubernetes.io/instance: opencti
      app.kubernetes.io/name: rabbitmq
  template:
    metadata:
      labels:
        app.kubernetes.io/instance: opencti
        app.kubernetes.io/managed-by: Helm
        app.kubernetes.io/name: rabbitmq
        app.kubernetes.io/version: 3.13.6
        helm.sh/chart: rabbitmq-14.6.6
      annotations:
        checksum/config: 731a92a18bfa6c078eb3192d6f1c281649badc50d9cb08f3293d8ac8048f8fdf
    spec:
      
      serviceAccountName: opencti-rabbitmq
      affinity:
        podAffinity:
          
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
            - podAffinityTerm:
                labelSelector:
                  matchLabels:
                    app.kubernetes.io/instance: opencti
                    app.kubernetes.io/name: rabbitmq
                topologyKey: kubernetes.io/hostname
              weight: 1
        nodeAffinity:
          
      automountServiceAccountToken: true
      securityContext:
        fsGroup: 1001
        fsGroupChangePolicy: Always
        supplementalGroups: []
        sysctls: []
      terminationGracePeriodSeconds: 120
      enableServiceLinks: true
      initContainers:
        - name: prepare-plugins-dir
          image: docker.io/bitnami/rabbitmq:3.13.6-debian-12-r1
          imagePullPolicy: "IfNotPresent"
          resources:
            limits:
              cpu: 1
              memory: 2Gi
            requests:
              cpu: 500m
              memory: 1Gi
          securityContext:
            allowPrivilegeEscalation: false
            capabilities:
              drop:
              - ALL
            readOnlyRootFilesystem: true
            runAsGroup: 1001
            runAsNonRoot: true
            runAsUser: 1001
            seccompProfile:
              type: RuntimeDefault
          command:
            - /bin/bash
          args:
            - -ec
            - |
              #!/bin/bash

              . /opt/bitnami/scripts/liblog.sh

              info "Copying plugins dir to empty dir"
              # In order to not break the possibility of installing custom plugins, we need
              # to make the plugins directory writable, so we need to copy it to an empty dir volume
              cp -r --preserve=mode /opt/bitnami/rabbitmq/plugins/ /emptydir/app-plugins-dir
          volumeMounts:
            - name: empty-dir
              mountPath: /emptydir
      containers:
        - name: rabbitmq
          image: docker.io/bitnami/rabbitmq:3.13.6-debian-12-r1
          imagePullPolicy: "IfNotPresent"
          securityContext:
            allowPrivilegeEscalation: false
            capabilities:
              drop:
              - ALL
            readOnlyRootFilesystem: true
            runAsGroup: 1001
            runAsNonRoot: true
            runAsUser: 1001
            seccompProfile:
              type: RuntimeDefault
          lifecycle:
            preStop:
              exec:
                command:
                  - /bin/bash
                  - -ec
                  - |
                    if [[ -f /opt/bitnami/scripts/rabbitmq/nodeshutdown.sh ]]; then
                        /opt/bitnami/scripts/rabbitmq/nodeshutdown.sh -t "120" -d "false"
                    else
                        rabbitmqctl stop_app
                    fi
          env:
            - name: BITNAMI_DEBUG
              value: "false"
            - name: MY_POD_IP
              valueFrom:
                fieldRef:
                  fieldPath: status.podIP
            - name: MY_POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: MY_POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            - name: RABBITMQ_FORCE_BOOT
              value: "no"
            - name: RABBITMQ_NODE_NAME
              value: "rabbit@$(MY_POD_NAME).opencti-rabbitmq-headless.$(MY_POD_NAMESPACE).svc.cluster.local"
            - name: RABBITMQ_MNESIA_DIR
              value: "/opt/bitnami/rabbitmq/.rabbitmq/mnesia/$(RABBITMQ_NODE_NAME)"
            - name: RABBITMQ_LDAP_ENABLE
              value: "no"
            - name: RABBITMQ_LOGS
              value: "-"
            - name: RABBITMQ_ULIMIT_NOFILES
              value: "65535"
            - name: RABBITMQ_USE_LONGNAME
              value: "true"
            - name: RABBITMQ_ERL_COOKIE
              valueFrom:
                secretKeyRef:
                  name: opencti-rabbitmq-secret
                  key: rabbitmq-erlang-cookie
            - name: RABBITMQ_LOAD_DEFINITIONS
              value: "no"
            - name: RABBITMQ_DEFINITIONS_FILE
              value: "/app/load_definition.json"
            - name: RABBITMQ_SECURE_PASSWORD
              value: "yes"
            - name: RABBITMQ_USERNAME
              value: "user"
            - name: RABBITMQ_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: opencti-rabbitmq-secret
                  key: rabbitmq-password
            - name: RABBITMQ_PLUGINS
              value: "rabbitmq_management, rabbitmq_peer_discovery_k8s, rabbitmq_auth_backend_ldap"
          envFrom:
          ports:
            - name: amqp
              containerPort: 5672
            - name: dist
              containerPort: 25672
            - name: stats
              containerPort: 15672
            - name: epmd
              containerPort: 4369
            - name: metrics
              containerPort: 9419
          livenessProbe:
            failureThreshold: 6
            initialDelaySeconds: 120
            periodSeconds: 30
            successThreshold: 1
            timeoutSeconds: 20
            exec:
              command:
                - sh
                - -ec
                - curl -f --user user:$RABBITMQ_PASSWORD 127.0.0.1:15672/api/health/checks/virtual-hosts
          readinessProbe:
            failureThreshold: 3
            initialDelaySeconds: 10
            periodSeconds: 30
            successThreshold: 1
            timeoutSeconds: 20
            exec:
              command:
                - sh
                - -ec
                - curl -f --user user:$RABBITMQ_PASSWORD 127.0.0.1:15672/api/health/checks/local-alarms
          resources:
            limits:
              cpu: 1
              memory: 2Gi
            requests:
              cpu: 500m
              memory: 1Gi
          volumeMounts:
            - name: configuration
              mountPath: /bitnami/rabbitmq/conf
            - name: empty-dir
              mountPath: /tmp
              subPath: tmp-dir
            - name: empty-dir
              mountPath: /opt/bitnami/rabbitmq/etc/rabbitmq
              subPath: app-conf-dir
            - name: empty-dir
              mountPath: /opt/bitnami/rabbitmq/var/lib/rabbitmq
              subPath: app-tmp-dir
            - name: empty-dir
              mountPath: /opt/bitnami/rabbitmq/.rabbitmq/
              subPath: app-erlang-cookie
            - name: empty-dir
              mountPath: /opt/bitnami/rabbitmq/var/log/rabbitmq
              subPath: app-logs-dir
            - name: empty-dir
              mountPath: /opt/bitnami/rabbitmq/plugins
              subPath: app-plugins-dir
            - name: data
              mountPath: /opt/bitnami/rabbitmq/.rabbitmq/mnesia
      volumes:
        - name: empty-dir
          emptyDir: {}
        - name: configuration
          projected:
            sources:
              - secret:
                  name: opencti-rabbitmq-config
  volumeClaimTemplates:
    - apiVersion: v1
      kind: PersistentVolumeClaim
      metadata:
        name: data
        labels:
          app.kubernetes.io/instance: opencti
          app.kubernetes.io/name: rabbitmq
      spec:
        accessModes:
            - "ReadWriteOnce"
        resources:
          requests:
            storage: "8Gi"
---
# Source: opencti/charts/redis/templates/master/application.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: opencti-redis-master
  namespace: "default"
  labels:
    app.kubernetes.io/instance: opencti
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: redis
    app.kubernetes.io/version: 7.2.5
    helm.sh/chart: redis-19.6.4
    app.kubernetes.io/component: master
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app.kubernetes.io/instance: opencti
      app.kubernetes.io/name: redis
      app.kubernetes.io/component: master
  serviceName: opencti-redis-headless
  updateStrategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        app.kubernetes.io/instance: opencti
        app.kubernetes.io/managed-by: Helm
        app.kubernetes.io/name: redis
        app.kubernetes.io/version: 7.2.5
        helm.sh/chart: redis-19.6.4
        app.kubernetes.io/component: master
      annotations:
        checksum/configmap: 86bcc953bb473748a3d3dc60b7c11f34e60c93519234d4c37f42e22ada559d47
        checksum/health: aff24913d801436ea469d8d374b2ddb3ec4c43ee7ab24663d5f8ff1a1b6991a9
        checksum/scripts: 43cdf68c28f3abe25ce017a82f74dbf2437d1900fd69df51a55a3edf6193d141
        checksum/secret: 44136fa355b3678a1146ad16f7e8649e94fb4fc21fe77e8310c060f61caaff8a
    spec:
      
      securityContext:
        fsGroup: 1001
        fsGroupChangePolicy: Always
        supplementalGroups: []
        sysctls: []
      serviceAccountName: opencti-redis-master
      automountServiceAccountToken: false
      affinity:
        podAffinity:
          
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
            - podAffinityTerm:
                labelSelector:
                  matchLabels:
                    app.kubernetes.io/instance: opencti
                    app.kubernetes.io/name: redis
                    app.kubernetes.io/component: master
                topologyKey: kubernetes.io/hostname
              weight: 1
        nodeAffinity:
          
      enableServiceLinks: true
      terminationGracePeriodSeconds: 30
      containers:
        - name: redis
          image: docker.io/bitnami/redis:7.2.5-debian-12-r4
          imagePullPolicy: "IfNotPresent"
          securityContext:
            allowPrivilegeEscalation: false
            capabilities:
              drop:
              - ALL
            readOnlyRootFilesystem: true
            runAsGroup: 1001
            runAsNonRoot: true
            runAsUser: 1001
            seLinuxOptions: {}
            seccompProfile:
              type: RuntimeDefault
          command:
            - /bin/bash
          args:
            - -c
            - /opt/bitnami/scripts/start-scripts/start-master.sh
          env:
            - name: BITNAMI_DEBUG
              value: "false"
            - name: REDIS_REPLICATION_MODE
              value: master
            - name: ALLOW_EMPTY_PASSWORD
              value: "yes"
            - name: REDIS_TLS_ENABLED
              value: "no"
            - name: REDIS_PORT
              value: "6379"
          ports:
            - name: redis
              containerPort: 6379
          livenessProbe:
            initialDelaySeconds: 20
            periodSeconds: 5
            # One second longer than command timeout should prevent generation of zombie processes.
            timeoutSeconds: 6
            successThreshold: 1
            failureThreshold: 5
            exec:
              command:
                - sh
                - -c
                - /health/ping_liveness_local.sh 5
          readinessProbe:
            initialDelaySeconds: 20
            periodSeconds: 5
            timeoutSeconds: 2
            successThreshold: 1
            failureThreshold: 5
            exec:
              command:
                - sh
                - -c
                - /health/ping_readiness_local.sh 1
          resources:
            limits:
              cpu: 1
              memory: 1Gi
            requests:
              cpu: 500m
              memory: 1Gi
          volumeMounts:
            - name: start-scripts
              mountPath: /opt/bitnami/scripts/start-scripts
            - name: health
              mountPath: /health
            - name: redis-data
              mountPath: /data
            - name: config
              mountPath: /opt/bitnami/redis/mounted-etc
            - name: empty-dir
              mountPath: /opt/bitnami/redis/etc/
              subPath: app-conf-dir
            - name: empty-dir
              mountPath: /tmp
              subPath: tmp-dir
      volumes:
        - name: start-scripts
          configMap:
            name: opencti-redis-scripts
            defaultMode: 0755
        - name: health
          configMap:
            name: opencti-redis-health
            defaultMode: 0755
        - name: config
          configMap:
            name: opencti-redis-configuration
        - name: empty-dir
          emptyDir: {}
  volumeClaimTemplates:
    - apiVersion: v1
      kind: PersistentVolumeClaim
      metadata:
        name: redis-data
        labels:
          app.kubernetes.io/instance: opencti
          app.kubernetes.io/name: redis
          app.kubernetes.io/component: master
      spec:
        accessModes:
          - "ReadWriteOnce"
        resources:
          requests:
            storage: "32Gi"
---
# Source: opencti/templates/ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: opencti
  labels:
    helm.sh/chart: opencti-1.2.18
    app.kubernetes.io/name: opencti
    app.kubernetes.io/instance: opencti
    app.kubernetes.io/version: "6.2.15"
    app.kubernetes.io/managed-by: Helm
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - "opencti.test.env"
      secretName: ingress-test-tls
  rules:
    - host: "opencti.test.env"
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: opencti-server
                port:
                  number: 80

Command:

helm search repo opencti
NAME            CHART VERSION   APP VERSION     DESCRIPTION                                       
opencti/opencti 1.2.18          6.2.15          A Helm chart to deploy Open Cyber Threat Intell...

# replace the ci-values content with your values
helm template opencti opencti/opencti -f ci/ci-values.yaml

Context:

  • helm version: version.BuildInfo{Version:"v3.15.3", GitCommit:"3bb50bbbdd9c946ba9989fbe4fb4104766302a64", GitTreeState:"clean", GoVersion:"go1.22.5"}
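
One way to reproduce the validation failure locally is to render the chart and validate the objects server-side, since a plain helm template succeeds and the schema errors only surface once the manifests are checked against the API server. A minimal sketch, assuming kubectl is pointed at a reachable cluster:

# render and validate against the current cluster without applying anything
helm template opencti opencti/opencti -f ci/ci-values.yaml --validate

# equivalent check via a kubectl server-side dry-run
helm template opencti opencti/opencti -f ci/ci-values.yaml \
  | kubectl apply --dry-run=server -f -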

@r4zr1

r4zr1 commented Aug 27, 2024

@ialejandro
I've tested v1.3.0 with #40 and everything works perfectly now, many thanks!
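
For reference, the usual Helm idiom for this kind of annotation bug is to scope toYaml to the values key rather than the root context, so annotation values render as plain strings. A minimal sketch of a hypothetical deployment.yaml excerpt (the actual change in #40 may differ):

  template:
    metadata:
      {{- /* render only the user-supplied annotations, not the whole scope */}}
      {{- with .Values.podAnnotations }}
      annotations:
        {{- toYaml . | nindent 8 }}
      {{- end }}

Passing the root context (toYaml .) here instead would serialize the entire render scope (chart metadata, values, release info) into the annotations, which no longer validates as a string map.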
