kafka unstable #1458

Open
lbcd opened this issue Sep 15, 2024 · 32 comments

@lbcd

lbcd commented Sep 15, 2024

Issue submitter TODO list

  • I've searched for already existing issues here

Describe the bug (actual behavior)

After installation, it works for 7 or 8 days and then Sentry stops receiving new events. The Sentry website is still accessible, but no new events show up.
When I check the container statuses, I see that some of them are restarting in a loop:

  • sentry-kafka-controller-2
  • sentry-post-process-forward-errors-7bbccb6767-bglng
  • sentry-post-process-forward-issue-platform-795ccb6fd4-kbpqj
  • sentry-snuba-subscription-consumer-metrics-77bcd76b6f-wz85z
  • sentry-subscription-consumer-events-8579f7b94f-knw7j
  • sentry-subscription-consumer-transactions-66b468867-xhpk5

Each of them gets this error:
cimpl.KafkaException: KafkaError{code=_UNKNOWN_PARTITION,val=-190,str="Failed to get watermark offsets: Local: Unknown partition"}

and here is the last log of sentry-kafka-controller-2:
[2024-09-15 06:06:51,096] INFO [LogLoader partition=ingest-events-0, dir=/bitnami/kafka/data] Recovering unflushed segment 0. 0/2 recovered for ingest-events-0. (kafka.log.LogLoader)
[2024-09-15 06:06:51,096] INFO [LogLoader partition=ingest-events-0, dir=/bitnami/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
[2024-09-15 06:06:51,096] INFO [LogLoader partition=ingest-events-0, dir=/bitnami/kafka/data] Reloading from producer snapshot and rebuilding producer state from offset 0 (kafka.log.UnifiedLog$)
[2024-09-15 06:06:51,096] INFO [LogLoader partition=ingest-events-0, dir=/bitnami/kafka/data] Producer state recovery took 0ms for snapshot load and 0ms for segment recovery from offset 0 (kafka.log.UnifiedLog$)
[2024-09-15 06:06:53,012] INFO [BrokerLifecycleManager id=2] The broker is in RECOVERY. (kafka.server.BrokerLifecycleManager)
[2024-09-15 06:06:55,015] INFO [BrokerLifecycleManager id=2] The broker is in RECOVERY. (kafka.server.BrokerLifecycleManager)
[2024-09-15 06:06:57,019] INFO [BrokerLifecycleManager id=2] The broker is in RECOVERY. (kafka.server.BrokerLifecycleManager)

Expected behavior

Expected to keep receiving new events in my Sentry.

values.yaml

ingress:
  enabled: true
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/use-regex: "true"
  hostname:
  additionalHostNames:
    - sentry.xxx.com
  tls:
    - hosts:
        - sentry.xxx.com
      secretName: xxx-com-wild-card-2023-2024

mail:
  # For example: smtp
  backend: smtp
  useTls: true
  useSsl: false
  username: "[email protected]"
  password: "XXX"
  port: 587
  host: "outlook.office365.com"
  from: "[email protected]"

nginx:
  enabled: false

user:
  create: true
  email: [email protected]
  password: xxx

system:
  url: "https://sentry.xxx.com"

hooks:
  activeDeadlineSeconds: 2000

Helm chart version

24.5.1

Steps to reproduce

Install
Use it for 8 days

Screenshots

No response

Logs

No response

Additional context

No response

@patsevanton
Contributor

Hi @lbcd ,

Could you please format the code using Markdown syntax? This will improve readability and ensure proper rendering.

Thanks!

@sdernbach-ionos

@lbcd We have the same error; the timeframe probably depends on the data volume, since we already see the error after a few hours.

@sherlant

Hello, I have the same error. Restarting the kafka-controller clears it, but it comes back every day.

@sdernbach-ionos

At least for us, after disabling Kafka persistence the error has changed to this one (in various consumers):

arroyo.errors.OffsetOutOfRange: KafkaError{code=_AUTO_OFFSET_RESET,val=-140,str="fetch failed due to requested offset not available on the broker: Broker: Offset out of range (broker 1)"}

@raphaelluchini

I have the same issue, tried a fresh install but no success :(

@sdernbach-ionos

Regarding our error above, I just found this: getsentry/self-hosted#1894
So I will try enabling noStrictOffsetReset for the consumers, see if it helps, and let you know.
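
For reference, in this chart these are per-consumer keys in values.yaml; a minimal sketch of what I mean (only two consumers shown, the same keys exist for the other Sentry/Snuba consumers as well):

sentry:
  ingestConsumerEvents:
    autoOffsetReset: "latest"
    noStrictOffsetReset: true
snuba:
  consumer:
    autoOffsetReset: "latest"
    noStrictOffsetReset: true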

@patsevanton
Contributor

patsevanton commented Sep 19, 2024

I am trying to reproduce the error, but so far it is NOT reproducible. This is what I'm doing:

helm install sentry -n test sentry/sentry --version 25.8.1 -f values-sentry-local.yaml --wait --timeout=1000s

send traces (see the example below)

check logs:

stern -n test --max-log-requests 500 . | grep KafkaError
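
To "send traces" I generate synthetic errors against a project DSN; one simple option is sentry-cli (the DSN below is a placeholder, taken from the project settings in the Sentry UI):

export SENTRY_DSN="https://<public-key>@sentry.apatsev.org.ru/<project-id>"
for i in $(seq 1 1000); do
  sentry-cli send-event -m "benchmark error $i"
done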

my values.yaml

user:
  password: "xxxxxxxxxxxxxxxxxxxxxxx"
  create: true
  email: [email protected]
system:
  url: "http://sentry.apatsev.org.ru"
ingress:
  enabled: true
  hostname: sentry.apatsev.org.ru
  ingressClassName: "nginx"
  regexPathStyle: nginx
  annotations:
    nginx.ingress.kubernetes.io/proxy-body-size: 200m
    # https://github.com/getsentry/self-hosted/issues/1927
    nginx.ingress.kubernetes.io/proxy-buffers-number: "16"
    nginx.ingress.kubernetes.io/proxy-buffer-size: "32k"
symbolicator:
  enabled: true

@raphaelluchini

I can reproduce it every time.
Note: I am using Terraform to apply the Helm chart.
Values:

values = [
    yamlencode({
      system = {
        url        = "https://sentry.xxx.com"
        adminEmail = "[email protected]"
        public     = true
      }
      asHook : false
      relay = {
        processing = {
          kafkaConfig = {
            messageMaxBytes = 50000000
          }
        }
      }
      sentry = {
        web = {
          replicas = 1
        }
        cron = {
          replicas = 1
        }
        slack = {
          client-id      = "xxx"
          existingSecret = "sentry"
        }
      }
      nginx = {
        enabled = false
      }

      ingress = {
        enabled          = true
        ingressClassName = "nginx"
        annotations = {
          "cert-manager.io/issuer"                      = "letsencrypt"
          "cert-manager.io/private-key-size"            = "4096"
          "cert-manager.io/private-key-rotation-policy" = "Always"
          # "kubernetes.io/ingress.class"                      = "nginx"
          "nginx.ingress.kubernetes.io/use-regex"            = "true"
          "nginx.ingress.kubernetes.io/proxy-buffers-number" = "16"
          "nginx.ingress.kubernetes.io/proxy-buffer-size"    = "32k"
        }
        hostname = "sentry.xxx.com"
        # additionalHostNames = ["sentry.xxx.com"]

        tls = [
          {
            hosts      = ["sentry.xxxcom"]
            secretName = "ingress-tls"
          }
        ]
      }
      mail = {
        backend        = "smtp"
        useTls         = true
        existingSecret = "sentry"
        username       = "xxx"
        port           = 587
        host           = "xxx"
        from           = "xxx"
      }
      kafka = {
        controller = {
          replicaCount = 1
          podSecurityContext = {
            seccompProfile = {
              type = "Unconfined"
            }
          }
        }
      }
      metrics = {
        enabled = true
        resources = {
          requests = {
            cpu    = "100m"
            memory = "128Mi"
          }
          limits = {
            cpu    = "100m"
            memory = "128Mi"
          }
        }
      }

      filestore = {
        filesystem = {
          persistence = {
            accessMode        = "ReadWriteMany"
            storageClass      = "csi-nas"
            persistentWorkers = false
          }
        }
      }

      postgresql = {
        enabled = false
      }

      slack = {
        client-id      = "xxx"
        existingSecret = "sentry"
      }

      github = {
        appId                          = "xxx"
        appName                        = "xxx"
        existingSecret                 = "sentry"
        existingSecretPrivateKeyKey    = "gh-private-key" 
        existingSecretWebhookSecretKey = "gh-webhook-secret" 
        existingSecretClientIdKey      = "gh-client-id"
        existingSecretClientSecretKey  = "gh-client-secret" 
      }

      externalPostgresql = {
        host           = "xxx"
        port           = 5432
        existingSecret = "sentry"
        existingSecretKeys = {
          username = "database-username"
          password = "database-password"
        }
        database = "sentry"
        sslMode  = "require"
      }

      serviceAccount = {
        "linkerd.io/inject" = "enabled"
      }
    }),
  ]

Some more logs before the stacktrace:

15:39:44 [ERROR] arroyo.processing.processor: Caught exception, shutting down...
15:39:44 [INFO] arroyo.processing.processor: Closing <arroyo.backends.kafka.consumer.KafkaConsumer object at 0x7f3f64334290>...
15:39:44 [INFO] arroyo.processing.processor: Partitions to revoke: [Partition(topic=Topic(name='generic-metrics-subscription-results'), index=0)]
15:39:44 [INFO] arroyo.processing.processor: Partition revocation complete.
15:39:44 [WARNING] arroyo.backends.kafka.consumer: failed to delete offset for unknown partition: Partition(topic=Topic(name='generic-metrics-subscription-results'), index=0)
15:39:44 [INFO] arroyo.processing.processor: Processor terminated
Traceback (most recent call last):

@sdernbach-ionos

@patsevanton for us this only happens after a few hours, and we have request volumes in the millions, so I'm not sure how many traces you sent in your test?

@patsevanton
Contributor

patsevanton commented Sep 20, 2024

I left exception generation running overnight.
I got Total Errors: 606K.
I get some KafkaErrors:
stern -n test --max-log-requests 500 . | grep KafkaError

sentry-worker-56fd7ccdd8-jqm96 sentry-worker 22:34:32 [WARNING] sentry.eventstream.kafka.backend: Could not publish message (error: KafkaError{code=_MSG_TIMED_OUT,val=-192,str="Local: Message timed out"}): <cimpl.Message object at 0x7f93c06782c0>
sentry-worker-56fd7ccdd8-jqm96 sentry-worker 22:34:32 [WARNING] sentry.eventstream.kafka.backend: Could not publish message (error: KafkaError{code=_MSG_TIMED_OUT,val=-192,str="Local: Message timed out"}): <cimpl.Message object at 0x7f93c32c4e40>
sentry-snuba-subscription-consumer-transactions-6658fb4fb8dc67l sentry-snuba cimpl.KafkaException: KafkaError{code=_UNKNOWN_PARTITION,val=-190,str="Failed to get watermark offsets: Local: Unknown partition"}
sentry-snuba-subscription-consumer-transactions-6658fb4fb8dc67l sentry-snuba cimpl.KafkaException: KafkaError{code=_UNKNOWN_PARTITION,val=-190,str="Failed to get watermark offsets: Local: Unknown partition"}
sentry-snuba-replacer-8f948bff6-pmghn sentry-snuba cimpl.KafkaException: KafkaError{code=_UNKNOWN_PARTITION,val=-190,str="Failed to get watermark offsets: Local: Unknown partition"}
sentry-snuba-replacer-8f948bff6-pmghn sentry-snuba cimpl.KafkaException: KafkaError{code=_UNKNOWN_PARTITION,val=-190,str="Failed to get watermark offsets: Local: Unknown partition"}
sentry-post-process-forward-errors-c6b758fc-mkdnr sentry-post-process-forward-errors cimpl.KafkaException: KafkaError{code=_UNKNOWN_PARTITION,val=-190,str="Failed to get watermark offsets: Local: Unknown partition"}
sentry-post-process-forward-transactions-d59778968-tjjqb sentry-post-process-forward-transactions cimpl.KafkaException: KafkaError{code=_UNKNOWN_PARTITION,val=-190,str="Failed to get watermark offsets: Local: Unknown partition"}
sentry-post-process-forward-transactions-d59778968-tjjqb sentry-post-process-forward-transactions cimpl.KafkaException: KafkaError{code=_UNKNOWN_PARTITION,val=-190,str="Failed to get watermark offsets: Local: Unknown partition"}

But the problem is that Kafka ran out of disk space:
stern -n test -l app.kubernetes.io/name=kafka | grep "No space left on device"

sentry-kafka-controller-0 kafka java.io.IOException: No space left on device
sentry-kafka-controller-0 kafka java.io.IOException: No space left on device

I think the reason Kafka filled up the entire disk is that too few consumers were reading data out of Kafka.
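
One way to check whether consumers are keeping up (i.e. whether data piles up because nobody reads it) is to look at consumer-group lag from inside a kafka controller pod, for example:

kafka-consumer-groups.sh --bootstrap-server localhost:9092 --describe --all-groups

A steadily growing LAG column for a group would point to a stuck or missing consumer.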

There are no "Offset out of range" errors

stern -n test --max-log-requests 500 . | grep "Offset out of range"

How many errors do you have in total per hour?

@sdernbach-ionos

@patsevanton thank you for all the testing! At least we have disabled persistence:

  controller:
    replicaCount: 3
    persistence:
      enabled: false

Before that we had exactly the same Kafka error as you stated above (KafkaError{code=_UNKNOWN_PARTITION,val=-190,str="Failed to get watermark offsets: Local: Unknown partition"})

After setting the above one we get the error:

KafkaError{code=_AUTO_OFFSET_RESET,val=-140,str="fetch failed due to requested offset not available on the broker: Broker: Offset out of range (broker 1)"}

But as said, I think this can be fixed by setting all noStrictOffsetReset values to true. I could not try it yet, because our operations team has to roll out the change. After that I will let you know how many errors there are per hour, and whether the change fixed the problem.

@raphaelluchini

Disabling persistence solved the issue for me. I am not sure whether the problem was also caused by the podSecurityContext.

@patsevanton
Contributor

patsevanton commented Sep 20, 2024

I have reproduced the application error on an HDD StorageClass. The app has been sending errors to Sentry all day. Now it remains to understand the cause of the error and how to fix it. At the moment I have 75K total errors in 24 hours.

The logs often contain messages like:

WARN [RaftManager id=0] Connection to node 1 (sentry-kafka-controller-1.sentry-kafka-controller-headless.test.svc.cluster.local/10.112.134.41:9093) could not be established. Node may not be available.

and

java.io.IOException: Connection to 2 was disconnected before the response was read

This may indicate network problems, or that the Kafka broker cannot keep up with requests due to high load.

I will try to test various sentry configuration options.
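
Before changing the configuration, it is probably also worth checking whether the controller pods are resource-starved or being rescheduled, for example (kubectl top requires metrics-server):

kubectl -n test get pods -l app.kubernetes.io/name=kafka -o wide
kubectl -n test top pods -l app.kubernetes.io/name=kafka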

@sdernbach-ionos

@patsevanton thanks for your help! We tried to deploy the newest chart version with the new config to test whether noStrictOffsetReset: true helps, but now nothing works anymore; even redis is not starting correctly. So we first have to check why that happens and whether the newest chart version works for us at all.

But I remember that the first time it happened, we had a bug in one of our applications that produced a lot of error messages, so the number of errors was at least 400k/hour, plus transaction events.

@lbcd
Author

lbcd commented Sep 21, 2024

@raphaelluchini @sdernbach-ionos how did you disable persistence? In which file?

@sdernbach-ionos

@lbcd within the values.yaml you have to configure that:

kafka:
  enabled: true
  ....
  controller:
    replicaCount: 3 # take the number you have configured currently
    persistence:
      enabled: false

so the persistence.enabled: false within controller disables the persistence

@patsevanton
Contributor

patsevanton commented Sep 24, 2024

Take the following configuration as a basis for testing

user:
  password: "xxxxxxxxx"
  create: true
  email: [email protected]
system:
  url: "http://sentry.apatsev.org.ru"
hooks:
  activeDeadlineSeconds: 600
ingress:
  enabled: true
  hostname: sentry.apatsev.org.ru
  ingressClassName: "nginx"
  regexPathStyle: nginx
  annotations:
    nginx.ingress.kubernetes.io/proxy-body-size: 200m
    # https://github.com/getsentry/self-hosted/issues/1927
    nginx.ingress.kubernetes.io/proxy-buffers-number: "16"
    nginx.ingress.kubernetes.io/proxy-buffer-size: "32k"
symbolicator:
  enabled: true
postgresql:
  primary:
    extraEnvVars:
      - name: POSTGRESQL_MAX_CONNECTIONS
        value: "400"
kafka:
  provisioning:
    replicationFactor: 3
  heapOpts: -Xmx4096m -Xms4096m
  global:
    storageClass: ssd # need change
  controller:
    persistence:
      size: 100Gi
    logPersistence:
      size: 100Gi
    resourcesPreset: large
  broker:
    persistence:
      size: 100Gi
    logPersistence:
      size: 100Gi
    resourcesPreset: large

(screenshot: "Last Seen" is not updated, but "Events" are increasing)

Please write back with your results and your values.yaml.

@raphaelluchini

Hey, for me two things combined worked:
disabling the storage as @sdernbach-ionos mentioned, and increasing the Kafka controller replicas to 3. Somehow Kafka expects at least 3 instances running; maybe that should be documented somewhere.
Later on I enabled the storage again, and everything is working now.

I have some other issues with health checks, but it's not related to this issue.
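
In values.yaml terms, the relevant part looks roughly like this (a sketch, not the full config):

kafka:
  controller:
    replicaCount: 3
    persistence:
      enabled: true

With KRaft, a single controller replica has no quorum headroom at all, which is presumably why one replica was so fragile here.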

@sdernbach-ionos

@patsevanton sadly we still could not test it. The redis problem occurred because our account reached its RAM limits, so we first have to get more resources, and I was on vacation. I will come back as soon as we are able to test it.

@sdernbach-ionos

sdernbach-ionos commented Oct 24, 2024

@patsevanton sorry it took quite some time to fix the other problems, but our redis is stable now and has enough space. We tried your configuration with resourcesPreset and replicationFactor, but left storage disabled for broker and controller so we don't run into space problems again. I also enabled noStrictOffsetReset: true everywhere and tried autoOffsetReset: "earliest" and now autoOffsetReset: "latest" (I think it is fine if we lose some events). Events and transactions have still been showing up for ~24h, but a lot of pods are throwing these errors:

KafkaError{code=UNKNOWN_TOPIC_OR_PART,val=3,str="Subscribed topic not available: transactions-subscription-results: Broker: Unknown topic or partition"}

I just saw that we forgot the heapOpts you mentioned, so I will also add that, do a fresh install, and see if it fixes things, but the error sounds somewhat different.

I just checked this: https://quix.io/blog/how-to-fix-unknown-partition-error-kafka
Could the replication factor be a problem?

Each of the 3 kafka controller pods reports the same when running:

sentry-kafka-controller-1:/$ kafka-topics.sh --bootstrap-server=localhost:9092 --list
__consumer_offsets
events
generic-events
group-attributes
ingest-attachments
ingest-events
ingest-metrics
ingest-monitors
ingest-occurrences
ingest-performance-metrics
ingest-transactions
monitors-clock-tick
outcomes
outcomes-billing
snuba-commit-log
snuba-generic-events-commit-log
snuba-generic-metrics
snuba-generic-metrics-counters-commit-log
snuba-generic-metrics-distributions-commit-log
snuba-generic-metrics-sets-commit-log
snuba-metrics
snuba-spans
snuba-transactions-commit-log
transactions

And within this list there is no transactions-subscription-results, so the error seems to be correct?
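
If it is really just missing, it should be possible to recreate it manually from one of the controller pods; something like this (partitions and replication factor are guesses, adjust to your setup):

kafka-topics.sh --bootstrap-server localhost:9092 --create \
  --topic transactions-subscription-results \
  --partitions 1 --replication-factor 3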

@patsevanton
Contributor

patsevanton commented Oct 25, 2024

@sdernbach-ionos can you share a minimal reproducible values.yaml? Can you share a minimal example or instructions for sending exceptions (using examples from the Internet) as a benchmark?

@lbcd
Author

lbcd commented Oct 25, 2024

@lbcd within the values.yaml you have to configure that:

kafka:
  enabled: true
  ....
  controller:
    replicaCount: 3 # take the number you have configured currently
    persistence:
      enabled: false

so the persistence.enabled: false within controller disables the persistence

Hello @sdernbach-ionos

I put that in values.yaml:

kafka:
  controller:
    persistence:
      enabled: false

and I run
helm install sentry sentry/sentry -f values.yaml -n sentry --timeout 20m0s

Installation works, but I get this message:
coalesce.go:237: warning: skipped value for kafka.config: Not a table.

So I think persistence is still enabled, and I still have a Sentry that works for 2 weeks and then starts to fail.

@sdernbach-ionos

@lbcd this is a warning that I have always gotten in the past as well, but you can use e.g. Lens to check the PVCs and you would see that no PVC is created, so the config actually works and no storage is created.
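For example with kubectl (adjust the namespace):

kubectl -n sentry get pvc -l app.kubernetes.io/name=kafka

If persistence is really disabled, no kafka PVCs should be listed.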
Can you check the logs and states of the pods to see if it is maybe the same issue that we have?

@patsevanton to be honest, it is hard to see which part is influencing this. But I really do not get why topics are being deleted. We did a fresh install today with heapOpts activated as well. After the install, the topic list shows this:

__consumer_offsets
cdc
event-replacements
events
events-subscription-results
generic-events
generic-metrics-subscription-results
group-attributes
ingest-attachments
ingest-events
ingest-metrics
ingest-monitors
ingest-occurrences
ingest-performance-metrics
ingest-replay-events
ingest-replay-recordings
ingest-sessions
ingest-transactions
metrics-subscription-results
monitors-clock-tick
outcomes
outcomes-billing
processed-profiles
profiles
profiles-call-tree
scheduled-subscriptions-events
scheduled-subscriptions-generic-metrics-counters
scheduled-subscriptions-generic-metrics-distributions
scheduled-subscriptions-generic-metrics-gauges
scheduled-subscriptions-generic-metrics-sets
scheduled-subscriptions-metrics
scheduled-subscriptions-sessions
scheduled-subscriptions-transactions
sessions-subscription-results
shared-resources-usage
snuba-attribution
snuba-commit-log
snuba-dead-letter-generic-events
snuba-dead-letter-generic-metrics
snuba-dead-letter-group-attributes
snuba-dead-letter-metrics
snuba-dead-letter-querylog
snuba-dead-letter-replays
snuba-dead-letter-sessions
snuba-generic-events-commit-log
snuba-generic-metrics
snuba-generic-metrics-counters-commit-log
snuba-generic-metrics-distributions-commit-log
snuba-generic-metrics-gauges-commit-log
snuba-generic-metrics-sets-commit-log
snuba-metrics
snuba-metrics-commit-log
snuba-metrics-summaries
snuba-profile-chunks
snuba-queries
snuba-sessions-commit-log
snuba-spans
snuba-transactions-commit-log
transactions
transactions-subscription-results

After a few hours (3-6h) the pods start complaining about missing topics, and if I execute the list-topics command those topics are no longer shown on any of the kafka controllers (and the kafka controllers have 0 restarts) ... so I really don't know why the topics are getting deleted.
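
To narrow down when they disappear, one thing I might try is polling the topic count from a controller pod and correlating the timestamp of the drop with the controller logs, roughly:

while true; do
  echo "$(date -u +%FT%TZ) $(kafka-topics.sh --bootstrap-server localhost:9092 --list | wc -l) topics"
  sleep 300
done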

@patsevanton
Contributor

@sdernbach-ionos can you share a minimal reproducible values.yaml?

@sdernbach-ionos

@patsevanton here is the full values.yaml (we are using external redis and external postgres; I just re-enabled the default ones here). Everything that differs from the default values.yaml is marked with # CHANGED.

prefix:

# Set this to true to support IPV6 networks
ipv6: false

user:
  create: true
  email: [email protected] # CHANGED
  #password: aaaa # CHANGED

  ## set this value to an existingSecret name to create the admin user with the password in the secret
  existingSecret: sentry-admin # CHANGED

  ## set this value to an existingSecretKey which holds the password to be used for sentry admin user default key is `admin-password`
  existingSecretKey: password # CHANGED

# this is required on the first installation, as sentry has to be initialized first
# recommended to set false for updating the helm chart afterwards,
# as you will have some downtime on each update if it's a hook
# deploys relay & snuba consumers as post hooks
asHook: true # CHANGED

images:
  sentry:
    # repository: getsentry/sentry
    # tag: Chart.AppVersion
    # pullPolicy: IfNotPresent
    imagePullSecrets: []
  snuba:
    # repository: getsentry/snuba
    # tag: Chart.AppVersion
    # pullPolicy: IfNotPresent
    imagePullSecrets: []
  relay:
    # repository: getsentry/relay
    # tag: Chart.AppVersion
    # pullPolicy: IfNotPresent
    imagePullSecrets: []
  symbolicator:
    # repository: getsentry/symbolicator
    # tag: Chart.AppVersion
    # pullPolicy: IfNotPresent
    imagePullSecrets: []
  vroom:
    # repository: getsentry/vroom
    # tag: Chart.AppVersion
    # pullPolicy: IfNotPresent
    imagePullSecrets: []

serviceAccount:
  # serviceAccount.annotations -- Additional Service Account annotations.
  annotations: {}
  # serviceAccount.enabled -- If `true`, a custom Service Account will be used.
  enabled: false
  # serviceAccount.name -- The base name of the ServiceAccount to use. Will be appended with e.g. `snuba-api` or `web` for the pods accordingly.
  name: "sentry"
  # serviceAccount.automountServiceAccountToken -- Automount API credentials for a Service Account.
  automountServiceAccountToken: true

vroom:
  # annotations: {}
  # args: []
  replicas: 1
  env: []
  probeFailureThreshold: 5
  probeInitialDelaySeconds: 10
  probePeriodSeconds: 10
  probeSuccessThreshold: 1
  probeTimeoutSeconds: 2
  resources: {}
  #  requests:
  #    cpu: 100m
  #    memory: 700Mi
  affinity: {}
  nodeSelector: {}
  securityContext: {}
  containerSecurityContext: {}
  # priorityClassName: ""
  service:
    annotations: {}
  # tolerations: []
  # podLabels: {}

  autoscaling:
    enabled: false
    minReplicas: 2
    maxReplicas: 5
    targetCPUUtilizationPercentage: 50
  sidecars: []
  # topologySpreadConstraints: []
  volumes: []
  volumeMounts: []

relay:
  enabled: true
  # annotations: {}
  replicas: 1
  # args: []
  mode: managed
  env: []
  probeFailureThreshold: 5
  probeInitialDelaySeconds: 10
  probePeriodSeconds: 10
  probeSuccessThreshold: 1
  probeTimeoutSeconds: 2
  resources: {}
  #  requests:
  #    cpu: 100m
  #    memory: 700Mi
  affinity: {}
  nodeSelector: {}
  # healthCheck:
  #   readinessRequestPath: ""
  securityContext: {}
  # if you are using GKE Ingress controller use 'securityPolicy' to add Google Cloud Armor Ingress policy
  securityPolicy: ""
  # if you are using GKE Ingress controller use 'customResponseHeaders' to add custom response header
  customResponseHeaders: []
  containerSecurityContext: {}
  service:
    annotations: {}
  # tolerations: []
  # podLabels: {}
  # priorityClassName: ""
  autoscaling:
    enabled: false
    minReplicas: 2
    maxReplicas: 5
    targetCPUUtilizationPercentage: 50
  sidecars: []
  topologySpreadConstraints: []
  volumes: []
  volumeMounts: []
  init:
    resources: {}
    # additionalArgs: []
    # credentialsSubcommand: ""
    # env: []
    # volumes: []
    # volumeMounts: []
  # cache:
  #   envelopeBufferSize: 1000
  # logging:
  #   level: info
  #   format: json
  processing:
    kafkaConfig:
      messageMaxBytes: 50000000
    #  messageTimeoutMs:
    #  requestTimeoutMs:
    #  deliveryTimeoutMs:
    #  apiVersionRequestTimeoutMs:

    # additionalKafkaConfig:
    #  - name: security.protocol
    #    value: "SSL"

# enable and reference the volume
geodata:
  accountID: ""
  licenseKey: ""
  editionIDs: ""
  persistence:
    ## If defined, storageClassName: <storageClass>
    ## If undefined (the default) or set to null, no storageClassName spec is
    ##   set, choosing the default provisioner.  (gp2 on AWS, standard on
    ##   GKE, AWS & OpenStack)
    # storageClass: "" # for example: csi-s3
    size: 1Gi
  volumeName: "" # for example: data-sentry-geoip
  # mountPath of the volume containing the database
  mountPath: "" # for example: /usr/share/GeoIP
  # path to the geoip database inside the volumemount
  path: "" # for example: /usr/share/GeoIP/GeoLite2-City.mmdb

sentry:
  # to not generate a sentry-secret, use these 2 values to reference an existing secret
  # existingSecret: "my-secret"
  # existingSecretKey: "my-secret-key"
  singleOrganization: true
  web:
    enabled: true
    # if using filestore backend filesystem with RWO access, set strategyType to Recreate
    strategyType: RollingUpdate
    replicas: 1
    env: []
    existingSecretEnv: []
    probeFailureThreshold: 5
    probeInitialDelaySeconds: 10
    probePeriodSeconds: 10
    probeSuccessThreshold: 1
    probeTimeoutSeconds: 2
    resources: {}
    #  requests:
    #    cpu: 200m
    #    memory: 850Mi
    affinity: {}
    nodeSelector: {}
    securityContext: {}
    # if you are using GKE Ingress controller use 'securityPolicy' to add Google Cloud Armor Ingress policy
    securityPolicy: ""
    # if you are using GKE Ingress controller use 'customResponseHeaders' to add custom response header
    customResponseHeaders: []
    containerSecurityContext: {}
    service:
      annotations: {}
    # tolerations: []
    # podLabels: {}
    # Mount and use custom CA
    # customCA:
    #   secretName: custom-ca
    #   item: ca.crt
    # logLevel: "WARNING" # DEBUG|INFO|WARNING|ERROR|CRITICAL|FATAL
    # logFormat: "human" # human|machine
    autoscaling:
      enabled: false
      minReplicas: 2
      maxReplicas: 5
      targetCPUUtilizationPercentage: 50
    sidecars: []
    topologySpreadConstraints: []
    volumes: []
    volumeMounts: []
    # workers: 3

  features:
    orgSubdomains: false
    vstsLimitedScopes: true
    enableProfiling: true # CHANGED
    enableSessionReplay: true
    enableFeedback: true # CHANGED
    enableSpan: true # CHANGED

  # example customFeature to enable Metrics(beta) https://docs.sentry.io/product/metrics/
  customFeatures: # CHANGED
    - organizations:custom-metric # CHANGED
    - organizations:custom-metrics-experimental # CHANGED
    - organizations:derive-code-mappings # CHANGED

  worker:
    enabled: true
    # annotations: {}
    replicas: 3 # CHANGED
    # concurrency: 4
    env: []
    existingSecretEnv: []
    resources: {}
    #  requests:
    #    cpu: 1000m
    #    memory: 1100Mi
    affinity: {}
    nodeSelector: {}
    # tolerations: []
    # podLabels: {}
    # logLevel: "WARNING" # DEBUG|INFO|WARNING|ERROR|CRITICAL|FATAL
    # logFormat: "machine" # human|machine
    # excludeQueues: ""
    # it's better to use prometheus adapter and scale based on
    # the size of the rabbitmq queue
    autoscaling:
      enabled: false
      minReplicas: 2
      maxReplicas: 5
      targetCPUUtilizationPercentage: 50
    livenessProbe:
      enabled: false # CHANGED
      periodSeconds: 60
      timeoutSeconds: 10
      failureThreshold: 3
    sidecars: []
    # securityContext: {}
    # containerSecurityContext: {}
    # priorityClassName: ""
    topologySpreadConstraints: []
    volumes: []
    volumeMounts: []

  # allows to dedicate some workers to specific queues
  workerEvents:
    ## If the number of exceptions increases, it is recommended to enable workerEvents
    enabled: false
    # annotations: {}
    queues: "events.save_event,post_process_errors"
    ## When increasing the number of exceptions and enabling workerEvents, it is recommended to increase the number of their replicas
    replicas: 1
    # concurrency: 4
    env: []
    resources: {}
    affinity: {}
    nodeSelector: {}
    # tolerations: []
    # podLabels: {}
    # logLevel: "WARNING" # DEBUG|INFO|WARNING|ERROR|CRITICAL|FATAL
    # logFormat: "machine" # human|machine
    # it's better to use prometheus adapter and scale based on
    # the size of the rabbitmq queue
    autoscaling:
      enabled: false
      minReplicas: 2
      maxReplicas: 5
      targetCPUUtilizationPercentage: 50
    livenessProbe:
      enabled: false
      periodSeconds: 60
      timeoutSeconds: 10
      failureThreshold: 3
    sidecars: []
    # securityContext: {}
    # containerSecurityContext: {}
    # priorityClassName: ""
    topologySpreadConstraints: []
    volumes: []
    volumeMounts: []

  # allows to dedicate some workers to specific queues
  workerTransactions:
    enabled: false
    # annotations: {}
    queues: "events.save_event_transaction,post_process_transactions"
    replicas: 1
    # concurrency: 4
    env: []
    resources: {}
    affinity: {}
    nodeSelector: {}
    # tolerations: []
    # podLabels: {}
    # logLevel: "WARNING" # DEBUG|INFO|WARNING|ERROR|CRITICAL|FATAL
    # logFormat: "machine" # human|machine
    # it's better to use prometheus adapter and scale based on
    # the size of the rabbitmq queue
    autoscaling:
      enabled: false
      minReplicas: 2
      maxReplicas: 5
      targetCPUUtilizationPercentage: 50
    livenessProbe:
      enabled: false
      periodSeconds: 60
      timeoutSeconds: 10
      failureThreshold: 3
    sidecars: []
    # securityContext: {}
    # containerSecurityContext: {}
    # priorityClassName: ""
    topologySpreadConstraints: []
    volumes: []
    volumeMounts: []

  ingestConsumerAttachments:
    enabled: true
    replicas: 1
    # concurrency: 4
    env: []
    resources: {}
    #  requests:
    #    cpu: 200m
    #    memory: 700Mi
    affinity: {}
    nodeSelector: {}
    securityContext: {}
    containerSecurityContext: {}
    # tolerations: []
    # podLabels: {}
    # maxBatchSize: ""
    # logLevel: info
    # it's better to use prometheus adapter and scale based on
    # the size of the rabbitmq queue
    autoscaling:
      enabled: false
      minReplicas: 1
      maxReplicas: 3
      targetCPUUtilizationPercentage: 50
    sidecars: []
    topologySpreadConstraints: []
    volumes: []
    livenessProbe:
      enabled: true
      initialDelaySeconds: 5
      periodSeconds: 320
    # volumeMounts:
    #   - mountPath: /dev/shm
    #     name: dshm
    autoOffsetReset: "latest" # CHANGED
    noStrictOffsetReset: true # CHANGED

  ingestConsumerEvents:
    enabled: true
    replicas: 1
    # concurrency: 4
    env: []
    resources: {}
    #  requests:
    #    cpu: 300m
    #    memory: 500Mi
    affinity: {}
    nodeSelector: {}
    securityContext: {}
    containerSecurityContext: {}
    # tolerations: []
    # podLabels: {}
    # maxBatchSize: ""
    # logLevel: "info"
    # inputBlockSize: ""
    # maxBatchTimeMs: ""

    # it's better to use prometheus adapter and scale based on
    # the size of the rabbitmq queue
    autoscaling:
      enabled: false
      minReplicas: 1
      maxReplicas: 3
      targetCPUUtilizationPercentage: 50
    sidecars: []
    topologySpreadConstraints: []
    volumes: []
    livenessProbe:
      enabled: true
      initialDelaySeconds: 5
      periodSeconds: 320
    # volumeMounts:
    #   - mountPath: /dev/shm
    #     name: dshm
    autoOffsetReset: "latest" # CHANGED
    noStrictOffsetReset: true # CHANGED

  ingestConsumerTransactions:
    enabled: true
    replicas: 1
    # concurrency: 4
    env: []
    resources: {}
    #  requests:
    #    cpu: 200m
    #    memory: 500Mi
    affinity: {}
    nodeSelector: {}
    securityContext: {}
    containerSecurityContext: {}
    # tolerations: []
    # podLabels: {}
    # maxBatchSize: ""
    # logLevel: "info"
    # inputBlockSize: ""
    # maxBatchTimeMs: ""

    # it's better to use prometheus adapter and scale based on
    # the size of the rabbitmq queue
    autoscaling:
      enabled: false
      minReplicas: 1
      maxReplicas: 3
      targetCPUUtilizationPercentage: 50
    sidecars: []
    topologySpreadConstraints: []
    volumes: []
    livenessProbe:
      enabled: true
      initialDelaySeconds: 5
      periodSeconds: 320
    # volumeMounts:
    #   - mountPath: /dev/shm
    #     name: dshm
    autoOffsetReset: "latest" # CHANGED
    noStrictOffsetReset: true # CHANGED

  ingestReplayRecordings:
    enabled: true
    replicas: 1
    env: []
    resources: {}
    #  requests:
    #    cpu: 100m
    #    memory: 250Mi
    affinity: {}
    nodeSelector: {}
    securityContext: {}
    containerSecurityContext: {}
    # tolerations: []
    # podLabels: {}
    # it's better to use prometheus adapter and scale based on
    # the size of the rabbitmq queue
    autoscaling:
      enabled: false
      minReplicas: 1
      maxReplicas: 3
      targetCPUUtilizationPercentage: 50
    sidecars: []
    topologySpreadConstraints: []
    volumes: []
    livenessProbe:
      enabled: true
      initialDelaySeconds: 5
      periodSeconds: 320
    # volumeMounts:
    #   - mountPath: /dev/shm
    #     name: dshm
    autoOffsetReset: "latest" # CHANGED
    noStrictOffsetReset: true # CHANGED

  ingestProfiles:
    replicas: 1
    env: []
    resources: {}
    affinity: {}
    nodeSelector: {}
    securityContext: {}
    containerSecurityContext: {}
    # tolerations: []
    # podLabels: {}
    # it's better to use prometheus adapter and scale based on
    # the size of the rabbitmq queue
    autoscaling:
      enabled: false
      minReplicas: 1
      maxReplicas: 3
      targetCPUUtilizationPercentage: 50
    sidecars: []
    topologySpreadConstraints: []
    volumes: []
    livenessProbe:
      enabled: true
      initialDelaySeconds: 5
      periodSeconds: 320
    # volumeMounts:
    #   - mountPath: /dev/shm
    #     name: dshm
    autoOffsetReset: "latest" # CHANGED
    noStrictOffsetReset: true # CHANGED

  ingestOccurrences:
    enabled: true
    replicas: 1
    env: []
    resources: {}
    #  requests:
    #    cpu: 100m
    #    memory: 250Mi
    affinity: {}
    nodeSelector: {}
    securityContext: {}
    containerSecurityContext: {}
    # tolerations: []
    # podLabels: {}
    # it's better to use prometheus adapter and scale based on
    # the size of the rabbitmq queue
    autoscaling:
      enabled: false
      minReplicas: 1
      maxReplicas: 3
      targetCPUUtilizationPercentage: 50
    sidecars: []
    topologySpreadConstraints: []
    volumes: []
    livenessProbe:
      enabled: true
      initialDelaySeconds: 5
      periodSeconds: 320
    # volumeMounts:
    #   - mountPath: /dev/shm
    #     name: dshm
    autoOffsetReset: "latest" # CHANGED
    noStrictOffsetReset: true # CHANGED

  ingestMonitors:
    enabled: true
    replicas: 1
    env: []
    resources: {}
    #  requests:
    #    cpu: 100m
    #    memory: 250Mi
    affinity: {}
    nodeSelector: {}
    securityContext: {}
    containerSecurityContext: {}
    # tolerations: []
    # podLabels: {}
    # it's better to use prometheus adapter and scale based on
    # the size of the rabbitmq queue
    autoscaling:
      enabled: false
      minReplicas: 1
      maxReplicas: 3
      targetCPUUtilizationPercentage: 50
    sidecars: []
    topologySpreadConstraints: []
    volumes: []
    livenessProbe:
      enabled: true
      initialDelaySeconds: 5
      periodSeconds: 320
    # volumeMounts:
    #   - mountPath: /dev/shm
    #     name: dshm
    autoOffsetReset: "latest" # CHANGED
    noStrictOffsetReset: true # CHANGED

  billingMetricsConsumer:
    enabled: true
    replicas: 1
    env: []
    resources: {}
    #  requests:
    #    cpu: 100m
    #    memory: 250Mi
    affinity: {}
    nodeSelector: {}
    securityContext: {}
    containerSecurityContext: {}
    # tolerations: []
    # podLabels: {}
    # it's better to use prometheus adapter and scale based on
    # the size of the rabbitmq queue
    autoscaling:
      enabled: false
      minReplicas: 1
      maxReplicas: 3
      targetCPUUtilizationPercentage: 50
    sidecars: []
    topologySpreadConstraints: []
    volumes: []
    livenessProbe:
      enabled: true
      initialDelaySeconds: 5
      periodSeconds: 320
    # volumeMounts:
    #   - mountPath: /dev/shm
    #     name: dshm
    autoOffsetReset: "latest" # CHANGED
    noStrictOffsetReset: true # CHANGED

  genericMetricsConsumer:
    enabled: true
    replicas: 1
    # concurrency: 4
    env: []
    resources: {}
    #  requests:
    #    cpu: 200m
    #    memory: 500Mi
    affinity: {}
    nodeSelector: {}
    securityContext: {}
    containerSecurityContext: {}
    # tolerations: []
    # podLabels: {}
    # maxPollIntervalMs: ""
    # logLevel: "info"
    # it's better to use prometheus adapter and scale based on
    # the size of the rabbitmq queue
    autoscaling:
      enabled: false
      minReplicas: 1
      maxReplicas: 3
      targetCPUUtilizationPercentage: 50
    sidecars: []
    topologySpreadConstraints: []
    volumes: []
    livenessProbe:
      enabled: true
      initialDelaySeconds: 5
      periodSeconds: 320
    # volumeMounts:
    #   - mountPath: /dev/shm
    #     name: dshm
    autoOffsetReset: "latest" # CHANGED
    noStrictOffsetReset: true # CHANGED

  metricsConsumer:
    enabled: true
    replicas: 1
    # concurrency: 4
    env: []
    resources: {}
    #  requests:
    #    cpu: 200m
    #    memory: 500Mi
    affinity: {}
    nodeSelector: {}
    securityContext: {}
    containerSecurityContext: {}
    # tolerations: []
    # podLabels: {}
    # logLevel: "info"
    # maxPollIntervalMs: ""
    # it's better to use prometheus adapter and scale based on
    # the size of the rabbitmq queue
    autoscaling:
      enabled: false
      minReplicas: 1
      maxReplicas: 3
      targetCPUUtilizationPercentage: 50
    sidecars: []
    topologySpreadConstraints: []
    volumes: []
    livenessProbe:
      enabled: true
      initialDelaySeconds: 5
      periodSeconds: 320
    # volumeMounts:
    #   - mountPath: /dev/shm
    #     name: dshm
    autoOffsetReset: "latest" # CHANGED
    noStrictOffsetReset: true # CHANGED

  cron:
    enabled: true
    replicas: 1
    env: []
    resources: {}
    affinity: {}
    nodeSelector: {}
    # tolerations: []
    # podLabels: {}
    sidecars: []
    topologySpreadConstraints: []
    volumes: []
    # volumeMounts: []
    # logLevel: "WARNING" # DEBUG|INFO|WARNING|ERROR|CRITICAL|FATAL
    # logFormat: "machine" # human|machine

  subscriptionConsumerEvents:
    enabled: true
    replicas: 1
    env: []
    resources: {}
    #  requests:
    #    cpu: 200m
    #    memory: 500Mi
    affinity: {}
    nodeSelector: {}
    securityContext: {}
    containerSecurityContext: {}
    # tolerations: []
    # podLabels: {}
    sidecars: []
    topologySpreadConstraints: []
    volumes: []
    livenessProbe:
      enabled: true
      initialDelaySeconds: 5
      periodSeconds: 320
    autoOffsetReset: "latest" # CHANGED
    noStrictOffsetReset: true # CHANGED
    # volumeMounts: []

  subscriptionConsumerSessions:
    replicas: 1
    env: []
    resources: {}
    affinity: {}
    nodeSelector: {}
    securityContext: {}
    containerSecurityContext: {}
    # tolerations: []
    # podLabels: {}
    sidecars: []
    topologySpreadConstraints: []
    volumes: []
    autoOffsetReset: "latest" # CHANGED
    noStrictOffsetReset: true # CHANGED
    # volumeMounts: []

  subscriptionConsumerTransactions:
    enabled: true
    replicas: 1
    env: []
    resources: {}
    #  requests:
    #    cpu: 200m
    #    memory: 500Mi
    affinity: {}
    nodeSelector: {}
    securityContext: {}
    containerSecurityContext: {}
    # tolerations: []
    # podLabels: {}
    sidecars: []
    topologySpreadConstraints: []
    volumes: []
    livenessProbe:
      enabled: true
      initialDelaySeconds: 5
      periodSeconds: 320
    autoOffsetReset: "latest" # CHANGED
    noStrictOffsetReset: true # CHANGED
    # volumeMounts: []

  postProcessForwardErrors:
    enabled: true
    replicas: 1
    env: []
    resources: {}
    #  requests:
    #    cpu: 150m
    #    memory: 500Mi
    affinity: {}
    nodeSelector: {}
    securityContext: {}
    containerSecurityContext: {}
    # tolerations: []
    # podLabels: {}
    sidecars: []
    topologySpreadConstraints: []
    volumes: []
    # volumeMounts: []
    livenessProbe:
      enabled: true
      initialDelaySeconds: 5
      periodSeconds: 320
    autoOffsetReset: "latest" # CHANGED
    noStrictOffsetReset: true # CHANGED

  postProcessForwardTransactions:
    enabled: true
    replicas: 1
    # processes: 1
    env: []
    resources: {}
    #  requests:
    #    cpu: 200m
    #    memory: 500Mi
    affinity: {}
    nodeSelector: {}
    securityContext: {}
    containerSecurityContext: {}
    # tolerations: []
    # podLabels: {}
    sidecars: []
    topologySpreadConstraints: []
    volumes: []
    livenessProbe:
      enabled: true
      initialDelaySeconds: 5
      periodSeconds: 320
    # volumeMounts: []
    autoOffsetReset: "latest" # CHANGED
    noStrictOffsetReset: true # CHANGED

  postProcessForwardIssuePlatform:
    enabled: true
    replicas: 1
    env: []
    resources: {}
    #  requests:
    #    cpu: 300m
    #    memory: 500Mi
    affinity: {}
    nodeSelector: {}
    securityContext: {}
    containerSecurityContext: {}
    # tolerations: []
    # podLabels: {}
    sidecars: []
    topologySpreadConstraints: []
    volumes: []
    # volumeMounts: []
    livenessProbe:
      enabled: true
      initialDelaySeconds: 5
      periodSeconds: 320
    autoOffsetReset: "latest" # CHANGED
    noStrictOffsetReset: true # CHANGED

  subscriptionConsumerGenericMetrics:
    enabled: true
    replicas: 1
    # concurrency: 1
    env: []
    resources: {}
    #  requests:
    #    cpu: 200m
    #    memory: 500Mi
    affinity: {}
    nodeSelector: {}
    securityContext: {}
    containerSecurityContext: {}
    # tolerations: []
    # podLabels: {}
    sidecars: []
    topologySpreadConstraints: []
    volumes: []
    # volumeMounts: []
    livenessProbe:
      enabled: true
      initialDelaySeconds: 5
      periodSeconds: 320
    autoOffsetReset: "latest" # CHANGED
    noStrictOffsetReset: true # CHANGED

  subscriptionConsumerMetrics:
    enabled: true
    replicas: 1
    # concurrency: 1
    env: []
    resources: {}
    #  requests:
    #    cpu: 200m
    #    memory: 500Mi
    affinity: {}
    nodeSelector: {}
    securityContext: {}
    containerSecurityContext: {}
    # tolerations: []
    # podLabels: {}
    sidecars: []
    topologySpreadConstraints: []
    volumes: []
    # volumeMounts: []
    livenessProbe:
      enabled: true
      initialDelaySeconds: 5
      periodSeconds: 320
    autoOffsetReset: "latest" # CHANGED
    noStrictOffsetReset: true # CHANGED

  cleanup:
    successfulJobsHistoryLimit: 5
    failedJobsHistoryLimit: 5
    activeDeadlineSeconds: 1200 # CHANGED
    concurrencyPolicy: Allow
    concurrency: 1
    enabled: true
    schedule: "0 0 * * *"
    days: 30 # CHANGED
    # logLevel: INFO
    logLevel: ''
    # securityContext: {}
    # containerSecurityContext: {}
    sidecars: []
    volumes: []
    # volumeMounts: []
    serviceAccount: {}

snuba:
  api:
    enabled: true
    replicas: 1
    # set command to ["snuba","api"] if securityContext.runAsUser > 0
    # see: https://github.com/getsentry/snuba/issues/956
    command: []
    #   - snuba
    #   - api
    env: []
    probeInitialDelaySeconds: 10
    liveness:
      timeoutSeconds: 2
    readiness:
      timeoutSeconds: 2
    resources: {}
    #  requests:
    #    cpu: 100m
    #    memory: 150Mi
    affinity: {}
    nodeSelector: {}
    securityContext: {}
    containerSecurityContext: {}
    service:
      annotations: {}
    # tolerations: []
    # podLabels: {}

    autoscaling:
      enabled: false
      minReplicas: 2
      maxReplicas: 5
      targetCPUUtilizationPercentage: 50
    sidecars: []
    topologySpreadConstraints: []
    volumes: []
    # volumeMounts: []

  consumer:
    enabled: true
    replicas: 1
    env: []
    resources: {}
    affinity: {}
    nodeSelector: {}
    securityContext: {}
    topologySpreadConstraints: []
    containerSecurityContext: {}
    # tolerations: []
    # podLabels: {}
    autoOffsetReset: "latest" # CHANGED
    livenessProbe:
      enabled: true
      initialDelaySeconds: 5
      periodSeconds: 320
    noStrictOffsetReset: true # CHANGED
    # maxBatchSize: ""
    # processes: ""
    # inputBlockSize: ""
    # outputBlockSize: ""
    maxBatchTimeMs: 750
    # queuedMaxMessagesKbytes: ""
    # queuedMinMessages: ""
    # volumeMounts:
    #   - mountPath: /dev/shm
    #     name: dshm
    # volumes:
    #   - name: dshm
    #     emptyDir:
    #       medium: Memory

  outcomesConsumer:
    enabled: true
    replicas: 1
    env: []
    resources: {}
    affinity: {}
    nodeSelector: {}
    securityContext: {}
    topologySpreadConstraints: []
    containerSecurityContext: {}
    # tolerations: []
    # podLabels: {}
    autoOffsetReset: "latest" # CHANGED
    noStrictOffsetReset: true # CHANGED
    maxBatchSize: "3"
    livenessProbe:
      enabled: true
      initialDelaySeconds: 5
      periodSeconds: 320
    # processes: ""
    # inputBlockSize: ""
    # outputBlockSize: ""
    # maxBatchTimeMs: ""
    # queuedMaxMessagesKbytes: ""
    # queuedMinMessages: ""
    # volumeMounts:
    #   - mountPath: /dev/shm
    #     name: dshm
    # volumes:
    #   - name: dshm
    #     emptyDir:
    #       medium: Memory

  outcomesBillingConsumer:
    enabled: true
    replicas: 1
    env: []
    resources: {}
    affinity: {}
    nodeSelector: {}
    securityContext: {}
    topologySpreadConstraints: []
    containerSecurityContext: {}
    # tolerations: []
    # podLabels: {}
    autoOffsetReset: "latest" # CHANGED
    noStrictOffsetReset: true # CHANGED
    maxBatchSize: "3"
    livenessProbe:
      enabled: true
      initialDelaySeconds: 5
      periodSeconds: 320
    # processes: ""
    # inputBlockSize: ""
    # outputBlockSize: ""
    maxBatchTimeMs: 750
    # queuedMaxMessagesKbytes: ""
    # queuedMinMessages: ""
    # volumeMounts:
    #   - mountPath: /dev/shm
    #     name: dshm
    # volumes:
    #   - name: dshm
    #     emptyDir:
    #       medium: Memory

  replacer:
    enabled: true
    replicas: 1
    env: []
    resources: {}
    affinity: {}
    nodeSelector: {}
    securityContext: {}
    topologySpreadConstraints: []
    containerSecurityContext: {}
    # tolerations: []
    # podLabels: {}
    autoOffsetReset: "latest" # CHANGED
    # maxBatchTimeMs: ""
    # queuedMaxMessagesKbytes: ""
    # queuedMinMessages: ""
    # volumes: []
    # volumeMounts: []

  metricsConsumer:
    enabled: true
    replicas: 1
    env: []
    resources: {}
    affinity: {}
    nodeSelector: {}
    securityContext: {}
    topologySpreadConstraints: []
    containerSecurityContext: {}
    # tolerations: []
    # podLabels: {}
    autoOffsetReset: "latest" # CHANGED
    livenessProbe:
      enabled: true
      initialDelaySeconds: 5
      periodSeconds: 320
    # volumes: []
    # volumeMounts: []
    # maxBatchSize: ""
    # processes: ""
    # inputBlockSize: ""
    # outputBlockSize: ""
    maxBatchTimeMs: 750
    # queuedMaxMessagesKbytes: ""
    # queuedMinMessages: ""
    noStrictOffsetReset: true # CHANGED

  subscriptionConsumerEvents:
    enabled: true
    replicas: 1
    env: []
    resources: {}
    affinity: {}
    nodeSelector: {}
    securityContext: {}
    topologySpreadConstraints: []
    containerSecurityContext: {}
    # tolerations: []
    # podLabels: {}
    livenessProbe:
      enabled: true
      initialDelaySeconds: 5
      periodSeconds: 320
    # volumes: []
    # volumeMounts: []
    autoOffsetReset: "latest" # CHANGED
    noStrictOffsetReset: true # CHANGED

  genericMetricsCountersConsumer:
    enabled: true
    replicas: 1
    env: []
    resources: {}
    affinity: {}
    nodeSelector: {}
    securityContext: {}
    topologySpreadConstraints: []
    containerSecurityContext: {}
    # tolerations: []
    # podLabels: {}
    autoOffsetReset: "latest" # CHANGED
    livenessProbe:
      enabled: true
      initialDelaySeconds: 5
      periodSeconds: 320
    # volumes: []
    # volumeMounts: []
    # maxBatchSize: ""
    # processes: ""
    # inputBlockSize: ""
    # outputBlockSize: ""
    maxBatchTimeMs: 750
    # queuedMaxMessagesKbytes: ""
    # queuedMinMessages: ""
    noStrictOffsetReset: true # CHANGED

  genericMetricsDistributionConsumer:
    enabled: true
    replicas: 1
    env: []
    resources: {}
    affinity: {}
    nodeSelector: {}
    securityContext: {}
    topologySpreadConstraints: []
    containerSecurityContext: {}
    # tolerations: []
    # podLabels: {}
    autoOffsetReset: "latest" # CHANGED
    livenessProbe:
      enabled: true
      initialDelaySeconds: 5
      periodSeconds: 320
    # volumes: []
    # volumeMounts: []
    # maxBatchSize: ""
    # processes: ""
    # inputBlockSize: ""
    # outputBlockSize: ""
    maxBatchTimeMs: 750
    # queuedMaxMessagesKbytes: ""
    # queuedMinMessages: ""
    noStrictOffsetReset: true # CHANGED

  genericMetricsSetsConsumer:
    enabled: true
    replicas: 1
    env: []
    resources: {}
    affinity: {}
    nodeSelector: {}
    securityContext: {}
    topologySpreadConstraints: []
    containerSecurityContext: {}
    # tolerations: []
    # podLabels: {}
    autoOffsetReset: "latest" # CHANGED
    livenessProbe:
      enabled: true
      initialDelaySeconds: 5
      periodSeconds: 320
    # volumes: []
    # volumeMounts: []
    # maxBatchSize: ""
    # processes: ""
    # inputBlockSize: ""
    # outputBlockSize: ""
    maxBatchTimeMs: 750
    # queuedMaxMessagesKbytes: ""
    # queuedMinMessages: ""
    noStrictOffsetReset: true # CHANGED

  subscriptionConsumerMetrics:
    enabled: true
    replicas: 1
    env: []
    resources: {}
    #  requests:
    #    cpu: 200m
    #    memory: 500Mi
    affinity: {}
    nodeSelector: {}
    securityContext: {}
    topologySpreadConstraints: []
    containerSecurityContext: {}
    # tolerations: []
    # podLabels: {}
    autoOffsetReset: "latest" # CHANGED
    livenessProbe:
      enabled: true
      initialDelaySeconds: 5
      periodSeconds: 320
    # volumes: []
    # volumeMounts: []

  subscriptionConsumerTransactions:
    enabled: true
    replicas: 1
    env: []
    resources: {}
    #  requests:
    #    cpu: 200m
    #    memory: 500Mi
    affinity: {}
    nodeSelector: {}
    securityContext: {}
    topologySpreadConstraints: []
    containerSecurityContext: {}
    # tolerations: []
    # podLabels: {}
    # volumes: []
    # volumeMounts: []
    livenessProbe:
      enabled: true
      initialDelaySeconds: 5
      periodSeconds: 320
    autoOffsetReset: "latest" # CHANGED
    noStrictOffsetReset: true # CHANGED

  subscriptionConsumerSessions:
    replicas: 1
    env: []
    resources: {}
    affinity: {}
    nodeSelector: {}
    securityContext: {}
    topologySpreadConstraints: []
    containerSecurityContext: {}
    # tolerations: []
    # podLabels: {}
    # commitBatchSize: 1
    autoOffsetReset: "latest" # CHANGED
    sidecars: []
    volumes: []
    noStrictOffsetReset: true # CHANGED
    # volumeMounts: []

  replaysConsumer:
    enabled: true
    replicas: 1
    env: []
    resources: {}
    affinity: {}
    nodeSelector: {}
    securityContext: {}
    topologySpreadConstraints: []
    containerSecurityContext: {}
    # tolerations: []
    # podLabels: {}
    autoOffsetReset: "latest" # CHANGED
    livenessProbe:
      enabled: true
      initialDelaySeconds: 5
      periodSeconds: 320
    # maxBatchSize: ""
    # processes: ""
    # inputBlockSize: ""
    # outputBlockSize: ""
    maxBatchTimeMs: 750
    # queuedMaxMessagesKbytes: ""
    # queuedMinMessages: ""
    noStrictOffsetReset: true # CHANGED
    # volumeMounts:
    #   - mountPath: /dev/shm
    #     name: dshm
    # volumes:
    #   - name: dshm
    #     emptyDir:
    #       medium: Memory

  sessionsConsumer:
    replicas: 1
    env: []
    resources: {}
    affinity: {}
    nodeSelector: {}
    securityContext: {}
    topologySpreadConstraints: []
    containerSecurityContext: {}
    # tolerations: []
    # podLabels: {}
    autoOffsetReset: "latest" # CHANGED
    noStrictOffsetReset: true # CHANGED
    # maxBatchSize: ""
    # processes: ""
    # inputBlockSize: ""
    # outputBlockSize: ""
    # maxBatchTimeMs: ""
    # queuedMaxMessagesKbytes: ""
    # queuedMinMessages: ""
    # volumeMounts:
    #   - mountPath: /dev/shm
    #     name: dshm
    # volumes:
    #   - name: dshm
    #     emptyDir:
    #       medium: Memory

  transactionsConsumer:
    enabled: true
    replicas: 1
    env: []
    resources: {}
    affinity: {}
    nodeSelector: {}
    securityContext: {}
    topologySpreadConstraints: []
    containerSecurityContext: {}
    # tolerations: []
    # podLabels: {}
    autoOffsetReset: "latest" # CHANGED
    livenessProbe:
      enabled: true
      initialDelaySeconds: 5
      periodSeconds: 320
    # maxBatchSize: ""
    # processes: ""
    # inputBlockSize: ""
    # outputBlockSize: ""
    maxBatchTimeMs: 750
    # queuedMaxMessagesKbytes: ""
    # queuedMinMessages: ""
    noStrictOffsetReset: true # CHANGED
    # volumeMounts:
    #   - mountPath: /dev/shm
    #     name: dshm
    # volumes:
    #   - name: dshm
    #     emptyDir:
    #       medium: Memory

  profilingProfilesConsumer:
    replicas: 1
    env: []
    resources: {}
    affinity: {}
    nodeSelector: {}
    securityContext: {}
    topologySpreadConstraints: []
    containerSecurityContext: {}
    # tolerations: []
    # podLabels: {}
    autoOffsetReset: "latest" # CHANGED
    livenessProbe:
      enabled: true
      initialDelaySeconds: 5
      periodSeconds: 320
    # maxBatchSize: ""
    # processes: ""
    # inputBlockSize: ""
    # outputBlockSize: ""
    maxBatchTimeMs: 750
    # queuedMaxMessagesKbytes: ""
    # queuedMinMessages: ""
    noStrictOffsetReset: true # CHANGED
    # volumeMounts:
    #   - mountPath: /dev/shm
    #     name: dshm
    # volumes:
    #   - name: dshm
    #     emptyDir:
    #       medium: Memory

  profilingFunctionsConsumer:
    replicas: 1
    env: []
    resources: {}
    affinity: {}
    nodeSelector: {}
    securityContext: {}
    topologySpreadConstraints: []
    containerSecurityContext: {}
    # tolerations: []
    # podLabels: {}
    autoOffsetReset: "latest" # CHANGED
    livenessProbe:
      enabled: true
      initialDelaySeconds: 5
      periodSeconds: 320
    # maxBatchSize: ""
    # processes: ""
    # inputBlockSize: ""
    # outputBlockSize: ""
    maxBatchTimeMs: 750
    # queuedMaxMessagesKbytes: ""
    # queuedMinMessages: ""
    noStrictOffsetReset: true # CHANGED
    # volumeMounts:
    #   - mountPath: /dev/shm
    #     name: dshm
    # volumes:
    #   - name: dshm
    #     emptyDir:
    #       medium: Memory

  issueOccurrenceConsumer:
    enabled: true
    replicas: 1
    env: []
    resources: {}
    affinity: {}
    nodeSelector: {}
    securityContext: {}
    topologySpreadConstraints: []
    containerSecurityContext: {}
    # tolerations: []
    # podLabels: {}
    autoOffsetReset: "latest" # CHANGED
    livenessProbe:
      enabled: true
      initialDelaySeconds: 5
      periodSeconds: 320
    # maxBatchSize: ""
    # processes: ""
    # inputBlockSize: ""
    # outputBlockSize: ""
    maxBatchTimeMs: 750
    # queuedMaxMessagesKbytes: ""
    # queuedMinMessages: ""
    noStrictOffsetReset: true # CHANGED
    # volumeMounts:
    #   - mountPath: /dev/shm
    #     name: dshm
    # volumes:
    #   - name: dshm
    #     emptyDir:
    #       medium: Memory

  spansConsumer:
    enabled: true
    replicas: 1
    env: []
    resources: {}
    affinity: {}
    nodeSelector: {}
    securityContext: {}
    topologySpreadConstraints: []
    containerSecurityContext: {}
    # tolerations: []
    # podLabels: {}
    autoOffsetReset: "latest" # CHANGED
    livenessProbe:
      enabled: true
      initialDelaySeconds: 5
      periodSeconds: 320
    # maxBatchSize: ""
    # processes: ""
    # inputBlockSize: ""
    # outputBlockSize: ""
    maxBatchTimeMs: 750
    # queuedMaxMessagesKbytes: ""
    # queuedMinMessages: ""
    noStrictOffsetReset: true # CHANGED
    # volumeMounts:
    #   - mountPath: /dev/shm
    #     name: dshm
    # volumes:
    #   - name: dshm
    #     emptyDir:
    #       medium: Memory

  groupAttributesConsumer:
    enabled: true
    replicas: 1
    env: []
    resources: {}
    affinity: {}
    nodeSelector: {}
    securityContext: {}
    topologySpreadConstraints: []
    containerSecurityContext: {}
    # tolerations: []
    # podLabels: {}
    autoOffsetReset: "latest" # CHANGED
    livenessProbe:
      enabled: true
      initialDelaySeconds: 5
      periodSeconds: 320
    # maxBatchSize: ""
    # processes: ""
    # inputBlockSize: ""
    # outputBlockSize: ""
    maxBatchTimeMs: 750
    # queuedMaxMessagesKbytes: ""
    # queuedMinMessages: ""
    noStrictOffsetReset: true # CHANGED
    # volumeMounts:
    #   - mountPath: /dev/shm
    #     name: dshm
    # volumes:
    #   - name: dshm
    #     emptyDir:
    #       medium: Memory

  dbInitJob:
    env: []

  migrateJob:
    env: []

  clickhouse:
    maxConnections: 100

  rustConsumer: false

hooks:
  enabled: true
  preUpgrade: false
  removeOnSuccess: true
  activeDeadlineSeconds: 1200 # CHANGED
  shareProcessNamespace: false
  dbCheck:
    enabled: true
    image:
      # repository: subfuzion/netcat
      # tag: latest
      # pullPolicy: IfNotPresent
      imagePullSecrets: []
    env: []
    # podLabels: {}
    podAnnotations: {}
    resources:
      limits:
        memory: 64Mi
      requests:
        cpu: 100m
        memory: 64Mi
    affinity: {}
    nodeSelector: {}
    securityContext: {}
    containerSecurityContext: {}
    # tolerations: []
    # volumes: []
    # volumeMounts: []
  dbInit:
    enabled: true
    env: []
    # podLabels: {}
    podAnnotations: {}
    resources:
      limits:
        memory: 2048Mi
      requests:
        cpu: 300m
        memory: 2048Mi
    sidecars: []
    volumes: []
    affinity: {}
    nodeSelector: {}
    # tolerations: []
    # volumes: []
    # volumeMounts: []
  snubaInit:
    enabled: true
    # As snubaInit doesn't support configuring partition and replication factor, you can disable snubaInit's kafka topic creation by setting `kafka.enabled` to `false`,
    # and create the topics using `kafka.provisioning.topics` with the desired partition and replication factor.
    # Note that when you set `kafka.enabled` to `false`, snuba component might fail to start if newly added topics are not created by `kafka.provisioning`.
    kafka:
      enabled: true
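    ## For example (a sketch only; the partition count below is a hypothetical value, not one
    ## used in this deployment): set `enabled: false` above and declare the topics yourself
    ## under `kafka.provisioning.topics` further down in this file, e.g.
    ##   kafka:
    ##     provisioning:
    ##       replicationFactor: 3
    ##       topics:
    ##         - name: ingest-events
    ##           partitions: 4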
    # podLabels: {}
    podAnnotations: {}
    resources:
      limits:
        cpu: 2000m
        memory: 1Gi
      requests:
        cpu: 700m
        memory: 1Gi
    affinity: {}
    nodeSelector: {}
    # tolerations: []
    # volumes: []
    # volumeMounts: []
  snubaMigrate:
    enabled: false # CHANGED
    # podLabels: {}
    # volumes: []
    # volumeMounts: []

system:
  ## be sure to include the scheme on the url, for example: "https://sentry.example.com"
  url: "https://sentry.domain.com" # CHANGED
  adminEmail: "[email protected]" # CHANGED
  ## This should only be used if you’re installing Sentry behind your company’s firewall.
  public: false
  ## This will generate one for you (it must be given upon updates)
  # secretKey: "xx"

mail:
  # For example: smtp
  backend: smtp # CHANGED
  useTls: false
  useSsl: false
  username: ""
  password: ""
  # existingSecret: secret-name
  ## set existingSecretKey if key name inside existingSecret is different from 'mail-password'
  # existingSecretKey: secret-key-name
  port: 25
  host: "server.mail" # CHANGED
  from: "[email protected]" # CHANGED

symbolicator:
  enabled: true # CHANGED
  api:
    usedeployment: true  # Set true to use Deployment, false for StatefulSet
    persistence:
      enabled: true  # Set true for using PersistentVolumeClaim, false for emptyDir
      accessModes: ["ReadWriteOnce"]
      # storageClassName: standard
      size: "10Gi"
    replicas: 1
    env: []
    probeInitialDelaySeconds: 10
    resources: {}
    affinity: {}
    nodeSelector: {}
    securityContext: {}
    topologySpreadConstraints: []
    containerSecurityContext: {}
    # tolerations: []
    # podLabels: {}
    # priorityClassName: "xxx"
    config: |-
      # See: https://getsentry.github.io/symbolicator/#configuration
      cache_dir: "/data"
      bind: "0.0.0.0:3021"
      logging:
        level: "warn"
      metrics:
        statsd: null
        prefix: "symbolicator"
      sentry_dsn: null
      connect_to_reserved_ips: true
      # caches:
      #   downloaded:
      #     max_unused_for: 1w
      #     retry_misses_after: 5m
      #     retry_malformed_after: 5m
      #   derived:
      #     max_unused_for: 1w
      #     retry_misses_after: 5m
      #     retry_malformed_after: 5m
      #   diagnostics:
      #     retention: 1w

    # TODO autoscaling is not yet implemented
    autoscaling:
      enabled: false
      minReplicas: 2
      maxReplicas: 5
      targetCPUUtilizationPercentage: 50

    # volumes: []
    # volumeMounts: []

  # TODO The cleanup cronjob is not yet implemented
  cleanup:
    enabled: false
    # podLabels: {}
    # affinity: {}
    # env: []

auth:
  register: false # CHANGED

service:
  name: sentry
  type: ClusterIP
  externalPort: 9000
  annotations: {}
  # externalIPs:
  # - 192.168.0.1
  # loadBalancerSourceRanges: []

# https://github.com/settings/apps (Create a Github App)
github: {}
# github:
#   appId: "xxxx"
#   appName: MyAppName
#   clientId: "xxxxx"
#   clientSecret: "xxxxx"
#   privateKey: "-----BEGIN RSA PRIVATE KEY-----\nMIIEpA" !!!! Don't forget a trailing \n
#   webhookSecret:  "xxxxx"
#
#   Note: if you use `existingSecret`, all above `clientId`, `clientSecret`, `privateKey`, `webhookSecret`
#   params would be ignored, because chart will suppose that they are stored in `existingSecret`. So you
#   must define all required keys and set it at least to empty strings if they are not needed in `existingSecret`
#   secret (client-id, client-secret, webhook-secret, private-key)
#
#   existingSecret: "xxxxx"
#   existingSecretPrivateKeyKey: ""     # by default "private-key"
#   existingSecretWebhookSecretKey: ""  # by default "webhook-secret"
#   existingSecretClientIdKey: ""       # by default "client-id"
#   existingSecretClientSecretKey: ""   # by default "client-secret"
#
#   Reference -> https://docs.sentry.io/product/integrations/source-code-mgmt/github/

# https://developers.google.com/identity/sign-in/web/server-side-flow#step_1_create_a_client_id_and_client_secret
google: {}
# google:
#   clientId: ""
#   clientSecret: ""
#   existingSecret: ""
#   existingSecretClientIdKey: "" # by default "client-id"
#   existingSecretClientSecretKey: "" # by default "client-secret"

slack: {}
# slack:
#   clientId:
#   clientSecret:
#   signingSecret:
#   existingSecret:
#   Reference -> https://develop.sentry.dev/integrations/slack/

discord: {}
# discord:
#   applicationId:
#   publicKey:
#   clientSecret:
#   botToken:
#   existingSecret:
#   Reference -> https://develop.sentry.dev/integrations/discord/

openai:
   existingSecret: "sentry-openai" # CHANGED
   existingSecretKey: "api-token" # CHANGED

nginx:
  enabled: true
  containerPort: 8080
  existingServerBlockConfigmap: '{{ template "sentry.fullname" . }}'
  resources: {}
  replicaCount: 1
  service:
    type: ClusterIP
    ports:
      http: 80
  extraLocationSnippet: false
  customReadinessProbe:
    tcpSocket:
      port: http
    initialDelaySeconds: 5
    timeoutSeconds: 3
    periodSeconds: 5
    successThreshold: 1
    failureThreshold: 3
  # extraLocationSnippet: |
  #  location /admin {
  #    allow 1.2.3.4; # VPN network
  #    deny all;
  #    proxy_pass http://sentry;
  #  }
  # Use this to enable an extra service account
  # serviceAccount:
  #   create: false
  #   name: nginx
  metrics:
    serviceMonitor: {}

ingress:
  enabled: true
  # If you are using traefik ingress controller, switch this to 'traefik'
  # if you are using AWS ALB Ingress controller, switch this to 'aws-alb'
  # if you are using GKE Ingress controller, switch this to 'gke'
  regexPathStyle: nginx
  # ingressClassName: nginx
  # If you are using AWS ALB Ingress controller, switch to true if you want activate the http to https redirection.
  alb:
    httpRedirect: false
  # annotations:
  #   If you are using nginx ingress controller, please use at least those 2 annotations
  #   kubernetes.io/ingress.class: nginx
  #   nginx.ingress.kubernetes.io/use-regex: "true"
  #   https://github.com/getsentry/self-hosted/issues/1927
  #   nginx.ingress.kubernetes.io/proxy-buffers-number: "16"
  #   nginx.ingress.kubernetes.io/proxy-buffer-size: "32k"
  #
  # hostname:
  # additionalHostNames: []
  #
  # tls:
  # - secretName:
  #   hosts:

filestore:
  # Set to one of filesystem, gcs or s3 as supported by Sentry.
  backend: s3 # CHANGED

  filesystem:
    path: /var/lib/sentry/files

    ## Enable persistence using Persistent Volume Claims
    ## ref: http://kubernetes.io/docs/user-guide/persistent-volumes/
    ##
    persistence:
      enabled: true
      ## database data Persistent Volume Storage Class
      ## If defined, storageClassName: <storageClass>
      ## If set to "-", storageClassName: "", which disables dynamic provisioning
      ## If undefined (the default) or set to null, no storageClassName spec is
      ##   set, choosing the default provisioner.  (gp2 on AWS, standard on
      ##   GKE, AWS & OpenStack)
      ##
      # storageClass: "-"
      accessMode: ReadWriteOnce
      size: 10Gi

      ## Whether to mount the persistent volume to the Sentry worker and
      ## cron deployments. This setting needs to be enabled for some advanced
      ## Sentry features, such as private source maps. If you disable this
      ## setting, the Sentry workers will not have access to artifacts you upload
      ## through the web deployment.
      ## Please note that you may need to change your accessMode to ReadWriteMany
      ## if you plan on having the web, worker and cron deployments run on
      ## different nodes.
      persistentWorkers: false

      ## If existingClaim is specified, no PVC will be created and this claim will
      ## be used
      existingClaim: ""

  gcs: {}
    ## Point this at a pre-configured secret containing a service account. The resulting
    ## secret will be mounted at /var/run/secrets/google
    # secretName:
    # credentialsFile: credentials.json
  # bucketName:

  ## Currently unconfigured and changing this has no impact on the template configuration.
  ## Note that you can use a secret with default references "s3-access-key-id" and "s3-secret-access-key".
  ## Otherwise, you can use custom secret references, or use plain text values.
  s3:
  #  signature_version:
    region_name: de
    default_acl: private
  #  existingSecret:
  #  accessKeyIdRef:
  #  secretAccessKeyRef:
  #  accessKey:
  #  secretKey:
  #  bucketName:
  #  endpointUrl:
  #  signature_version:
  #  region_name:
  #  default_acl:
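  ## For example (hypothetical values: this secret, bucket and endpoint are illustrative only,
  ## not part of this deployment), a filled-in block using an existing secret could look like:
  # s3:
  #   existingSecret: sentry-s3
  #   bucketName: sentry-files
  #   endpointUrl: https://s3.example.com
  #   region_name: de
  #   default_acl: private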

config:
  # No YAML Extension Config Given
  configYml: {}
  sentryConfPy: |
    # No Python Extension Config Given
  snubaSettingsPy: |
    # No Python Extension Config Given
  relay: |
    # No YAML relay config given
  web:
    httpKeepalive: 15
    maxRequests: 100000
    maxRequestsDelta: 500
    maxWorkerLifetime: 86400

clickhouse:
  enabled: true
  clickhouse:
    replicas: "3" # CHANGED
    imageVersion: "21.8.13.6"
    configmap:
      remote_servers:
        internal_replication: true
        replica:
          backup:
            enabled: false
      zookeeper_servers:
        enabled: true
        config:
          - index: "clickhouse"
            hostTemplate: "{{ .Release.Name }}-zookeeper-clickhouse"
            port: "2181"
      users:
        enabled: false
        user:
          # the first user will be used if enabled
          - name: default
            config:
              password: ""
              networks:
                - ::/0
              profile: default
              quota: default

    persistentVolumeClaim:
      enabled: true
      dataPersistentVolume:
        enabled: true
        accessModes:
          - "ReadWriteOnce"
        storage: "30Gi"

  ## Use this to enable an extra service account
  # serviceAccount:
  #   annotations: {}
  #   enabled: false
  #   name: "sentry-clickhouse"
  #   automountServiceAccountToken: true

## This value is only used when clickhouse.enabled is set to false
##
externalClickhouse:
  ## Hostname or ip address of external clickhouse
  ##
  host: "clickhouse"
  tcpPort: 9000
  httpPort: 8123
  username: default
  password: ""
  database: default
  singleNode: true
  # existingSecret: secret-name
  ## set existingSecretKey if key name inside existingSecret is different from 'postgres-password'
  # existingSecretKey: secret-key-name
  ## Cluster name, can be found in config
  ## (https://clickhouse.tech/docs/en/operations/server-configuration-parameters/settings/#server-settings-remote-servers)
  ## or by executing `select * from system.clusters`
  ##
  # clusterName: test_shard_localhost

# Settings for Zookeeper.
# See https://github.com/bitnami/charts/tree/master/bitnami/zookeeper
zookeeper:
  enabled: true
  nameOverride: zookeeper-clickhouse
  replicaCount: 3 # CHANGED
  persistence: # CHANGED
    enabled: false # CHANGED
  autopurge: # CHANGED
    purgeInterval: 6 # CHANGED
    snapRetainCount: 3 # CHANGED

# Settings for Kafka.
# See https://github.com/bitnami/charts/tree/master/bitnami/kafka
kafka:
  enabled: true
  heapOpts: -Xmx4096m -Xms4096m # CHANGED
  provisioning:
    ## Increasing the replicationFactor enhances data reliability during Kafka pod failures by replicating data across multiple brokers.
    replicationFactor: 3 # CHANGED
    enabled: true
    # Topic list is based on files below.
    # - https://github.com/getsentry/snuba/blob/master/snuba/utils/streams/topics.py
    # - https://github.com/getsentry/self-hosted/blob/master/install/create-kafka-topics.sh#L6
    ## Default number of partitions for topics when unspecified
    ##
    # numPartitions: 1
    # Note that snuba component might fail if you set `hooks.snubaInit.kafka.enabled` to `false` and remove the topics from this default topic list.
    topics:
      - name: events
        ## Number of partitions for this topic
        # partitions: 1
        config:
          "message.timestamp.type": LogAppendTime
      - name: event-replacements
      - name: snuba-commit-log
        config:
          "cleanup.policy": "compact,delete"
          "min.compaction.lag.ms": "3600000"
      - name: cdc
      - name: transactions
        config:
          "message.timestamp.type": LogAppendTime
      - name: snuba-transactions-commit-log
        config:
          "cleanup.policy": "compact,delete"
          "min.compaction.lag.ms": "3600000"
      - name: snuba-metrics
        config:
          "message.timestamp.type": LogAppendTime
      - name: outcomes
      - name: outcomes-billing
      - name: ingest-sessions
      - name: snuba-sessions-commit-log
        config:
          "cleanup.policy": "compact,delete"
          "min.compaction.lag.ms": "3600000"
      - name: snuba-metrics-commit-log
        config:
          "cleanup.policy": "compact,delete"
          "min.compaction.lag.ms": "3600000"
      - name: scheduled-subscriptions-events
      - name: scheduled-subscriptions-transactions
      - name: scheduled-subscriptions-sessions
      - name: scheduled-subscriptions-metrics
      - name: scheduled-subscriptions-generic-metrics-sets
      - name: scheduled-subscriptions-generic-metrics-distributions
      - name: scheduled-subscriptions-generic-metrics-counters
      - name: events-subscription-results
      - name: transactions-subscription-results
      - name: sessions-subscription-results
      - name: metrics-subscription-results
      - name: generic-metrics-subscription-results
      - name: snuba-queries
        config:
          "message.timestamp.type": LogAppendTime
      - name: processed-profiles
        config:
          "message.timestamp.type": LogAppendTime
      - name: profiles-call-tree
      - name: ingest-replay-events
        config:
          "message.timestamp.type": LogAppendTime
          "max.message.bytes": "15000000"
      - name: snuba-generic-metrics
        config:
          "message.timestamp.type": LogAppendTime
      - name: snuba-generic-metrics-sets-commit-log
        config:
          "cleanup.policy": "compact,delete"
          "min.compaction.lag.ms": "3600000"
      - name: snuba-generic-metrics-distributions-commit-log
        config:
          "cleanup.policy": "compact,delete"
          "min.compaction.lag.ms": "3600000"
      - name: snuba-generic-metrics-counters-commit-log
        config:
          "cleanup.policy": "compact,delete"
          "min.compaction.lag.ms": "3600000"
      - name: generic-events
        config:
          "message.timestamp.type": LogAppendTime
      - name: snuba-generic-events-commit-log
        config:
          "cleanup.policy": "compact,delete"
          "min.compaction.lag.ms": "3600000"
      - name: group-attributes
        config:
          "message.timestamp.type": LogAppendTime
      - name: snuba-attribution
      - name: snuba-dead-letter-metrics
      - name: snuba-dead-letter-sessions
      - name: snuba-dead-letter-generic-metrics
      - name: snuba-dead-letter-replays
      - name: snuba-dead-letter-generic-events
      - name: snuba-dead-letter-querylog
      - name: snuba-dead-letter-group-attributes
      - name: ingest-attachments
      - name: ingest-transactions
      - name: ingest-events
        ## If the number of exceptions increases, it is recommended to increase the number of partitions for ingest-events
        # partitions: 1
      - name: ingest-replay-recordings
      - name: ingest-metrics
      - name: ingest-performance-metrics
      - name: ingest-monitors
      - name: profiles
      - name: ingest-occurrences
      - name: snuba-spans
      - name: shared-resources-usage
      - name: snuba-metrics-summaries
  listeners:
    client:
      protocol: "PLAINTEXT"
    controller:
      protocol: "PLAINTEXT"
    interbroker:
      protocol: "PLAINTEXT"
    external:
      protocol: "PLAINTEXT"
  zookeeper:
    enabled: false
  kraft:
    enabled: true
  controller:
    replicaCount: 3
    ## if the load on the kafka controller increases, resourcesPreset must be increased
    resourcesPreset: large # small, medium, large, xlarge, 2xlarge # CHANGED
    ## if the load on the kafka controller increases, persistence.size must be increased
    # persistence:
    #   size: 8Gi
    persistence: # CHANGED
      enabled: false # CHANGED
  broker: # CHANGED
    resourcesPreset: large # small, medium, large, xlarge, 2xlarge # CHANGED
    persistence: # CHANGED
      enabled: false # CHANGED


  ## Use this to enable an extra service account
  # serviceAccount:
  #   create: false
  #   name: kafka

  ## Use this to enable an extra service account
  # zookeeper:
  #   serviceAccount:
  #     create: false
  #     name: zookeeper

## This value is only used when kafka.enabled is set to false
##
externalKafka:
  ## Hostname or ip address of external kafka
  ##
  # host: "kafka-confluent"
  port: 9092

sourcemaps:
  enabled: false

redis:
  enabled: true
  replica:
    replicaCount: 0 # CHANGED
  auth:
    enabled: false
    sentinel: false
  ## Just omit the password field if your redis cluster doesn't use password
  # password: redis
    # existingSecret: secret-name
    ## set existingSecretPasswordKey if key name inside existingSecret is different from 'redis-password'
    # existingSecretPasswordKey: secret-key-name
  nameOverride: sentry-redis
  master:
    persistence:
      enabled: true
      size: 30Gi # CHANGED
  ## Use this to enable an extra service account
  # serviceAccount:
  #   create: false
  #   name: sentry-redis

## This value is only used when redis.enabled is set to false
##
externalRedis:
  ## Hostname or ip address of external redis cluster
  ##
  ## Just omit the password field if your redis cluster doesn't use password
  # password: redis
  # existingSecret: secret-name
  ## set existingSecretKey if key name inside existingSecret is different from 'redis-password'
  # existingSecretKey: secret-key-name
  ## Integer database number to use for redis (This is an integer)
  # db: 0
  ## Use ssl for the connection to Redis (True/False)
  # ssl: false

postgresql:
  enabled: true
  nameOverride: sentry-postgresql
  auth:
    database: sentry
  replication:
    enabled: false
    readReplicas: 2
    synchronousCommit: "on"
    numSynchronousReplicas: 1
    applicationName: sentry
  ## Use this to enable an extra service account
  # serviceAccount:
  #   enabled: false
  ## Default connection max age is 0 (unlimited connections)
  ## Set to a higher number to close connections after a period of time in seconds
  connMaxAge: 0
  ## If you are increasing the number of replicas, you need to increase max_connections
  # primary:
  #   extendedConfiguration: |
  #     max_connections=100
  ## When increasing the number of exceptions, you need to increase persistence.size
  # primary:
  #   persistence:
  #     size: 8Gi

## This value is only used when postgresql.enabled is set to false
## Set either externalPostgresql.password or externalPostgresql.existingSecret to configure password
externalPostgresql:
  # password: postgres
  existingSecret: sentry-postgresql # CHANGED
  # set existingSecretKeys in a secret, if not specified, value from the secret won't be used
  # if externalPostgresql.existingSecret is used, externalPostgresql.existingSecretKeys.password must be specified.
  existingSecretKeys: # CHANGED
     password: postgres-password # CHANGED
  #   username: username
  #   database: database
  #   port: port
  #   host: host
  database: sentry3 # CHANGED
  # sslMode: require
  ## Default connection max age is 0 (unlimited connections)
  ## Set to a higher number to close connections after a period of time in seconds
  connMaxAge: 0

rabbitmq:
  ## If disabled, Redis will be used instead as the broker.
  enabled: true
  vhost: /
  clustering:
    forceBoot: true
    rebalance: true
  replicaCount: 3 # CHANGED
  auth:
    erlangCookie: pHgpy3Q6adTskzAT6bLHCFqFTF7lMxhA
    username: guest
    password: guest
  nameOverride: ""

  pdb:
    create: true
  persistence:
    enabled: true
  resources: {}
  memoryHighWatermark: {}
    # enabled: true
    # type: relative
    # value: 0.4

  extraSecrets:
    load-definition:
      load_definition.json: |
        {
          "users": [
            {
              "name": "{{ .Values.auth.username }}",
              "password": "{{ .Values.auth.password }}",
              "tags": "administrator"
            }
          ],
          "permissions": [{
            "user": "{{ .Values.auth.username }}",
            "vhost": "/",
            "configure": ".*",
            "write": ".*",
            "read": ".*"
          }],
          "policies": [
            {
              "name": "ha-all",
              "pattern": ".*",
              "vhost": "/",
              "definition": {
                "ha-mode": "all",
                "ha-sync-mode": "automatic",
                "ha-sync-batch-size": 1
              }
            }
          ],
          "vhosts": [
            {
              "name": "/"
            }
          ]
        }
  loadDefinition:
    enabled: true
    existingSecret: load-definition
  extraConfiguration: |
    load_definitions = /app/load_definition.json
  ## Use this to enable an extra service account
  # serviceAccount:
  #   create: false
  #   name: rabbitmq

memcached:
  memoryLimit: "2048"
  maxItemSize: "26214400"
  args:
    - "memcached"
    - "-u memcached"
    - "-p 11211"
    - "-v"
    - "-m $(MEMCACHED_MEMORY_LIMIT)"
    - "-I $(MEMCACHED_MAX_ITEM_SIZE)"
  extraEnvVarsCM: "sentry-memcached"

## Prometheus Exporter / Metrics
##
metrics:
  enabled: false

  podAnnotations: {}

  ## Configure extra options for liveness and readiness probes
  ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/#configure-probes)
  livenessProbe:
    enabled: true
    initialDelaySeconds: 30
    periodSeconds: 5
    timeoutSeconds: 2
    failureThreshold: 3
    successThreshold: 1
  readinessProbe:
    enabled: true
    initialDelaySeconds: 30
    periodSeconds: 5
    timeoutSeconds: 2
    failureThreshold: 3
    successThreshold: 1

  ## Metrics exporter resource requests and limits
  ## ref: http://kubernetes.io/docs/user-guide/compute-resources/
  resources: {}
  #   limits:
  #     cpu: 100m
  #     memory: 100Mi
  #   requests:
  #     cpu: 100m
  #     memory: 100Mi

  nodeSelector: {}
  tolerations: []
  affinity: {}
  securityContext: {}
  containerSecurityContext: {}
  # schedulerName:
  # Optional extra labels for pod, i.e. redis-client: "true"
  # podLabels: {}
  service:
    type: ClusterIP
    labels: {}

  image:
    repository: prom/statsd-exporter
    tag: v0.17.0
    pullPolicy: IfNotPresent

  # Enable this if you're using https://github.com/coreos/prometheus-operator
  serviceMonitor:
    enabled: false
    additionalLabels: {}
    namespace: ""
    namespaceSelector: {}
    # Default: scrape .Release.Namespace only
    # To scrape all, use the following:
    # namespaceSelector:
    #   any: true
    scrapeInterval: 30s
    # honorLabels: true
    relabelings: []
    metricRelabelings: []

revisionHistoryLimit: 10

# dnsPolicy: "ClusterFirst"
# dnsConfig:
#   nameservers: []
#   searches: []
#   options: []

extraManifests: []

@patsevanton
Contributor

@sdernbach-ionos pay attention to these parameters:

  workerEvents:
    ## If the number of exceptions increases, it is recommended to enable workerEvents
    enabled: false
    # annotations: {}
    queues: "events.save_event,post_process_errors"
    ## When increasing the number of exceptions and enabling workerEvents, it is recommended to increase the number of their replicas
    replicas: 10

kafka:
  enabled: true
  heapOpts: -Xmx4096m -Xms4096m
  provisioning:
    ## Increasing the replicationFactor enhances data reliability during Kafka pod failures by replicating data across multiple brokers.
    replicationFactor: 3
    topics:
      - name: ingest-events
        ## If the number of exceptions increases, it is recommended to increase the number of partitions for ingest-events
        # partitions: 10 # depends on count events
  controller:
    replicaCount: 3
    ## if the load on the kafka controller increases, resourcesPreset must be increased
    resourcesPreset: large
    ## if the load on the kafka controller increases, persistence.size must be increased
    persistence:
      enabled: true
      size: 100Gi # depends on load
  broker:
    persistence:
      enabled: true

You are also running an old version of the Helm chart and 3 ClickHouse replicas.

@sdernbach-ionos

@patsevanton right, we have not updated to the newest chart yet because we wanted to stick with Sentry 24.7.0 and look into the issue first before upgrading. Do you think the upgrade would help? Then we can also do the update.

As for ClickHouse, we took that setting over from a colleague's old installation; as far as we know he increased it to 3 because of our event volume, since it was not working with 1.

Do you think we should also enable workerTransactions right away? There are not only a lot of incoming issues but also a lot of traffic on the websites.

Apart from workerEvents and the Kafka partitions, our configuration should match yours. But do you really think we have to enable persistence? We turned it off to avoid running out of disk space if the volume is sized too small... so I think turning it back on could make things worse.

@patsevanton
Contributor

@sdernbach-ionos I think we should test it. We won't know without testing.

@lbcd
Author

lbcd commented Nov 12, 2024

I still have the same issue since 15/09 (the first message in this thread).

I only use this values.yaml:

ingress:
  enabled: true
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/use-regex: "true"
    hostname:
    additionalHostNames:
      - sentry.xxxx.com
    tls:
    - hosts:
      - sentry.xxxx.com
      secretName: xxxx-com-wild-card-2024-2025

mail:
  # For example: smtp
  backend: smtp
  useTls: true
  useSsl: false
  username: "[email protected]"
  password: "xxxx"
  port: 587
  host: "outlook.office365.com"
  from: "[email protected]"

nginx:
  enabled: false

user:
  create: true
  email: [email protected]
  password: xxxxxx

system:
  url: "https://sentry.xxxx.com"

hooks:
  activeDeadlineSeconds: 2000

kafka:
  controller:
    persistence:
      enabled: false

The kafka part is ignored => coalesce.go:237: warning: skipped value for kafka.config: Not a table.
I have to set up the ingress myself.
And the worst part is that Sentry goes down after one week with the same error as in the first post of this thread.
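
For comparison, a kafka override that only uses keys present in the chart's default values.yaml quoted above might look like the sketch below (the replication factor, preset and size are placeholders taken from the suggestions earlier in this thread, not a verified fix):

kafka:
  provisioning:
    replicationFactor: 3
  controller:
    replicaCount: 3
    resourcesPreset: large
    persistence:
      enabled: true
      size: 100Gi # placeholder, depends on load
  broker:
    persistence:
      enabled: true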

@jacoor

jacoor commented Nov 12, 2024

Yup, same issue here; none of the fixes above worked for me. I tried downgrading, but had a hard time running the reverse migrations...

@fedeabih

I experienced the same Failed to get watermark offsets: Local: Unknown partition issue described here. In my case, the root cause was having the replicationFactor set to 1, which made the high watermark offsets unreliable, especially during broker unavailability or minor instability.

I resolved this by increasing the replication factor to 3. The detailed explanation and fix are in this PR: sentry-kubernetes/charts#1606.

I hope this helps others facing a similar problem!

Additionally, you can keep persistence enabled (persistence.enabled: true) and optimize your Kafka configuration by following these recommendations. Make sure to adjust the values to suit your specific requirements.

kafka:
  enabled: true
  provisioning:
    replicationFactor: 3
  controller:
    extraConfig: |
      log.cleaner.enable=true
      log.cleanup.policy=delete
      log.retention.hours=
      log.retention.bytes=
      log.segment.bytes=
      log.retention.check.interval.ms=
      log.segment.delete.delay.ms=
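
If you would rather scope retention to individual topics instead of the whole broker, the chart's provisioning block also accepts per-topic configs (the same mechanism the default values use for cleanup.policy). A sketch with placeholder values, not tuned for any particular load:

kafka:
  enabled: true
  provisioning:
    replicationFactor: 3
    topics:
      - name: ingest-events
        config:
          "retention.ms": "86400000" # placeholder: 1 day
          "retention.bytes": "1073741824" # placeholder: 1 GiB per partition

Depending on the chart version, the provisioning job may only apply these settings when a topic is first created, so already-existing topics might need to be adjusted manually with the Kafka admin tools.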

@jacoor

jacoor commented Nov 22, 2024

Thanks.
Until the issue is fixed, I have moved to AWS MSK. It works.
