Replay Not Found #1440

Open
OttoLi-De opened this issue Sep 10, 2024 · 62 comments

@OttoLi-De

Issue submitter TODO list

  • I've searched for already existing issues here

Describe the bug (actual behavior)

I'm not sure whether this is a bug or a misconfiguration. I deployed Sentry to an AKS cluster using the Sentry Helm chart.
I managed to get issues sent to the Sentry instance; however, replays do not show. I found the following errors in sentry-web.

Traceback (most recent call last):
  File "/usr/src/sentry/src/sentry/snuba/referrer.py", line 890, in validate_referrer
    raise Exception(error_message)
Exception: referrer replays.query.viewed_by_query is not part of Referrer Enum
01:20:21 [WARNING] sentry.snuba.referrer: referrer replays.query.viewed_by_query is not part of Referrer Enum
Traceback (most recent call last):
  File "/usr/src/sentry/src/sentry/snuba/referrer.py", line 890, in validate_referrer
    raise Exception(error_message)
Exception: referrer replays.endpoints.viewed_by_post is not part of Referrer Enum
01:20:21 [WARNING] sentry.snuba.referrer: referrer replays.endpoints.viewed_by_post is not part of Referrer Enum

Expected behavior

Replays can be played in the Sentry portal

Your installation details

  1. values.yaml
sentry:
  ingestReplayRecordings:
    enabled: true
    volumes: []
snuba:
  replaysConsumer:
    enabled: true
nginx:
  enabled: true
  containerPort: 8080
  existingServerBlockConfigmap: '{{ template "sentry.fullname" . }}'
  resources: {}
  replicaCount: 1
  service:
    type: ClusterIP
    ports:
      http: 80
  extraLocationSnippet: false
ingress:
  enabled: true
  regexPathStyle: nginx
  annotations:
    nginx.ingress.kubernetes.io/affinity: "cookie"
  ingressClassName: nginx
  hostname:  sentry.xyz.com
  2. helm chart version: 24.5.0

Steps to reproduce

Trigger an error on our website to send an issue with a replay to Sentry. However, the issue in the Sentry portal shows "Replay Not Found".
[screenshot: "Replay Not Found" in the issue view]

Screenshots

No response

Logs

No response

Additional context

No response

@patsevanton
Contributor

Hello! Try installing without nginx:

nginx:
  enabled: false
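
and redeploy the release, for example (release name and namespace are placeholders):

helm upgrade sentry sentry/sentry -n <namespace> -f values.yaml --wait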

@OttoLi-De
Author

Hi @patsevanton, thanks for the response. I tried disabling nginx, but I got the same error:

Traceback (most recent call last):
  File "/usr/src/sentry/src/sentry/snuba/referrer.py", line 890, in validate_referrer
    raise Exception(error_message)
Exception: referrer replays.query.details_query is not part of Referrer Enum
12:15:37 [WARNING] sentry.snuba.referrer: referrer replays.query.details_query is not part of Referrer Enum
12:15:38 [INFO] sentry.access.api: api.access (method='GET' view='sentry.replays.endpoints.organization_replay_details.OrganizationReplayDetailsEndpoint' response=200 user_id='1' is_app='False' token_type='None' is_frontend_request='True' organization_id='1' auth_id='None' path='/api/0/organizations/sentry/replays/8cbf9d64b96xxx/' caller_ip='20.11.x.x' user_agent='Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:130.0) Gecko/20100101 Firefox/130.0' rate_limited='False' rate_limit_category='None' request_duration_seconds=0.5768628120422363 rate_limit_type='DNE' concurrent_limit='None' concurrent_requests='None' reset_time='None' group='None' limit='None' remaining='None')
Traceback (most recent call last):
  File "/usr/src/sentry/src/sentry/snuba/referrer.py", line 890, in validate_referrer
    raise Exception(error_message)
Exception: referrer replays.query.viewed_by_query is not part of Referrer Enum
12:15:38 [WARNING] sentry.snuba.referrer: referrer replays.query.viewed_by_query is not part of Referrer Enum
12:15:38 [INFO] sentry.access.api: api.access (method='GET' view='sentry.replays.endpoints.project_replay_viewed_by.ProjectReplayViewedByEndpoint' response=200 user_id='1' is_app='False' token_type='None' is_frontend_request='True' organization_id='1' auth_id='None' path='/api/0/projects/sentry/production/replays/8cbf9d64b96xxx/viewed-by/' caller_ip='20.11.x.x' user_agent='Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:130.0) Gecko/20100101 Firefox/130.0' rate_limited='False' rate_limit_category='None' request_duration_seconds=0.317366361618042 rate_limit_type='DNE' concurrent_limit='None' concurrent_requests='None' reset_time='None' group='None' limit='None' remaining='None')
12:15:38 [INFO] sentry.access.api: api.access (method='GET' view='sentry.replays.endpoints.organization_replay_events_meta.OrganizationReplayEventsMetaEndpoint' response=200 user_id='1' is_app='False' token_type='None' is_frontend_request='True' organization_id='1' auth_id='None' path='/api/0/organizations/sentry/replays-events-meta/' caller_ip='20.11.x.x' user_agent='Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:130.0) Gecko/20100101 Firefox/130.0' rate_limited='False' rate_limit_category='None' request_duration_seconds=0.7086527347564697 rate_limit_type='DNE' concurrent_limit='None' concurrent_requests='None' reset_time='None' group='None' limit='None' remaining='None')
12:15:38 [INFO] sentry.access.api: api.access (method='GET' view='sentry.replays.endpoints.organization_replay_events_meta.OrganizationReplayEventsMetaEndpoint' response=200 user_id='1' is_app='False' token_type='None' is_frontend_request='True' organization_id='1' auth_id='None' path='/api/0/organizations/sentry/replays-events-meta/' caller_ip='20.11.x.x' user_agent='Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:130.0) Gecko/20100101 Firefox/130.0' rate_limited='False' rate_limit_category='None' request_duration_seconds=0.7205452919006348 rate_limit_type='DNE' concurrent_limit='None' concurrent_requests='None' reset_time='None' group='None' limit='None' remaining='None')
12:15:39 [INFO] sentry.access.api: api.access (method='GET' view='sentry.replays.endpoints.project_replay_recording_segment_index.ProjectReplayRecordingSegmentIndexEndpoint' response=200 user_id='1' is_app='False' token_type='None' is_frontend_request='True' organization_id='1' auth_id='None' path='/api/0/projects/sentry/production/replays/8cbf9d64b96xxx/recording-segments/' caller_ip='20.11.x.x' user_agent='Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:130.0) Gecko/20100101 Firefox/130.0' rate_limited='False' rate_limit_category='None' request_duration_seconds=0.7939949035644531 rate_limit_type='DNE' concurrent_limit='None' concurrent_requests='None' reset_time='None' group='None' limit='None' remaining='None')
12:15:39 [WARNING] root: Storage GET error.
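
The final "Storage GET error" suggests the web pod failed to read the recording from the filestore. A quick check (assuming the default filesystem backend and a release named sentry; the namespace is a placeholder):

kubectl -n <namespace> get pvc | grep sentry
kubectl -n <namespace> exec deploy/sentry-web -- ls -la /var/lib/sentry/files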

@patsevanton
Contributor

Is this the full values.yaml?

@patsevanton
Contributor

patsevanton commented Sep 10, 2024

Try these values:

user:
  password: "sentry-admin-password"
  create: true
  email: [email protected]
system:
  url: "http://sentry.xyz.com"
nginx:
  enabled: false
ingress:
  enabled: true
  hostname: sentry.xyz.com
  ingressClassName: "nginx"
  regexPathStyle: nginx
  annotations:
    nginx.ingress.kubernetes.io/proxy-body-size: 200m
    # https://github.com/getsentry/self-hosted/issues/1927
    nginx.ingress.kubernetes.io/proxy-buffers-number: "16"
    nginx.ingress.kubernetes.io/proxy-buffer-size: "32k"
    nginx.ingress.kubernetes.io/affinity: "cookie"

and the command to install:

helm install -n test sentry sentry/sentry --wait --timeout=1000s --values values-test.yaml

@boindil

boindil commented Sep 10, 2024

I installed Sentry today and I have the exact same issue. The list shows that there should be replays, and the data sent by the clients indicates that everything should be working. However, I get the error "Replay Not Found" in the UI.

What I noticed: this chart is at Sentry 24.5.1; might that be the issue? Is there an easy way to upgrade to a newer Sentry version, for example by pinning image tags as sketched below? This would be interesting for security fixes as well ...
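
A sketch of what that could look like in values.yaml (the tag keys are an assumption based on the chart's images.* structure; check the chart's default values before relying on them):

images:
  sentry:
    tag: "24.5.1"
  snuba:
    tag: "24.5.1"
  relay:
    tag: "24.5.1"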

@OttoLi-De
Author

Tried it; still the same error. Here are the image versions of the different modules (the chart defaults):

  1. sentry-web: getsentry/sentry:24.5.1
  2. sentry-relay: getsentry/relay:24.5.1
  3. sentry-ingest-replay-recordings: getsentry/sentry:24.5.1
  4. sentry-snuba-replays-consumer: getsentry/snuba:24.5.1
  5. sentry-snuba-api: getsentry/snuba:24.5.1
  6. sentry-worker: 24.5.1
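
(Collected with something like the following one-liner; the namespace is a placeholder:)

kubectl -n <namespace> get deploy -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.template.spec.containers[0].image}{"\n"}{end}'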

Posting the full values.yaml here:

prefix:

ipv6: false

user:
  create: true
  email: [email protected]
  password: aaaa
  
asHook: true

images:
  sentry:
    imagePullSecrets: []
  snuba:
    imagePullSecrets: []
  relay:
    imagePullSecrets: []
  symbolicator:
    imagePullSecrets: []
  vroom:
    imagePullSecrets: []

serviceAccount:
  enabled: false
  name: "sentry"
  automountServiceAccountToken: true

vroom:
  replicas: 1
  probeFailureThreshold: 5
  probeInitialDelaySeconds: 10
  probePeriodSeconds: 10
  probeSuccessThreshold: 1
  probeTimeoutSeconds: 2
  service:
    annotations: {}

  autoscaling:
    enabled: false
    minReplicas: 2
    maxReplicas: 5
    targetCPUUtilizationPercentage: 50
  init:
    

relay:
  enabled: true
  replicas: 1
  mode: managed
  probeFailureThreshold: 5
  probeInitialDelaySeconds: 10
  probePeriodSeconds: 10
  probeSuccessThreshold: 1
  probeTimeoutSeconds: 2
  service:
    annotations:
      service.beta.kubernetes.io/azure-load-balancer-internal: 'true'

  autoscaling:
    enabled: false
    minReplicas: 2
    maxReplicas: 5
    targetCPUUtilizationPercentage: 50
  init:
    

geodata:
  path: ""
  volumeName: ""
  mountPath: ""

sentry:
  singleOrganization: true
  web:
    enabled: true
    strategyType: RollingUpdate
    replicas: 1
    probeFailureThreshold: 5
    probeInitialDelaySeconds: 10
    probePeriodSeconds: 10
    probeSuccessThreshold: 1
    probeTimeoutSeconds: 2
    service:
      annotations:
        service.beta.kubernetes.io/azure-load-balancer-internal: 'true'

    autoscaling:
      enabled: false
      minReplicas: 2
      maxReplicas: 5
      targetCPUUtilizationPercentage: 50

  features:
    orgSubdomains: false
    vstsLimitedScopes: true
    enableProfiling: false
    enableSessionReplay: true
    enableFeedback: false
    enableSpan: true

  worker:
    enabled: true
    replicas: 3
    autoscaling:
      enabled: false
      minReplicas: 2
      maxReplicas: 5
      targetCPUUtilizationPercentage: 50
    livenessProbe:
      enabled: false
      periodSeconds: 60
      timeoutSeconds: 10
      failureThreshold: 3

  workerEvents:
    enabled: false
    queues: "events.save_event,post_process_errors"
    replicas: 1
    autoscaling:
      enabled: false
      minReplicas: 2
      maxReplicas: 5
      targetCPUUtilizationPercentage: 50
    livenessProbe:
      enabled: false
      periodSeconds: 60
      timeoutSeconds: 10
      failureThreshold: 3

  workerTransactions:
    enabled: false
    queues: "events.save_event_transaction,post_process_transactions"
    replicas: 1
    autoscaling:
      enabled: false
      minReplicas: 2
      maxReplicas: 5
      targetCPUUtilizationPercentage: 50
    livenessProbe:
      enabled: false
      periodSeconds: 60
      timeoutSeconds: 10
      failureThreshold: 3

  ingestConsumerAttachments:
    enabled: true
    replicas: 1
    autoscaling:
      enabled: false
      minReplicas: 1
      maxReplicas: 3
      targetCPUUtilizationPercentage: 50
    livenessProbe:
      enabled: true
      initialDelaySeconds: 5
      periodSeconds: 320

  ingestConsumerEvents:
    enabled: true
    replicas: 1
    autoscaling:
      enabled: false
      minReplicas: 1
      maxReplicas: 3
      targetCPUUtilizationPercentage: 50

    livenessProbe:
      enabled: true
      initialDelaySeconds: 5
      periodSeconds: 320
    
    volumeMounts:
      - mountPath: /dev/shm
        name: dshm
    volumes:
      - name: dshm
        emptyDir:
          medium: Memory

  ingestConsumerTransactions:
    enabled: true
    replicas: 1
    autoscaling:
      enabled: false
      minReplicas: 1
      maxReplicas: 3
      targetCPUUtilizationPercentage: 50

    livenessProbe:
      enabled: true
      initialDelaySeconds: 5
      periodSeconds: 320

  ingestReplayRecordings:
    enabled: true
    replicas: 1
    autoscaling:
      enabled: false
      minReplicas: 1
      maxReplicas: 3
      targetCPUUtilizationPercentage: 50
    livenessProbe:
      enabled: true
      initialDelaySeconds: 5
      periodSeconds: 320

  ingestProfiles:
    replicas: 1
    autoscaling:
      enabled: false
      minReplicas: 1
      maxReplicas: 3
      targetCPUUtilizationPercentage: 50
    livenessProbe:
      enabled: true
      initialDelaySeconds: 5
      periodSeconds: 320
  ingestOccurrences:
    enabled: true
    replicas: 1
    autoscaling:
      enabled: false
      minReplicas: 1
      maxReplicas: 3
      targetCPUUtilizationPercentage: 50
    livenessProbe:
      enabled: true
      initialDelaySeconds: 5
      periodSeconds: 320

  ingestMonitors:
    enabled: true
    replicas: 1
    autoscaling:
      enabled: false
      minReplicas: 1
      maxReplicas: 3
      targetCPUUtilizationPercentage: 50
    livenessProbe:
      enabled: true
      initialDelaySeconds: 5
      periodSeconds: 320

  billingMetricsConsumer:
    enabled: true
    replicas: 1
    autoscaling:
      enabled: false
      minReplicas: 1
      maxReplicas: 3
      targetCPUUtilizationPercentage: 50

    livenessProbe:
      enabled: true
      initialDelaySeconds: 5
      periodSeconds: 320

  genericMetricsConsumer:
    enabled: true
    replicas: 1
    autoscaling:
      enabled: false
      minReplicas: 1
      maxReplicas: 3
      targetCPUUtilizationPercentage: 50

    livenessProbe:
      enabled: true
      initialDelaySeconds: 5
      periodSeconds: 320

  metricsConsumer:
    enabled: true
    replicas: 1
    autoscaling:
      enabled: false
      minReplicas: 1
      maxReplicas: 3
      targetCPUUtilizationPercentage: 50

    livenessProbe:
      enabled: true
      initialDelaySeconds: 5
      periodSeconds: 320

  cron:
    enabled: true
    replicas: 1

  subscriptionConsumerEvents:
    enabled: true
    replicas: 1
    livenessProbe:
      enabled: true
      initialDelaySeconds: 5
      periodSeconds: 320

  subscriptionConsumerSessions:
    replicas: 1

  subscriptionConsumerTransactions:
    enabled: true
    replicas: 1
    livenessProbe:
      enabled: true
      initialDelaySeconds: 5
      periodSeconds: 320


  postProcessForwardErrors:
    enabled: true
    replicas: 1
    livenessProbe:
      enabled: true
      initialDelaySeconds: 5
      periodSeconds: 320

  postProcessForwardTransactions:
    enabled: true
    replicas: 1
    livenessProbe:
      enabled: true
      initialDelaySeconds: 5
      periodSeconds: 320
  postProcessForwardIssuePlatform:
    enabled: true
    replicas: 1
    livenessProbe:
      enabled: true
      initialDelaySeconds: 5
      periodSeconds: 320

  subscriptionConsumerGenericMetrics:
    enabled: true
    replicas: 1
    livenessProbe:
      enabled: true
      initialDelaySeconds: 5
      periodSeconds: 320

  subscriptionConsumerMetrics:
    enabled: true
    replicas: 1
    livenessProbe:
      enabled: true
      initialDelaySeconds: 5
      periodSeconds: 320

  cleanup:
    successfulJobsHistoryLimit: 5
    failedJobsHistoryLimit: 5
    activeDeadlineSeconds: 100
    concurrencyPolicy: Allow
    concurrency: 1
    enabled: true
    schedule: "0 0 * * *"
    days: 90
    logLevel: ''

snuba:
  api:
    enabled: true
    replicas: 1
    probeInitialDelaySeconds: 10
    liveness:
      timeoutSeconds: 2
    readiness:
      timeoutSeconds: 2
    autoscaling:
      enabled: false
      minReplicas: 2
      maxReplicas: 5
      targetCPUUtilizationPercentage: 50

  consumer:
    enabled: true
    replicas: 1
    autoOffsetReset: "earliest"
    livenessProbe:
      enabled: true
      initialDelaySeconds: 5
      periodSeconds: 320

  outcomesConsumer:
    enabled: true
    replicas: 1
    autoOffsetReset: "earliest"
    maxBatchSize: "3"
    livenessProbe:
      enabled: true
      initialDelaySeconds: 5
      periodSeconds: 320

  outcomesBillingConsumer:
    enabled: true
    replicas: 1
    autoOffsetReset: "earliest"
    maxBatchSize: "3"
    livenessProbe:
      enabled: true
      initialDelaySeconds: 5
      periodSeconds: 320

  replacer:
    enabled: true
    replicas: 1
    autoOffsetReset: "earliest"

  metricsConsumer:
    enabled: true
    replicas: 1
    autoOffsetReset: "earliest"
    livenessProbe:
      enabled: true
      initialDelaySeconds: 5
      periodSeconds: 320

  subscriptionConsumerEvents:
    enabled: true
    replicas: 1
    autoOffsetReset: "earliest"
    livenessProbe:
      enabled: true
      initialDelaySeconds: 5
      periodSeconds: 320

  genericMetricsCountersConsumer:
    enabled: true
    replicas: 1
    autoOffsetReset: "earliest"
    livenessProbe:
      enabled: true
      initialDelaySeconds: 5
      periodSeconds: 320

  genericMetricsDistributionConsumer:
    enabled: true
    replicas: 1
    autoOffsetReset: "earliest"
    livenessProbe:
      enabled: true
      initialDelaySeconds: 5
      periodSeconds: 320

  genericMetricsSetsConsumer:
    enabled: true
    replicas: 1
    autoOffsetReset: "earliest"
    livenessProbe:
      enabled: true
      initialDelaySeconds: 5
      periodSeconds: 320

  subscriptionConsumerMetrics:
    enabled: true
    replicas: 1
    autoOffsetReset: "earliest"
    livenessProbe:
      enabled: true
      initialDelaySeconds: 5
      periodSeconds: 320

  subscriptionConsumerTransactions:
    enabled: true
    replicas: 1
    autoOffsetReset: "earliest"
    livenessProbe:
      enabled: true
      initialDelaySeconds: 5
      periodSeconds: 320

  subscriptionConsumerSessions:
    replicas: 1
    autoOffsetReset: "earliest"

  replaysConsumer:
    enabled: true
    replicas: 1
    autoOffsetReset: "earliest"
    livenessProbe:
      enabled: true
      initialDelaySeconds: 5
      periodSeconds: 320

    volumeMounts:
      - mountPath: /dev/shm
        name: dshm
    volumes:
      - name: dshm
        emptyDir:
          medium: Memory

  sessionsConsumer:
    replicas: 1

  transactionsConsumer:
    enabled: true
    replicas: 1
    autoOffsetReset: "earliest"
    livenessProbe:
      enabled: true
      initialDelaySeconds: 5
      periodSeconds: 320

  profilingProfilesConsumer:
    replicas: 1
    autoOffsetReset: "earliest"
    livenessProbe:
      enabled: true
      initialDelaySeconds: 5
      periodSeconds: 320

  profilingFunctionsConsumer:
    replicas: 1
    autoOffsetReset: "earliest"
    livenessProbe:
      enabled: true
      initialDelaySeconds: 5
      periodSeconds: 320
  issueOccurrenceConsumer:
    enabled: true
    replicas: 1
    autoOffsetReset: "earliest"
    livenessProbe:
      enabled: true
      initialDelaySeconds: 5
      periodSeconds: 320

  spansConsumer:
    enabled: true
    replicas: 1
    autoOffsetReset: "earliest"
    livenessProbe:
      enabled: true
      initialDelaySeconds: 5
      periodSeconds: 320

  groupAttributesConsumer:
    enabled: true
    replicas: 1
    autoOffsetReset: "earliest"
    livenessProbe:
      enabled: true
      initialDelaySeconds: 5
      periodSeconds: 320

  dbInitJob:
    env: []

  migrateJob:
    env: []

  clickhouse:
    maxConnections: 100

  rustConsumer: false

hooks:
  enabled: true
  preUpgrade: false
  removeOnSuccess: true
  activeDeadlineSeconds: 1000
  shareProcessNamespace: false
  dbCheck:
    enabled: true
    resources:
      limits:
        memory: 64Mi
      requests:
        cpu: 100m
        memory: 64Mi
  dbInit:
    enabled: false
    resources:
      limits:
        memory: 2048Mi
      requests:
        cpu: 300m
        memory: 2048Mi
  snubaInit:
    enabled: true
    kafka:
      enabled: true
    resources:
      limits:
        cpu: 2000m
        memory: 1Gi
      requests:
        cpu: 700m
        memory: 1Gi
  snubaMigrate:
    enabled: true
	
system:
  url: "https://sentry.xyz.com"
  adminEmail: ""
  public: False

mail:
  backend: smtp
  useTls: false
  useSsl: false
  username: "xx"
  password: "xx"
  port: 
  host: "xx"
  from: "xx"

symbolicator:
  enabled: false
  api:
    replicas: 1
    probeInitialDelaySeconds: 10
    
    config: |-
      cache_dir: "/data"
      bind: "0.0.0.0:3021"
      logging:
        level: "warn"
      metrics:
        statsd: null
        prefix: "symbolicator"
      sentry_dsn: null
      connect_to_reserved_ips: true
    autoscaling:
      enabled: false
      minReplicas: 2
      maxReplicas: 5
      targetCPUUtilizationPercentage: 50

  cleanup:
    enabled: false

auth:
  register: true

service:
  name: sentry
  type: LoadBalancer
  externalPort: 80
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-internal: 'true'

nginx:
  enabled: false
  containerPort: 8080
  existingServerBlockConfigmap: '{{ template "sentry.fullname" . }}'  
  replicaCount: 1
  service:
    type: ClusterIP
    ports:
      http: 80
  extraLocationSnippet: false

ingress:
  enabled: true
  regexPathStyle: nginx
  alb:
    httpRedirect: false
  annotations:
    nginx.ingress.kubernetes.io/proxy-body-size: 200m
    nginx.ingress.kubernetes.io/proxy-buffers-number: "16"
    nginx.ingress.kubernetes.io/proxy-buffer-size: "32k"
    nginx.ingress.kubernetes.io/affinity: "cookie"
  ingressClassName: nginx

filestore:
  backend: filesystem

  filesystem:
    path: /var/lib/sentry/files

    persistence:
      enabled: true
      storageClass: managed-premium
      accessMode: ReadWriteOnce
      size: 10Gi
      existingClaim: ""

config:
  configYml: {}
  sentryConfPy: |
    SECURE_PROXY_SSL_HEADER = ('HTTP_X_FORWARDED_PROTO', 'https')
    SESSION_COOKIE_SECURE = True
    CSRF_COOKIE_SECURE = True
    SOCIAL_AUTH_REDIRECT_IS_HTTPS = True    
    SENTRY_ALLOW_ORIGIN = '*'
  snubaSettingsPy: |
    # No Python Extension Config Given
  relay: |
    # No YAML relay config given
  web:
    httpKeepalive: 15
    maxRequests: 100000
    maxRequestsDelta: 500
    maxWorkerLifetime: 86400

clickhouse:
  enabled: true
  clickhouse:
    imageVersion: "21.8.13.6"
    configmap:
      remote_servers:
        internal_replication: true
        replica:
          backup:
            enabled: false
      zookeeper_servers:
        enabled: true
        config:
          - index: "clickhouse"
            hostTemplate: "{{ .Release.Name }}-zookeeper-clickhouse"
            port: "2181"
      users:
        enabled: false
        user:
          - name: default
            config:
              password: ""
              networks:
                - ::/0
              profile: default
              quota: default
    persistentVolumeClaim:
      enabled: true
      dataPersistentVolume:
        enabled: true
        accessModes:
          - "ReadWriteOnce"
        storage: "30Gi"

externalClickhouse:
  host: "clickhouse"
  tcpPort: 9000
  httpPort: 8123
  username: default
  password: ""
  database: default
  singleNode: true
  
zookeeper:
  enabled: true
  nameOverride: zookeeper-clickhouse
  replicaCount: 3

kafka:
  enabled: true
  provisioning:
    enabled: true
    topics:
      - name: events
        config:
          "message.timestamp.type": LogAppendTime
      - name: event-replacements
      - name: snuba-commit-log
        config:
          "cleanup.policy": "compact,delete"
          "min.compaction.lag.ms": "3600000"
      - name: cdc
      - name: transactions
        config:
          "message.timestamp.type": LogAppendTime
      - name: snuba-transactions-commit-log
        config:
          "cleanup.policy": "compact,delete"
          "min.compaction.lag.ms": "3600000"
      - name: snuba-metrics
        config:
          "message.timestamp.type": LogAppendTime
      - name: outcomes
      - name: outcomes-billing
      - name: ingest-sessions
      - name: snuba-sessions-commit-log
        config:
          "cleanup.policy": "compact,delete"
          "min.compaction.lag.ms": "3600000"
      - name: snuba-metrics-commit-log
        config:
          "cleanup.policy": "compact,delete"
          "min.compaction.lag.ms": "3600000"
      - name: scheduled-subscriptions-events
      - name: scheduled-subscriptions-transactions
      - name: scheduled-subscriptions-sessions
      - name: scheduled-subscriptions-metrics
      - name: scheduled-subscriptions-generic-metrics-sets
      - name: scheduled-subscriptions-generic-metrics-distributions
      - name: scheduled-subscriptions-generic-metrics-counters
      - name: events-subscription-results
      - name: transactions-subscription-results
      - name: sessions-subscription-results
      - name: metrics-subscription-results
      - name: generic-metrics-subscription-results
      - name: snuba-queries
        config:
          "message.timestamp.type": LogAppendTime
      - name: processed-profiles
        config:
          "message.timestamp.type": LogAppendTime
      - name: profiles-call-tree
      - name: ingest-replay-events
        config:
          "message.timestamp.type": LogAppendTime
          "max.message.bytes": "15000000"
      - name: snuba-generic-metrics
        config:
          "message.timestamp.type": LogAppendTime
      - name: snuba-generic-metrics-sets-commit-log
        config:
          "cleanup.policy": "compact,delete"
          "min.compaction.lag.ms": "3600000"
      - name: snuba-generic-metrics-distributions-commit-log
        config:
          "cleanup.policy": "compact,delete"
          "min.compaction.lag.ms": "3600000"
      - name: snuba-generic-metrics-counters-commit-log
        config:
          "cleanup.policy": "compact,delete"
          "min.compaction.lag.ms": "3600000"
      - name: generic-events
        config:
          "message.timestamp.type": LogAppendTime
      - name: snuba-generic-events-commit-log
        config:
          "cleanup.policy": "compact,delete"
          "min.compaction.lag.ms": "3600000"
      - name: group-attributes
        config:
          "message.timestamp.type": LogAppendTime
      - name: snuba-attribution
      - name: snuba-dead-letter-metrics
      - name: snuba-dead-letter-sessions
      - name: snuba-dead-letter-generic-metrics
      - name: snuba-dead-letter-replays
      - name: snuba-dead-letter-generic-events
      - name: snuba-dead-letter-querylog
      - name: snuba-dead-letter-group-attributes
      - name: ingest-attachments
      - name: ingest-transactions
      - name: ingest-events
      - name: ingest-replay-recordings
      - name: ingest-metrics
      - name: ingest-performance-metrics
      - name: ingest-monitors
      - name: profiles
      - name: ingest-occurrences
      - name: snuba-spans
      - name: shared-resources-usage
      - name: snuba-metrics-summaries
  listeners:
    client:
      protocol: "PLAINTEXT"
    controller:
      protocol: "PLAINTEXT"
    interbroker:
      protocol: "PLAINTEXT"
    external:
      protocol: "PLAINTEXT"
  zookeeper:
    enabled: false
  kraft:
    enabled: true
  controller:
    affinity: {}
    securityContext: {}
    containerSecurityContext: {}

externalKafka:
  port: 9092

sourcemaps:
  enabled: false

redis:
  enabled: true
  auth:
    enabled: false
    sentinel: false
  nameOverride: sentry-redis
  usePassword: false
  master:
    persistence:
      enabled: true

externalRedis:
  port: 6379

postgresql:
  enabled: false
  nameOverride: sentry-postgresql
  auth:
    database: sentry
  replication:
    enabled: false
    readReplicas: 2
    synchronousCommit: "on"
    numSynchronousReplicas: 1
    applicationName: sentry
  connMaxAge: 0

externalPostgresql:
  host: xxx
  port: 5432
  username: xx
  password: xx
  connMaxAge: 0

rabbitmq:
  enabled: true
  vhost: /
  clustering:
    forceBoot: true
    rebalance: true
  replicaCount: 3
  auth:
    erlangCookie: xxx
    username: xx
    password: xx
  nameOverride: ""

  pdb:
    create: true
  persistence:
    enabled: true
  
  memoryHighWatermark: {}

  extraSecrets:
    load-definition:
      load_definition.json: |
        {
          "users": [
            {
              "name": "{{ .Values.auth.username }}",
              "password": "{{ .Values.auth.password }}",
              "tags": "administrator"
            }
          ],
          "permissions": [{
            "user": "{{ .Values.auth.username }}",
            "vhost": "/",
            "configure": ".*",
            "write": ".*",
            "read": ".*"
          }],
          "policies": [
            {
              "name": "ha-all",
              "pattern": ".*",
              "vhost": "/",
              "definition": {
                "ha-mode": "all",
                "ha-sync-mode": "automatic",
                "ha-sync-batch-size": 1
              }
            }
          ],
          "vhosts": [
            {
              "name": "/"
            }
          ]
        }
  loadDefinition:
    enabled: true
    existingSecret: load-definition
  extraConfiguration: |
    load_definitions = /app/load_definition.json

memcached:
  memoryLimit: "2048"
  maxItemSize: "26214400"
  args:
    - "memcached"
    - "-u memcached"
    - "-p 11211"
    - "-v"
    - "-m $(MEMCACHED_MEMORY_LIMIT)"
    - "-I $(MEMCACHED_MAX_ITEM_SIZE)"
  extraEnvVarsCM: "sentry-memcached"

metrics:
  enabled: true
  livenessProbe:
    enabled: true
    initialDelaySeconds: 30
    periodSeconds: 5
    timeoutSeconds: 2
    failureThreshold: 3
    successThreshold: 1
  readinessProbe:
    enabled: true
    initialDelaySeconds: 30
    periodSeconds: 5
    timeoutSeconds: 2
    failureThreshold: 3
    successThreshold: 1

  resources:
    limits:
      cpu: 200m
      memory: 200Mi
    requests:
      cpu: 200m
      memory: 200Mi
  service:
    type: ClusterIP

  image:
    repository: prom/statsd-exporter
    tag: v0.17.0
    pullPolicy: IfNotPresent

  serviceMonitor:
    enabled: true
    namespace: "service"
    scrapeInterval: 30s
    
revisionHistoryLimit: 10

@patsevanton
Contributor

patsevanton commented Sep 11, 2024

I cannot reproduce the error.
My values.yaml:

user:
  password: "sentry-admin-password"
  create: true
  email: [email protected]
system:
  url: "http://sentry.apatsev.org.ru"
nginx:
  enabled: false
ingress:
  enabled: true
  hostname: sentry.apatsev.org.ru
  ingressClassName: "nginx"
  regexPathStyle: nginx
  annotations:
    nginx.ingress.kubernetes.io/proxy-body-size: 200m
    # https://github.com/getsentry/self-hosted/issues/1927
    nginx.ingress.kubernetes.io/proxy-buffers-number: "16"
    nginx.ingress.kubernetes.io/proxy-buffer-size: "32k"

install command:

helm install sentry -n test sentry/sentry --version 25.3.0 -f values-sentry-local.yaml --wait --timeout=1000s
coalesce.go:237: warning: skipped value for kafka.config: Not a table.
NAME: sentry
LAST DEPLOYED: Wed Sep 11 08:18:20 2024
NAMESPACE: test
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
* When running upgrades, make sure to give back the `system.secretKey` value.

kubectl -n test get configmap sentry-sentry -o json | grep -m1 -Po '(?<=system.secret-key: )[^\\]*'

[screenshot: replay in the Sentry UI]

Please write a minimal values.yaml with the latest version of the Helm chart, and the installation command, to reproduce the error.

@boindil

boindil commented Sep 11, 2024

> I cannot reproduce the error. My values.yaml [...] Please write a minimal values.yaml with the latest version of the Helm chart, and the installation command, to reproduce the error.

I think there might be a misunderstanding. The error occurs when trying to view a replay of an issue (or directly via the "Replays" menu item). When there are no replays, no error occurs.

[screenshot: "Replay Not Found" error in the replay view]

@patsevanton
Contributor

patsevanton commented Sep 11, 2024

Get the pod status:

kubectl get pod -n namespace | grep replay

Try getting the logs with https://github.com/stern/stern:

stern -n test -l role=ingest-replay-recordings

and

stern -n test -l role=snuba-replays-consumer

@OttoLi-De
Author

@patsevanton here you are, thanks

kubectl get pod -n namespace | grep replay

sentry-ingest-replay-recordings-65d5566bf8-p9ccl                  1/1     Running            0                 17h
sentry-snuba-replays-consumer-7f4c979fcc-zr28t                    1/1     Running            0                 17h

stern -n test -l role=ingest-replay-recordings

sentry-ingest-replay-recordings-65d5566bf8-p9ccl › sentry-ingest-replay-recordings
sentry-ingest-replay-recordings-65d5566bf8-p9ccl sentry-ingest-replay-recordings 15:25:17 [INFO] arroyo.processing.processor: New partitions assigned: {Partition(topic=Topic(name='ingest-replay-recordings'), index=0): 37}
sentry-ingest-replay-recordings-65d5566bf8-p9ccl sentry-ingest-replay-recordings %6|1726001328.713|FAIL|rdkafka#consumer-1| [thrd:GroupCoordinator]: GroupCoordinator: sentry-kafka-controller-0.sentry-kafka-controller-headless.service.svc.cluster.local:9092: Disconnected (after 19414090ms in state UP)
sentry-ingest-replay-recordings-65d5566bf8-p9ccl sentry-ingest-replay-recordings %3|1726001328.715|FAIL|rdkafka#consumer-1| [thrd:GroupCoordinator]: GroupCoordinator: sentry-kafka-controller-0.sentry-kafka-controller-headless.service.svc.cluster.local:9092: Connect to ipv4#10.245.128.151:9092 failed: Connection refused (after 2ms in state CONNECT)
sentry-ingest-replay-recordings-65d5566bf8-p9ccl sentry-ingest-replay-recordings %3|1726001328.888|FAIL|rdkafka#consumer-1| [thrd:GroupCoordinator]: GroupCoordinator: sentry-kafka-controller-0.sentry-kafka-controller-headless.service.svc.cluster.local:9092: Connect to ipv4#10.245.128.151:9092 failed: Connection refused (after 0ms in state CONNECT, 1 identical error(s) suppressed)
sentry-ingest-replay-recordings-65d5566bf8-p9ccl sentry-ingest-replay-recordings %6|1726001362.756|FAIL|rdkafka#consumer-1| [thrd:sentry-kafka-controller-2.sentry-kafka-controller-headless.serv]: sentry-kafka-controller-2.sentry-kafka-controller-headless.service.svc.cluster.local:9092/2: Disconnected (after 19445125ms in state UP)
sentry-ingest-replay-recordings-65d5566bf8-p9ccl sentry-ingest-replay-recordings %6|1726001362.757|FAIL|rdkafka#consumer-1| [thrd:GroupCoordinator]: GroupCoordinator: sentry-kafka-controller-2.sentry-kafka-controller-headless.service.svc.cluster.local:9092: Disconnected (after 22897ms in state UP)
sentry-ingest-replay-recordings-65d5566bf8-p9ccl sentry-ingest-replay-recordings %3|1726001362.760|FAIL|rdkafka#consumer-1| [thrd:sentry-kafka-controller-2.sentry-kafka-controller-headless.serv]: sentry-kafka-controller-2.sentry-kafka-controller-headless.service.svc.cluster.local:9092/2: Connect to ipv4#10.245.128.30:9092 failed: Connection refused (after 3ms in state CONNECT)
sentry-ingest-replay-recordings-65d5566bf8-p9ccl sentry-ingest-replay-recordings %3|1726001362.760|FAIL|rdkafka#consumer-1| [thrd:GroupCoordinator]: GroupCoordinator: sentry-kafka-controller-2.sentry-kafka-controller-headless.service.svc.cluster.local:9092: Connect to ipv4#10.245.128.30:9092 failed: Connection refused (after 2ms in state CONNECT)
sentry-ingest-replay-recordings-65d5566bf8-p9ccl sentry-ingest-replay-recordings %3|1726001362.950|FAIL|rdkafka#consumer-1| [thrd:sentry-kafka-controller-2.sentry-kafka-controller-headless.serv]: sentry-kafka-controller-2.sentry-kafka-controller-headless.service.svc.cluster.local:9092/2: Connect to ipv4#10.245.128.30:9092 failed: Connection refused (after 0ms in state CONNECT, 1 identical error(s) suppressed)
sentry-ingest-replay-recordings-65d5566bf8-p9ccl sentry-ingest-replay-recordings %3|1726001362.993|FAIL|rdkafka#consumer-1| [thrd:GroupCoordinator]: GroupCoordinator: sentry-kafka-controller-2.sentry-kafka-controller-headless.service.svc.cluster.local:9092: Connect to ipv4#10.245.128.30:9092 failed: Connection refused (after 0ms in state CONNECT, 1 identical error(s) suppressed)
sentry-ingest-replay-recordings-65d5566bf8-p9ccl sentry-ingest-replay-recordings %6|1726009822.761|FAIL|rdkafka#consumer-1| [thrd:sentry-kafka-controller-2.sentry-kafka-controller-headless.serv]: sentry-kafka-controller-2.sentry-kafka-controller-headless.service.svc.cluster.local:9092/2: Disconnected (after 8435518ms in state UP)
sentry-ingest-replay-recordings-65d5566bf8-p9ccl sentry-ingest-replay-recordings %3|1726009822.768|FAIL|rdkafka#consumer-1| [thrd:sentry-kafka-controller-2.sentry-kafka-controller-headless.serv]: sentry-kafka-controller-2.sentry-kafka-controller-headless.service.svc.cluster.local:9092/2: Connect to ipv4#10.245.128.30:9092 failed: Connection refused (after 5ms in state CONNECT)
sentry-ingest-replay-recordings-65d5566bf8-p9ccl sentry-ingest-replay-recordings %3|1726009822.930|FAIL|rdkafka#consumer-1| [thrd:sentry-kafka-controller-2.sentry-kafka-controller-headless.serv]: sentry-kafka-controller-2.sentry-kafka-controller-headless.service.svc.cluster.local:9092/2: Connect to ipv4#10.245.128.30:9092 failed: Connection refused (after 0ms in state CONNECT, 1 identical error(s) suppressed)
sentry-ingest-replay-recordings-65d5566bf8-p9ccl sentry-ingest-replay-recordings %6|1726012228.748|FAIL|rdkafka#consumer-1| [thrd:GroupCoordinator]: GroupCoordinator: sentry-kafka-controller-0.sentry-kafka-controller-headless.service.svc.cluster.local:9092: Disconnected (after 10852447ms in state UP)
sentry-ingest-replay-recordings-65d5566bf8-p9ccl sentry-ingest-replay-recordings %3|1726012228.751|FAIL|rdkafka#consumer-1| [thrd:GroupCoordinator]: GroupCoordinator: sentry-kafka-controller-0.sentry-kafka-controller-headless.service.svc.cluster.local:9092: Connect to ipv4#10.245.128.151:9092 failed: Connection refused (after 3ms in state CONNECT)
sentry-ingest-replay-recordings-65d5566bf8-p9ccl sentry-ingest-replay-recordings %3|1726012228.963|FAIL|rdkafka#consumer-1| [thrd:GroupCoordinator]: GroupCoordinator: sentry-kafka-controller-0.sentry-kafka-controller-headless.service.svc.cluster.local:9092: Connect to ipv4#10.245.128.151:9092 failed: Connection refused (after 0ms in state CONNECT, 1 identical error(s) suppressed)
sentry-ingest-replay-recordings-65d5566bf8-p9ccl sentry-ingest-replay-recordings %6|1726022508.718|FAIL|rdkafka#consumer-1| [thrd:GroupCoordinator]: GroupCoordinator: sentry-kafka-controller-0.sentry-kafka-controller-headless.service.svc.cluster.local:9092: Disconnected (after 9966816ms in state UP)
sentry-ingest-replay-recordings-65d5566bf8-p9ccl sentry-ingest-replay-recordings %3|1726022508.720|FAIL|rdkafka#consumer-1| [thrd:GroupCoordinator]: GroupCoordinator: sentry-kafka-controller-0.sentry-kafka-controller-headless.service.svc.cluster.local:9092: Connect to ipv4#10.245.128.151:9092 failed: Connection refused (after 2ms in state CONNECT)
sentry-ingest-replay-recordings-65d5566bf8-p9ccl sentry-ingest-replay-recordings %3|1726022508.916|FAIL|rdkafka#consumer-1| [thrd:GroupCoordinator]: GroupCoordinator: sentry-kafka-controller-0.sentry-kafka-controller-headless.service.svc.cluster.local:9092: Connect to ipv4#10.245.128.151:9092 failed: Connection refused (after 0ms in state CONNECT, 1 identical error(s) suppressed)
sentry-ingest-replay-recordings-65d5566bf8-p9ccl sentry-ingest-replay-recordings %6|1726028668.726|FAIL|rdkafka#consumer-1| [thrd:GroupCoordinator]: GroupCoordinator: sentry-kafka-controller-0.sentry-kafka-controller-headless.service.svc.cluster.local:9092: Disconnected (after 5850900ms in state UP)
sentry-ingest-replay-recordings-65d5566bf8-p9ccl sentry-ingest-replay-recordings %3|1726028668.728|FAIL|rdkafka#consumer-1| [thrd:GroupCoordinator]: GroupCoordinator: sentry-kafka-controller-0.sentry-kafka-controller-headless.service.svc.cluster.local:9092: Connect to ipv4#10.245.128.151:9092 failed: Connection refused (after 2ms in state CONNECT)
sentry-ingest-replay-recordings-65d5566bf8-p9ccl sentry-ingest-replay-recordings %3|1726028668.934|FAIL|rdkafka#consumer-1| [thrd:GroupCoordinator]: GroupCoordinator: sentry-kafka-controller-0.sentry-kafka-controller-headless.service.svc.cluster.local:9092: Connect to ipv4#10.245.128.151:9092 failed: Connection refused (after 0ms in state CONNECT, 1 identical error(s) suppressed)
sentry-ingest-replay-recordings-65d5566bf8-p9ccl sentry-ingest-replay-recordings %6|1726037508.707|FAIL|rdkafka#consumer-1| [thrd:GroupCoordinator]: GroupCoordinator: sentry-kafka-controller-0.sentry-kafka-controller-headless.service.svc.cluster.local:9092: Disconnected (after 8529984ms in state UP)
sentry-ingest-replay-recordings-65d5566bf8-p9ccl sentry-ingest-replay-recordings %3|1726037508.710|FAIL|rdkafka#consumer-1| [thrd:GroupCoordinator]: GroupCoordinator: sentry-kafka-controller-0.sentry-kafka-controller-headless.service.svc.cluster.local:9092: Connect to ipv4#10.245.128.151:9092 failed: Connection refused (after 3ms in state CONNECT)
sentry-ingest-replay-recordings-65d5566bf8-p9ccl sentry-ingest-replay-recordings %3|1726037508.967|FAIL|rdkafka#consumer-1| [thrd:GroupCoordinator]: GroupCoordinator: sentry-kafka-controller-0.sentry-kafka-controller-headless.service.svc.cluster.local:9092: Connect to ipv4#10.245.128.151:9092 failed: Connection refused (after 1ms in state CONNECT, 1 identical error(s) suppressed)
sentry-ingest-replay-recordings-65d5566bf8-p9ccl sentry-ingest-replay-recordings %4|1726040509.751|SESSTMOUT|rdkafka#consumer-1| [thrd:main]: Consumer group session timed out (in join-state steady) after 30004 ms without a successful response from the group coordinator (broker 0, last error was Success): revoking assignment and rejoining group
sentry-ingest-replay-recordings-65d5566bf8-p9ccl sentry-ingest-replay-recordings 07:41:49 [INFO] arroyo.processing.processor: Partitions to revoke: [Partition(topic=Topic(name='ingest-replay-recordings'), index=0)]
sentry-ingest-replay-recordings-65d5566bf8-p9ccl sentry-ingest-replay-recordings 07:41:49 [INFO] arroyo.processing.processor: Closing <arroyo.processing.strategies.healthcheck.Healthcheck object at 0x7f453dd13610>...
sentry-ingest-replay-recordings-65d5566bf8-p9ccl sentry-ingest-replay-recordings 07:41:49 [INFO] arroyo.processing.processor: Waiting for <arroyo.processing.strategies.healthcheck.Healthcheck object at 0x7f453dd13610> to exit...
sentry-ingest-replay-recordings-65d5566bf8-p9ccl sentry-ingest-replay-recordings 07:41:49 [INFO] arroyo.processing.processor: <arroyo.processing.strategies.healthcheck.Healthcheck object at 0x7f453dd13610> exited successfully, releasing assignment.
sentry-ingest-replay-recordings-65d5566bf8-p9ccl sentry-ingest-replay-recordings 07:41:49 [INFO] arroyo.processing.processor: Partition revocation complete.
sentry-ingest-replay-recordings-65d5566bf8-p9ccl sentry-ingest-replay-recordings %5|1726040512.756|REQTMOUT|rdkafka#consumer-1| [thrd:GroupCoordinator]: GroupCoordinator/0: Timed out HeartbeatRequest in flight (after 30009ms, timeout #0)
sentry-ingest-replay-recordings-65d5566bf8-p9ccl sentry-ingest-replay-recordings %4|1726040512.756|REQTMOUT|rdkafka#consumer-1| [thrd:GroupCoordinator]: GroupCoordinator/0: Timed out 1 in-flight, 0 retry-queued, 0 out-queue, 0 partially-sent requests
sentry-ingest-replay-recordings-65d5566bf8-p9ccl sentry-ingest-replay-recordings %3|1726040512.756|FAIL|rdkafka#consumer-1| [thrd:GroupCoordinator]: GroupCoordinator: sentry-kafka-controller-0.sentry-kafka-controller-headless.service.svc.cluster.local:9092: 1 request(s) timed out: disconnect (after 2694449ms in state UP)
sentry-ingest-replay-recordings-65d5566bf8-p9ccl sentry-ingest-replay-recordings 07:41:59 [INFO] arroyo.processing.processor: New partitions assigned: {Partition(topic=Topic(name='ingest-replay-recordings'), index=0): 46}

stern -n test -l role=snuba-replays-consumer

sentry-snuba-replays-consumer-7f4c979fcc-zr28t › sentry-snuba
sentry-snuba-replays-consumer-7f4c979fcc-zr28t sentry-snuba 2024-09-10 15:24:41,698 Initializing Snuba...
sentry-snuba-replays-consumer-7f4c979fcc-zr28t sentry-snuba 2024-09-10 15:24:44,140 Snuba initialization took 2.431388855999103s
sentry-snuba-replays-consumer-7f4c979fcc-zr28t sentry-snuba 2024-09-10 15:24:44,170 Consumer Starting
sentry-snuba-replays-consumer-7f4c979fcc-zr28t sentry-snuba 2024-09-10 15:24:44,170 Checking Clickhouse connections...
sentry-snuba-replays-consumer-7f4c979fcc-zr28t sentry-snuba 2024-09-10 15:24:44,195 Successfully connected to Clickhouse: cluster_name=sentry-clickhouse
sentry-snuba-replays-consumer-7f4c979fcc-zr28t sentry-snuba 2024-09-10 15:24:44,197 librdkafka log level: 6
sentry-snuba-replays-consumer-7f4c979fcc-zr28t sentry-snuba 2024-09-10 15:24:47,268 New partitions assigned: {Partition(topic=Topic(name='ingest-replay-events'), index=0): 70}
sentry-snuba-replays-consumer-7f4c979fcc-zr28t sentry-snuba %6|1726001362.757|FAIL|rdkafka#consumer-2| [thrd:GroupCoordinator]: GroupCoordinator: sentry-kafka-controller-2.sentry-kafka-controller-headless.service.svc.cluster.local:9092: Disconnected (after 19478506ms in state UP)
sentry-snuba-replays-consumer-7f4c979fcc-zr28t sentry-snuba 2024-09-10 20:49:22,758 Error callback from librdKafka -195, _TRANSPORT, GroupCoordinator: sentry-kafka-controller-2.sentry-kafka-controller-headless.service.svc.cluster.local:9092: Disconnected (after 19478506ms in state UP)
sentry-snuba-replays-consumer-7f4c979fcc-zr28t sentry-snuba %3|1726001362.763|FAIL|rdkafka#consumer-2| [thrd:GroupCoordinator]: GroupCoordinator: sentry-kafka-controller-2.sentry-kafka-controller-headless.service.svc.cluster.local:9092: Connect to ipv4#10.245.128.30:9092 failed: Connection refused (after 5ms in state CONNECT)
sentry-snuba-replays-consumer-7f4c979fcc-zr28t sentry-snuba 2024-09-10 20:49:22,774 Error callback from librdKafka -195, _TRANSPORT, GroupCoordinator: sentry-kafka-controller-2.sentry-kafka-controller-headless.service.svc.cluster.local:9092: Connect to ipv4#10.245.128.30:9092 failed: Connection refused (after 5ms in state CONNECT)
sentry-snuba-replays-consumer-7f4c979fcc-zr28t sentry-snuba %3|1726001362.953|FAIL|rdkafka#consumer-2| [thrd:GroupCoordinator]: GroupCoordinator: sentry-kafka-controller-2.sentry-kafka-controller-headless.service.svc.cluster.local:9092: Connect to ipv4#10.245.128.30:9092 failed: Connection refused (after 0ms in state CONNECT, 1 identical error(s) suppressed)
sentry-snuba-replays-consumer-7f4c979fcc-zr28t sentry-snuba 2024-09-10 20:49:22,953 Error callback from librdKafka -195, _TRANSPORT, GroupCoordinator: sentry-kafka-controller-2.sentry-kafka-controller-headless.service.svc.cluster.local:9092: Connect to ipv4#10.245.128.30:9092 failed: Connection refused (after 0ms in state CONNECT, 1 identical error(s) suppressed)
sentry-snuba-replays-consumer-7f4c979fcc-zr28t sentry-snuba %6|1726005470.863|FAIL|rdkafka#consumer-2| [thrd:sentry-kafka-controller-1.sentry-kafka-controller-headless.serv]: sentry-kafka-controller-1.sentry-kafka-controller-headless.service.svc.cluster.local:9092/1: Disconnected (after 23583589ms in state UP)
sentry-snuba-replays-consumer-7f4c979fcc-zr28t sentry-snuba 2024-09-10 21:57:50,864 Error callback from librdKafka -195, _TRANSPORT, sentry-kafka-controller-1.sentry-kafka-controller-headless.service.svc.cluster.local:9092/1: Disconnected (after 23583589ms in state UP)
sentry-snuba-replays-consumer-7f4c979fcc-zr28t sentry-snuba %3|1726005470.866|FAIL|rdkafka#consumer-2| [thrd:sentry-kafka-controller-1.sentry-kafka-controller-headless.serv]: sentry-kafka-controller-1.sentry-kafka-controller-headless.service.svc.cluster.local:9092/1: Connect to ipv4#10.245.128.106:9092 failed: Connection refused (after 2ms in state CONNECT)
sentry-snuba-replays-consumer-7f4c979fcc-zr28t sentry-snuba 2024-09-10 21:57:50,870 Error callback from librdKafka -195, _TRANSPORT, sentry-kafka-controller-1.sentry-kafka-controller-headless.service.svc.cluster.local:9092/1: Connect to ipv4#10.245.128.106:9092 failed: Connection refused (after 2ms in state CONNECT)
sentry-snuba-replays-consumer-7f4c979fcc-zr28t sentry-snuba %3|1726005471.087|FAIL|rdkafka#consumer-2| [thrd:sentry-kafka-controller-1.sentry-kafka-controller-headless.serv]: sentry-kafka-controller-1.sentry-kafka-controller-headless.service.svc.cluster.local:9092/1: Connect to ipv4#10.245.128.106:9092 failed: Connection refused (after 0ms in state CONNECT, 1 identical error(s) suppressed)
sentry-snuba-replays-consumer-7f4c979fcc-zr28t sentry-snuba 2024-09-10 21:57:51,087 Error callback from librdKafka -195, _TRANSPORT, sentry-kafka-controller-1.sentry-kafka-controller-headless.service.svc.cluster.local:9092/1: Connect to ipv4#10.245.128.106:9092 failed: Connection refused (after 0ms in state CONNECT, 1 identical error(s) suppressed)
sentry-snuba-replays-consumer-7f4c979fcc-zr28t sentry-snuba %6|1726009822.760|FAIL|rdkafka#producer-1| [thrd:sentry-kafka-controller-2.sentry-kafka-controller-headless.serv]: sentry-kafka-controller-2.sentry-kafka-controller-headless.service.svc.cluster.local:9092/2: Disconnected (after 4350948ms in state UP)
sentry-snuba-replays-consumer-7f4c979fcc-zr28t sentry-snuba %6|1726009822.761|FAIL|rdkafka#consumer-2| [thrd:GroupCoordinator]: GroupCoordinator: sentry-kafka-controller-2.sentry-kafka-controller-headless.service.svc.cluster.local:9092: Disconnected (after 8155812ms in state UP)
sentry-snuba-replays-consumer-7f4c979fcc-zr28t sentry-snuba 2024-09-10 23:10:22,763 Error callback from librdKafka -195, _TRANSPORT, GroupCoordinator: sentry-kafka-controller-2.sentry-kafka-controller-headless.service.svc.cluster.local:9092: Disconnected (after 8155812ms in state UP)
sentry-snuba-replays-consumer-7f4c979fcc-zr28t sentry-snuba %3|1726009822.767|FAIL|rdkafka#consumer-2| [thrd:GroupCoordinator]: GroupCoordinator: sentry-kafka-controller-2.sentry-kafka-controller-headless.service.svc.cluster.local:9092: Connect to ipv4#10.245.128.30:9092 failed: Connection refused (after 6ms in state CONNECT)
sentry-snuba-replays-consumer-7f4c979fcc-zr28t sentry-snuba 2024-09-10 23:10:22,770 Error callback from librdKafka -195, _TRANSPORT, GroupCoordinator: sentry-kafka-controller-2.sentry-kafka-controller-headless.service.svc.cluster.local:9092: Connect to ipv4#10.245.128.30:9092 failed: Connection refused (after 6ms in state CONNECT)
sentry-snuba-replays-consumer-7f4c979fcc-zr28t sentry-snuba %3|1726009822.926|FAIL|rdkafka#consumer-2| [thrd:GroupCoordinator]: GroupCoordinator: sentry-kafka-controller-2.sentry-kafka-controller-headless.service.svc.cluster.local:9092: Connect to ipv4#10.245.128.30:9092 failed: Connection refused (after 0ms in state CONNECT, 1 identical error(s) suppressed)
sentry-snuba-replays-consumer-7f4c979fcc-zr28t sentry-snuba 2024-09-10 23:10:22,926 Error callback from librdKafka -195, _TRANSPORT, GroupCoordinator: sentry-kafka-controller-2.sentry-kafka-controller-headless.service.svc.cluster.local:9092: Connect to ipv4#10.245.128.30:9092 failed: Connection refused (after 0ms in state CONNECT, 1 identical error(s) suppressed)
sentry-snuba-replays-consumer-7f4c979fcc-zr28t sentry-snuba %6|1726012228.746|FAIL|rdkafka#producer-1| [thrd:sentry-kafka-controller-0.sentry-kafka-controller-headless.serv]: sentry-kafka-controller-0.sentry-kafka-controller-headless.service.svc.cluster.local:9092/0: Disconnected (after 2405815ms in state UP)
sentry-snuba-replays-consumer-7f4c979fcc-zr28t sentry-snuba %6|1726026073.533|FAIL|rdkafka#consumer-2| [thrd:sentry-kafka-controller-1.sentry-kafka-controller-headless.serv]: sentry-kafka-controller-1.sentry-kafka-controller-headless.service.svc.cluster.local:9092/1: Disconnected (after 20586896ms in state UP)
sentry-snuba-replays-consumer-7f4c979fcc-zr28t sentry-snuba 2024-09-11 03:41:13,534 Error callback from librdKafka -195, _TRANSPORT, sentry-kafka-controller-1.sentry-kafka-controller-headless.service.svc.cluster.local:9092/1: Disconnected (after 20586896ms in state UP)
sentry-snuba-replays-consumer-7f4c979fcc-zr28t sentry-snuba %3|1726026073.537|FAIL|rdkafka#consumer-2| [thrd:sentry-kafka-controller-1.sentry-kafka-controller-headless.serv]: sentry-kafka-controller-1.sentry-kafka-controller-headless.service.svc.cluster.local:9092/1: Connect to ipv4#10.245.128.106:9092 failed: Connection refused (after 3ms in state CONNECT)
sentry-snuba-replays-consumer-7f4c979fcc-zr28t sentry-snuba 2024-09-11 03:41:13,541 Error callback from librdKafka -195, _TRANSPORT, sentry-kafka-controller-1.sentry-kafka-controller-headless.service.svc.cluster.local:9092/1: Connect to ipv4#10.245.128.106:9092 failed: Connection refused (after 3ms in state CONNECT)
sentry-snuba-replays-consumer-7f4c979fcc-zr28t sentry-snuba %3|1726026073.783|FAIL|rdkafka#consumer-2| [thrd:sentry-kafka-controller-1.sentry-kafka-controller-headless.serv]: sentry-kafka-controller-1.sentry-kafka-controller-headless.service.svc.cluster.local:9092/1: Connect to ipv4#10.245.128.106:9092 failed: Connection refused (after 0ms in state CONNECT, 1 identical error(s) suppressed)
sentry-snuba-replays-consumer-7f4c979fcc-zr28t sentry-snuba 2024-09-11 03:41:13,783 Error callback from librdKafka -195, _TRANSPORT, sentry-kafka-controller-1.sentry-kafka-controller-headless.service.svc.cluster.local:9092/1: Connect to ipv4#10.245.128.106:9092 failed: Connection refused (after 0ms in state CONNECT, 1 identical error(s) suppressed)
sentry-snuba-replays-consumer-7f4c979fcc-zr28t sentry-snuba %3|1726026074.341|FAIL|rdkafka#producer-1| [thrd:sentry-kafka-controller-1.sentry-kafka-controller-headless.serv]: sentry-kafka-controller-1.sentry-kafka-controller-headless.service.svc.cluster.local:9092/1: Connect to ipv4#10.245.128.106:9092 failed: Connection refused (after 2ms in state CONNECT)

@OttoLi-De
Author

> I installed Sentry today and I have the exact same issue. [...]

Seems I am not alone :)

@patsevanton
Contributor

patsevanton commented Sep 11, 2024

I compared the configmaps between 23.12.1 and 25.3.0.
They are identical.
Could the bug be in older versions of Sentry?
You need to add activeDeadlineSeconds to values.yaml for the old version:

hooks:
  activeDeadlineSeconds: 600

@boindil

boindil commented Sep 11, 2024

> I compared the configmaps between 23.12.1 and 25.3.0. [...] You need to add activeDeadlineSeconds to values.yaml for the old version.

That is already present in my yaml; I am using chart version 25.5.0.

What I noticed when trying newer image versions with this chart:

  • 24.7.1 has problems with private CAs and tries to communicate via the public HTTPS URL
  • 24.8.0 is a bit different: Kafka now runs without ZooKeeper; the configuration changed quite a bit and it no longer works at all

@OttoLi-De
Author

@patsevanton That did not work. I deployed the change and got the same error.
It seems those Referrer Enum errors are the cause; to me this is either a bug, or requests are being routed to the wrong module.

@patsevanton
Contributor

@boindil does the chart work without errors up to version 24.7.1?
@OttoLi-De which version did you test on?

@boindil

boindil commented Sep 11, 2024

> @boindil does the chart work without errors up to version 24.7.1? @OttoLi-De which version did you test on?

Whilst the chart version is 25.5.0, the images are at version 24.5.1. The default combination does work fine, except for replays. These seem to be sent successfully by the client (HTTP 200 on all calls), but I cannot access them in the Sentry UI.

I tried with newer image versions up to 24.8.0 with results as described (worse).

I created a trial account on sentry.io and only replace the dsn in sentry initialization of my web-app. That works fine.

@OttoLi-De
Author

I am on chart version 25.3.0, and below are the image versions of the different modules (the defaults in the helm chart):

  1. sentry-web: getsentry/sentry:24.5.1
  2. sentry-relay: getsentry/relay:24.5.1
  3. sentry-ingest-replay-recordings: getsentry/sentry:24.5.1
  4. sentry-snuba-replays-consumer: getsentry/snuba:24.5.1
  5. sentry-snuba-api: getsentry/snuba:24.5.1
  6. sentry-worker: 24.5.1

@patsevanton
Contributor

patsevanton commented Sep 11, 2024

I am attempting to reproduce an error and have created a repository for this purpose:

https://github.com/patsevanton/sentry-react-replay

However, I am not receiving any replays. I am not a frontend developer, so I would greatly appreciate it if you could provide a minimal reproducible code example for creating and sending a replay to Sentry.

@boindil

boindil commented Sep 11, 2024

I am attempting to reproduce an error and have created a repository for this purpose:

https://github.com/patsevanton/sentry-react-replay

However, I am not receiving any replays. I am not a frontend developer, so I would greatly appreciate it if you could provide a minimal reproducible code example for creating and sending a replay to Sentry.

Looks good to me at first glance. The DSN has to be replaced in App.js - it is provided when you create a new project and select, in your case, React.

I'll clone your repo tomorrow and give it a shot.
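
For reference, a minimal replay setup in App.js looks roughly like this (an untested sketch; the exact integration API depends on the @sentry/react version - v8 uses Sentry.replayIntegration(), while v7 uses new Sentry.Replay()):

import * as Sentry from "@sentry/react";

Sentry.init({
  dsn: "YOUR_SENTRY_DSN", // placeholder: the DSN from your project settings
  integrations: [Sentry.replayIntegration()],
  // sample every session so the test replay is guaranteed to be recorded
  replaysSessionSampleRate: 1.0,
  replaysOnErrorSampleRate: 1.0,
});

// Any uncaught error thrown after init should create an issue with an
// attached replay once the recording segments are flushed.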

@boindil

boindil commented Sep 12, 2024

@patsevanton I created a PR on your repo. The only thing you have to do before starting the Docker container is enter your own DSN. Otherwise it works (it shows a replay in Sentry, but "doesn't find it", with the error we showed).

@boindil

boindil commented Sep 12, 2024

Additional info: I noticed #1386 has a similar description. Even though the old ClickHouse version was what worked there, I upgraded to altinity/clickhouse-server:23.3.19.33.altinitystable (since that one is used in the "original Docker variant") via altinity/clickhouse-server:22.8.15.25.altinitystable, as seen here: https://github.com/getsentry/self-hosted/blob/master/install/upgrade-clickhouse.sh

Not working either :(
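
For anyone trying the same through the chart: the ClickHouse image can be pinned in values.yaml, roughly like this (an assumption: key names are from the bundled clickhouse subchart and may differ between chart versions, so check the chart's values.yaml):

clickhouse:
  clickhouse:
    image: "altinity/clickhouse-server"
    imageVersion: "23.3.19.33.altinitystable"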

@patsevanton
Contributor

Install https://github.com/sentry-kubernetes/charts/releases/tag/sentry-v25.5.0

helm install sentry -n test   sentry/sentry -f values-sentry-local.yaml --wait --timeout=1000s

The replay appears in the list:
(screenshot)

Clicking 10.130.0.3 shows "Replay Not Found":
(screenshot)

I reproduced the error.

@patsevanton
Contributor

sentry-web-769b9cd6c5-45bcg sentry-web Traceback (most recent call last):
sentry-web-769b9cd6c5-45bcg sentry-web     raise Exception(error_message)
sentry-web-769b9cd6c5-45bcg sentry-web Exception: referrer replays.query.query_selector_index_count is not part of Referrer Enum
sentry-web-769b9cd6c5-45bcg sentry-web Traceback (most recent call last):
sentry-web-769b9cd6c5-45bcg sentry-web     raise Exception(error_message)
sentry-web-769b9cd6c5-45bcg sentry-web Exception: referrer replays.query.query_selector_index_count is not part of Referrer Enum
sentry-web-769b9cd6c5-45bcg sentry-web Traceback (most recent call last):
sentry-web-769b9cd6c5-45bcg sentry-web     raise Exception(error_message)
sentry-web-769b9cd6c5-45bcg sentry-web Exception: referrer replays.query.browse_scalar_conditions_subquery is not part of Referrer Enum
sentry-web-769b9cd6c5-45bcg sentry-web Traceback (most recent call last):
sentry-web-769b9cd6c5-45bcg sentry-web     raise Exception(error_message)
sentry-web-769b9cd6c5-45bcg sentry-web Exception: referrer replays.query.query_selector_index is not part of Referrer Enum
sentry-web-769b9cd6c5-45bcg sentry-web Traceback (most recent call last):
sentry-web-769b9cd6c5-45bcg sentry-web     raise Exception(error_message)
sentry-web-769b9cd6c5-45bcg sentry-web Exception: referrer replays.query.query_selector_index is not part of Referrer Enum
sentry-web-769b9cd6c5-45bcg sentry-web Traceback (most recent call last):
sentry-web-769b9cd6c5-45bcg sentry-web     raise Exception(error_message)
sentry-web-769b9cd6c5-45bcg sentry-web Exception: referrer replays.query.browse_query is not part of Referrer Enum
sentry-web-769b9cd6c5-45bcg sentry-web Traceback (most recent call last):
sentry-web-769b9cd6c5-45bcg sentry-web     raise Exception(error_message)
sentry-web-769b9cd6c5-45bcg sentry-web Exception: referrer replays.query.viewed_by_query is not part of Referrer Enum

@boindil

boindil commented Sep 12, 2024

This seems rather strange to me. I would suppose sentry.io runs the same source code as well, no? It's working there. I have not yet tested the Docker variant, since my laptop doesn't have enough RAM - I will try on my desktop later.

The only reference to replays.query.query_selector_index_count that I can find, however, is here: https://github.com/search?q=repo%3Agetsentry%2Fsentry+replays.query.query_selector_index_count&type=code

@patsevanton
Contributor

The error was reproduced on version 23.5.2.

@patsevanton
Contributor

patsevanton commented Sep 13, 2024

The error was not reproduced on the latest version in Docker Compose.

Create a virtual machine on the internet:

$ sudo su -
$ apt update
$ apt install nginx git curl vim

Get the external IP address from the Linux command line:

$ curl icanhazip.com

84.252.139.103 # my public IP for the virtual machine

$ sudo vim /etc/nginx/sites-available/default

server {
    listen 80 default_server;
    server_name 84.252.139.103.sslip.io;
    location / {
        proxy_pass http://localhost:9000;
        client_max_body_size 200M;
    }
    error_log /var/log/nginx/error.log error;
    proxy_set_header Host $host;
}

$ sudo service nginx restart
$ curl -fsSL https://get.docker.com -o get-docker.sh
$ sh get-docker.sh
$ git clone https://github.com/getsentry/self-hosted.git
$ cd self-hosted/
Change .env:
$ cat .env | grep 24

...
SENTRY_IMAGE=getsentry/sentry:24.5.1
SNUBA_IMAGE=getsentry/snuba:24.5.1
RELAY_IMAGE=getsentry/relay:24.5.1
SYMBOLICATOR_IMAGE=getsentry/symbolicator:24.5.1
VROOM_IMAGE=getsentry/vroom:24.5.1
...

$ ./install.sh

$ cd sentry
$ nano config.yml

system.url-prefix: http://84.252.139.103.sslip.io

$ cd ..
$ docker compose up -d
$ docker compose logs -f

$ git clone https://github.com/patsevanton/sentry-react-replay.git
Change YOUR_SENTRY_DSN in src/App.js

Build a Docker image:

$ docker build -t sentry-example .

Launch the container:
$ docker run -p 80:80 sentry-example

In the browser, open http://localhost

Click on Throw error

@boindil

boindil commented Sep 13, 2024

Well, thanks for trying with Docker; I have not yet gotten back to my desktop PC ....

Could you check out the same Sentry version (24.5.1)? getsentry/sentry@0e42a91

That way we would know for sure whether it has something to do with the Sentry version or with the Kubernetes chart here having some kind of problem.

@patsevanton
Contributor

I will check Sentry version 24.5.1 on docker-compose.

@OttoLi-De
Author

@patsevanton
Which version did you test in docker-compose? Thanks

@patsevanton
Contributor

patsevanton commented Sep 13, 2024

In progress...

Following the instructions above:

$ cd self-hosted/
Change .env:
$ cat .env | grep 24

...
SENTRY_IMAGE=getsentry/sentry:24.5.1
SNUBA_IMAGE=getsentry/snuba:24.5.1
RELAY_IMAGE=getsentry/relay:24.5.1
SYMBOLICATOR_IMAGE=getsentry/symbolicator:24.5.1
VROOM_IMAGE=getsentry/vroom:24.5.1
...

$ ./install.sh

I get an error on version 24.5.1 with the latest self-hosted commit:

Try 'sentry upgrade --help' for help.

Error: No such option: --create-kafka-topics
Error in install/set-up-and-migrate-database.sh:36.
'$dcr web upgrade --create-kafka-topics' exited with status 2
-> ./install.sh:main:37
--> install/set-up-and-migrate-database.sh:source:36

install/error-handling.sh: line 82: /usr/bin/docker: Argument list too long

Try

git clone --depth 1 --branch 24.4.1 https://github.com/getsentry/self-hosted.git

@patsevanton
Contributor

patsevanton commented Sep 13, 2024

In progress...
Following the instructions above:

git clone --depth 1 --branch 24.4.1 https://github.com/getsentry/self-hosted.git

24.5.1 is not tagged - maybe just do git checkout 0e42a91dd908bcd0e42cc4cfe6097516d3d9bcc9 which is release 24.5.1

reference is not a tree:

git checkout 0e42a91dd908bcd0e42cc4cfe6097516d3d9bcc9
fatal: reference is not a tree: 0e42a91dd908bcd0e42cc4cfe6097516d3d9bcc9

Maybe ddd9cf25a4e501f00a76186394698f3eeaef2541:

git checkout ddd9cf25a4e501f00a76186394698f3eeaef2541

@patsevanton
Contributor

The error was not reproduced on the 24.5.1 version in Docker Compose.

Create a virtual machine on the internet:

$ sudo su -
$ apt update
$ apt install nginx git curl vim

Get the external IP address from the Linux command line:

$ curl icanhazip.com

89.169.169.223 # my public IP for the virtual machine

$ sudo vim /etc/nginx/sites-available/default

server {
    listen 80 default_server;
    server_name 89.169.169.223.sslip.io;
    location / {
        proxy_pass http://localhost:9000;
        client_max_body_size 200M;
    }
    error_log /var/log/nginx/error.log error;
    proxy_set_header Host $host;
}

$ sudo service nginx restart
$ curl -fsSL https://get.docker.com -o get-docker.sh
$ sh get-docker.sh
$ git clone https://github.com/getsentry/self-hosted.git
$ cd self-hosted/
$ git checkout ddd9cf25a4e501f00a76186394698f3eeaef2541
$ ./install.sh

$ cd sentry
$ nano config.yml

system.url-prefix: http://89.169.169.223.sslip.io

$ cd ..
$ docker compose up -d
$ docker compose logs -f

$ git clone https://github.com/patsevanton/sentry-react-replay.git
Change YOUR_SENTRY_DSN in src/App.js

Build a Docker image:

$ docker build -t sentry-example .

Launch the container:
$ docker run -p 80:80 sentry-example

In the browser, open http://localhost

Click on Throw error

(screenshot)

@OttoLi-De
Author

@patsevanton do you know which helm chart version works?

The current chart does not work with 24.5.1.

@boindil

boindil commented Sep 16, 2024

Welp ... then it definitely seems there is an issue with the Kubernetes chart here :/

@patsevanton @Mokto any ideas on what could cause this behaviour?

@patsevanton
Contributor

@patsevanton do you know which helm chart version works?

The current chart does not work with 24.5.1.

I don't know.

@OttoLi-De
Author

@patsevanton
boindil is right, the chart is not working. How can we escalate the issue, or do you know anyone we can seek help from?

@patsevanton
Contributor

patsevanton commented Sep 16, 2024

Unfortunately, I do not know who to contact. I'm waiting for a new release #1462 for more advanced logging for sentry-web. PR: #1459

@patsevanton
Contributor

I don't see any additional errors in the extended logs of the new release 25.8.0:

sentry:
  web:
    logLevel: "DEBUG"

@boindil

boindil commented Sep 17, 2024

What now? Can we be of any help?

@patsevanton
Contributor

I don't have any ideas.

@OttoLi-De
Author

@patsevanton
I found a new helm chart version, 25.8.1, published 2 days ago, with the app version bumped to 24.7.1. I tried to upgrade, but that didn't work.
It feels like the helm chart deployment does not support the replay functionality.

@patsevanton
Contributor

@OttoLi-De you're right. But I can't find a reason why the helm chart doesn't support replays.

@boindil

boindil commented Sep 19, 2024

The new helm chart actually made it worse for me - I use Sentry in a local cluster with its own CA, and the current version doesn't seem to cooperate with that ...

getsentry/self-hosted#2950 (happening in the web container as well)

@patsevanton
Contributor

Since which version has self-signed CA support stopped working?

@biclighter81

Hi, we have the same issue with Kubernetes helm chart version 25.5.1 on our Tanzu k8s cluster.

These are our value overrides:

sentry:
  user:
    create: true
    email: myemail
    existingSecret: "sentry-admin"
    existingSecretKey: "password"
  ingress:
    enabled: true
    regexPathStyle: traefik
    hostname: myhost

And this is the exception which occurs in the sentry-web pod:
Exception: referrer replays.query.details_query is not part of Referrer Enum

If we find a workaround, I will comment it below.

@jeromeji

I encountered the same issue. If you make any progress, please reply to me. Thank you.

@boindil

boindil commented Sep 23, 2024

Since which version has self-signed CA support stopped working?

25.6.0

@patsevanton
Contributor

Can you make a separate issue to discuss the broken self-signed CA support?

@biclighter81

So I have dug a little deeper. I've updated our helm chart to the latest version, 25.9.0. After the update I no longer get the error message Exception: referrer replays.query.details_query is not part of Referrer Enum, but now I'm getting [WARNING] sentry.snuba.referrer: referrer replays.query.browse_query is not part of Referrer Enum and replays.query.browse_scalar_conditions_subquery. I think it has something to do with version compatibility between all those different deployments.

@biclighter81

Hi, as mentioned here:
getsentry/sentry#71570 (comment)

the Referrer validation errors are "just an annoying log"; the main problem lies in the API request for the replay recording segments: /api/0/projects/<<company>>/<<project>>/replays/338b32dcd2964d95bfcff4038d13589a/recording-segments/?cursor=0%3A0%3A0&download=true&per_page=100. In my case I have 8 segments, and the sentry-web pod logs 8 times: [WARNING] root: Storage GET error.

I think this error comes from the filestore configuration, which is defined in values.yaml like this:

filestore:
  # Set to one of filesystem, gcs or s3 as supported by Sentry.
  backend: filesystem

  filesystem:
    path: /var/lib/sentry/files

The configured folder is empty in my pod. I don't yet understand what should be saved there, because the replays are stored in ClickHouse as far as I understand. So why does it even try to read files from there when I'm loading the segments? I will investigate further...
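
(As far as I can tell - my reading, not something confirmed in this thread - ClickHouse only stores the replay index/metadata, while the recording segments themselves go to the filestore, which would explain why an empty directory produces the Storage GET errors.) A quick check is to compare that directory across pods - a sketch, assuming the default deployment names from this chart:

kubectl exec deploy/sentry-web -- ls -R /var/lib/sentry/files
kubectl exec deploy/sentry-ingest-replay-recordings -- ls -R /var/lib/sentry/files

If the consumer writes segments to a volume the web pod cannot read (e.g. two separate ReadWriteOnce PVCs), the directory on the web side stays empty.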

Have a nice day

@biclighter81

Good news, guys. I finally got it working (sort of). The "referrer is not part of Referrer Enum" errors are indeed just warnings and do not affect replay storage. It's the [WARNING] root: Storage GET error log that actually describes the problem. For some reason the default configuration, which uses a PVC for storing replay segments, does not work. I switched the filestore configuration to S3 and now it's finally working. This is my values.yaml configuration:

user:
  create: true
  email: myemail
  existingSecret: "sentry-admin"
  existingSecretKey: "password"
system:
  url: myurl
ingress:
  enabled: true
  regexPathStyle: traefik
  hostname: myhostname
kafka:
  controller:
    replicaCount: 1
    resourcesPreset: "medium"
filestore:
  backend: s3
  s3:
    existingSecret: mys3secret
    accessKeyIdRef: accesskey
    secretAccessKeyRef: secretkey
    bucketName: sentry-filestore
    region_name: eu-west
    endpointUrl:  http://minio.ns.svc.cluster.local:9000/

postgresql:
  global:
    postgresql:
      auth:
        existingSecret: "sentry-sentry-postgresql"
        secretKeys:
          adminPasswordKey: "admin-password"
          userPasswordKey: "user-password"
          replicationPasswordKey: "replication-password"

For our company, storing the replays in S3 is much better than on a PVC, so I don't know if I will investigate the root problem with the default filesystem backend. But if someone doesn't want to use S3 as backend storage, he or she knows where to look. I hope I was able to help someone.
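
One caveat worth adding (my assumption from working with MinIO, not something reported in this thread): the bucket has to exist before Sentry writes to it, e.g. with the mc client:

mc alias set myminio http://minio.ns.svc.cluster.local:9000 <accesskey> <secretkey>
mc mb myminio/sentry-filestore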

Have a nice day,
Moritz

@OttoLi-De
Author

OttoLi-De commented Oct 18, 2024

@biclighter81 thank you for your suggestion. I worked out the solution for our case as well.

The helm deployment method separates Sentry into many services, and replay requires several of them to point to the same file storage. To do that, we either need to use S3 or GCS, or, when using the filesystem backend, the persistent volume access mode needs to be set to ReadWriteMany.

I deploy Sentry on Azure Kubernetes Service, and to enable ReadWriteMany I need to attach Azure Files as the storage.
Here is the documentation I followed to create the PV and PVC with Azure Files.

Finally, I put the PVC name in existingClaim, and it works.
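
For illustration, a PVC like the one referenced in existingClaim might look roughly like this (a sketch; the storage class depends on your cluster - azurefile-csi is the AKS built-in one that supports ReadWriteMany):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: sentry-data
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: azurefile-csi
  resources:
    requests:
      storage: 100Gi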

Below are the values I set

filestore:
  backend: filesystem

  filesystem:
    path: /var/lib/sentry/files
    persistence:
      enabled: true
      storageClass: ""
      accessMode: ReadWriteMany
      size: 100Gi
      persistentWorkers: true
      existingClaim: "sentry-data"

cc: @boindil @jeromeji

@patsevanton I think it would be helpful to add this to the documentation/comments/README.

@patsevanton
Contributor

I'm planning to look at this topic after the open pull request is merged and the ClickHouse update lands. Please tag me next week.

@danielehrhardt

@patsevanton

@danielehrhardt

@patsevanton same problem here

@patsevanton
Contributor

patsevanton commented Nov 5, 2024

@danielehrhardt try

filestore:
  backend: filesystem

  filesystem:
    path: /var/lib/sentry/files
    persistence:
      enabled: true
      storageClass: ""
      accessMode: ReadWriteMany
      size: 100Gi
      persistentWorkers: true
      existingClaim: "sentry-data"

@patsevanton
Contributor

@danielehrhardt did the config example help?

@rwojsznis

Hey folks, stumbled on this by accident :) Aside from the S3 suggestions mentioned above - in case you're still running into issues, please do check whether the replay request in your browser dies after ~5s.

Relay's HTTP timeout is 5s by default, which might not be enough to load the session back to the client (browser), so you might want to set something like this in your helm values:

config: # config is top-level key
  http:
    timeout: 30
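
To check whether you are hitting that timeout, timing the segments request from the CLI can help - a sketch with placeholder host, org, project, replay ID, and auth token:

curl -s -o /dev/null -w "%{http_code} %{time_total}s\n" \
  -H "Authorization: Bearer <your-auth-token>" \
  "https://sentry.example.com/api/0/projects/<org>/<project>/replays/<replay-id>/recording-segments/?download=true&per_page=100"

If it consistently dies at ~5 seconds, the timeout is the likely culprit.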

hope that will help someone 🤞

@danielehrhardt

@danielehrhardt did the config example help?

Where do I edit this config?
