
snipe-it doesn't work out of box #127

Closed
bentatham opened this issue Jan 5, 2021 · 9 comments

@bentatham

When doing a fresh installation of snipe-it, the readinessProbe fails.

I worked around this by doing some PHP permission magic to make the web page work at all (https://stackoverflow.com/questions/55448836/from-laravel-i-got-failed-to-open-stream-permission-denied).

After that, snipe-it at least starts, but then runs into other MySQL issues.
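
For reference, the fix from that Stack Overflow thread boils down to giving the web server user write access to Laravel's storage and bootstrap/cache directories. A minimal sketch of doing the same inside the running pod (the pod name is a placeholder, and the www-data user and paths are assumptions based on the stock snipe/snipe-it image):

# exec into the snipe-it pod and fix ownership of Laravel's writable directories
kubectl exec -it <snipeit-pod> -- sh -c \
  'chown -R www-data:www-data /var/www/html/storage /var/www/html/bootstrap/cache && \
   chmod -R 775 /var/www/html/storage /var/www/html/bootstrap/cache'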

@mschmidt291
Contributor

@bentatham

Is your artisan key correctly generated, as we describe in our README?
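
For context, the key snipe-it expects is a standard Laravel APP_KEY: the literal prefix base64: followed by 32 random bytes, base64-encoded (the same format php artisan key:generate --show prints). A quick way to produce one, assuming openssl is available locally:

# generate a Laravel-compatible APP_KEY
echo "base64:$(openssl rand -base64 32)"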

@bentatham
Author

Yes, I am passing a generated key to helm via a values file:

...
config:
  snipeit:
    key: "base64:<REDACTED>="
...

@mschmidt291
Contributor

@bentatham I just tested it in Minikube and it works for me. Can you please try the new chart version?
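
For anyone following along, pulling in the newest chart release looks roughly like this (the t3n repo alias, the snipe-it release name, and values.yaml are examples; the repository URL matches the one used elsewhere in this issue):

helm repo add t3n https://storage.googleapis.com/t3n-helm-charts
helm repo update
helm upgrade --install snipe-it t3n/snipeit -f values.yaml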

@darkpixel

darkpixel commented Jan 21, 2021

3.1.0 is failing to run out of the box for me too.

The readiness probe is failing and the container is spitting out the following:

AH00558: apache2: Could not reliably determine the server's fully qualified domain name, using 10.244.0.66. Set the 'ServerName' directive globally to suppress this message
2021-01-21 01:20:11,636 INFO success: exit_on_any_fatal entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2021-01-21 01:20:11,636 INFO success: apache entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2021-01-21 01:20:11,636 INFO success: run_schedule entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
No scheduled commands are ready to run.
2021-01-21 01:20:46,787 WARN received SIGTERM indicating exit request
2021-01-21 01:20:46,787 INFO waiting for apache, exit_on_any_fatal, run_schedule to die
2021-01-21 01:20:46,787 INFO stopped: run_schedule (terminated by SIGTERM)
2021-01-21 01:20:46,788 INFO stopped: apache (terminated by SIGTERM)
2021-01-21 01:20:46,797 INFO stopped: exit_on_any_fatal (terminated by SIGTERM)
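
(Side note: the AH00558 line is only a warning about a missing ServerName directive and is not what kills the container; the SIGTERM at the end is presumably the kubelet restarting the pod after the failed probes. Silencing the warning anyway would look something like this, assuming the Debian-default Apache config path used inside the image:)

# silence AH00558 by setting a global ServerName, then validate the config
kubectl exec -it <snipeit-pod> -- sh -c \
  'echo "ServerName localhost" >> /etc/apache2/apache2.conf && apachectl configtest'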

@mschmidt291
Contributor

@darkpixel Are you using MySQL 5.7.32 from our repository (i.e. the default MySQL the snipeit chart deploys when you don't specify a different MySQL service)?

@darkpixel

I'm using a managed MySQL instance at DigitalOcean.

mysql> select @@version;
+-----------+
| @@version |
+-----------+
| 8.0.19    |
+-----------+
1 row in set (0.03 sec)

mysql> 

@Puneeth-n

Hi, I am having the same issue.

resource "helm_release" "snipe_it" {
  name         = local.snipe_it.name
  repository   = "https://storage.googleapis.com/t3n-helm-charts"
  chart        = "snipeit"
  version      = "3.1.0"
  namespace    = local.snipe_it.namespace
  lint         = false
  reset_values = true

  values = [
    yamlencode({
      image = {
        tag = "v5.0.12"
      }

      config = {
        snipeit = {
          key   = data.aws_ssm_parameter.snipe_it_key.value
          debug = true
        }
      }
      ingress = {
        enabled = true
        annotations = {
          "kubernetes.io/ingress.class"               = "alb"
          "alb.ingress.kubernetes.io/scheme"          = "internet-facing"
          "alb.ingress.kubernetes.io/listen-ports"    = jsonencode([{ HTTPS = 443 }])
          "alb.ingress.kubernetes.io/group.name"      = "infra-external"
          "alb.ingress.kubernetes.io/certificate-arn" = aws_acm_certificate.subdomain.arn
          "alb.ingress.kubernetes.io/target-type"     = "ip"
          "alb.ingress.kubernetes.io/success-codes"   = "200"
        }
        path = "/*"
        hosts = [
          local.snipe_it.fqdn
        ]
      }
    })
  ]
}
➜  infra git:(feature/snipe-it) ✗ kubectl -n nits logs -f pod/snipe-it-snipeit-f4cbc768c-kgjl4
Module ssl disabled.
To activate the new configuration, you need to run:
  service apache2 restart
2021-02-02 11:46:19,997 CRIT Supervisor running as root (no user in config file)
2021-02-02 11:46:20,004 INFO supervisord started with pid 1
2021-02-02 11:46:21,006 INFO spawned: 'exit_on_any_fatal' with pid 20
2021-02-02 11:46:21,008 INFO spawned: 'apache' with pid 21
2021-02-02 11:46:21,010 INFO spawned: 'run_schedule' with pid 22
AH00558: apache2: Could not reliably determine the server's fully qualified domain name, using 10.31.0.108. Set the 'ServerName' directive globally to suppress this message
No scheduled commands are ready to run.
2021-02-02 11:46:22,399 INFO success: exit_on_any_fatal entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2021-02-02 11:46:22,399 INFO success: apache entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2021-02-02 11:46:22,399 INFO success: run_schedule entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2021-02-02 11:47:03,178 WARN received SIGTERM indicating exit request
2021-02-02 11:47:03,179 INFO waiting for apache, exit_on_any_fatal, run_schedule to die
2021-02-02 11:47:04,179 INFO stopped: run_schedule (terminated by SIGTERM)
2021-02-02 11:47:04,194 INFO stopped: apache (terminated by SIGTERM)
2021-02-02 11:47:04,196 INFO stopped: exit_on_any_fatal (terminated by SIGTERM)
➜  infra git:(feature/snipe-it) ✗ kubectl -n nits describe pod/snipe-it-snipeit-f4cbc768c-kgjl4
Name:         snipe-it-snipeit-f4cbc768c-kgjl4
Namespace:    nits
Priority:     0
Node:         ip-10-31-0-113.eu-west-1.compute.internal/10.31.0.113
Start Time:   Tue, 02 Feb 2021 17:10:14 +0530
Labels:       app.kubernetes.io/instance=snipe-it
              app.kubernetes.io/name=snipeit
              pod-template-hash=f4cbc768c
Annotations:  checksum/secret: 780e943861a7e823ec84fb5168224c3cf5a02893f2db111b186bc9fa248bc0af
              kubernetes.io/psp: eks.privileged
Status:       Running
IP:           10.31.0.108
IPs:
  IP:           10.31.0.108
Controlled By:  ReplicaSet/snipe-it-snipeit-f4cbc768c
Init Containers:
  config-data:
    Container ID:  docker://77121126c0d78cfe0d03c92a98fa4bb9d201887ef8cf9dfbb25436fb82c2dc6d
    Image:         busybox
    Image ID:      docker-pullable://busybox@sha256:e1488cb900233d035575f0a7787448cb1fa93bed0ccc0d4efc1963d7d72a8f17
    Port:          <none>
    Host Port:     <none>
    Command:
      sh
      -c
      find
      /var/www/html/storage/framework/sessions
      -not
      -user
      1000
      -exec
      chown 1000 {} \+
    State:          Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Tue, 02 Feb 2021 17:10:19 +0530
      Finished:     Tue, 02 Feb 2021 17:10:21 +0530
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-ccc9s (ro)
      /var/www/html/storage/framework/sessions from data (rw,path="sessions")
Containers:
  snipe-it-snipeit:
    Container ID:   docker://e765bc755c0a7735b153f4148d02018a5bf8ba5bb204918e4155d75aa85ea5e7
    Image:          snipe/snipe-it:v5.0.12
    Image ID:       docker-pullable://snipe/snipe-it@sha256:40a32ff12444c686221306494d7d4e408880b9a669ef0f49167c5b7ff6b6f2b2
    Port:           80/TCP
    Host Port:      0/TCP
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Tue, 02 Feb 2021 17:16:19 +0530
      Finished:     Tue, 02 Feb 2021 17:17:04 +0530
    Ready:          False
    Restart Count:  6
    Liveness:       http-get http://:80/login delay=0s timeout=3s period=15s #success=1 #failure=3
    Readiness:      http-get http://:80/login delay=0s timeout=3s period=15s #success=1 #failure=3
    Environment Variables from:
      snipe-it-snipeit  Secret  Optional: false
    Environment:
      APP_ENV:       production
      APP_DEBUG:     true
      APP_URL:       http://example.local
      APP_TIMEZONE:  Europe/Berlin
      APP_LOCALE:    en
    Mounts:
      /var/lib/snipeit from data (rw,path="www")
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-ccc9s (ro)
      /var/www/html/storage/framework/sessions from data (rw,path="sessions")
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  data:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  snipe-it-snipeit
    ReadOnly:   false
  default-token-ccc9s:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-ccc9s
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason     Age                    From                                                Message
  ----     ------     ----                   ----                                                -------
  Normal   Scheduled  8m                     default-scheduler                                   Successfully assigned nits/snipe-it-snipeit-f4cbc768c-kgjl4 to ip-10-31-0-113.eu-west-1.compute.internal
  Normal   Pulling    7m56s                  kubelet, ip-10-31-0-113.eu-west-1.compute.internal  Pulling image "busybox"
  Normal   Pulled     7m55s                  kubelet, ip-10-31-0-113.eu-west-1.compute.internal  Successfully pulled image "busybox"
  Normal   Created    7m55s                  kubelet, ip-10-31-0-113.eu-west-1.compute.internal  Created container config-data
  Normal   Started    7m54s                  kubelet, ip-10-31-0-113.eu-west-1.compute.internal  Started container config-data
  Warning  Unhealthy  7m10s                  kubelet, ip-10-31-0-113.eu-west-1.compute.internal  Readiness probe failed: Get http://10.31.0.108:80/login: dial tcp 10.31.0.108:80: connect: connection refused
  Normal   Started    7m10s (x2 over 7m52s)  kubelet, ip-10-31-0-113.eu-west-1.compute.internal  Started container snipe-it-snipeit
  Warning  Unhealthy  6m40s (x4 over 7m40s)  kubelet, ip-10-31-0-113.eu-west-1.compute.internal  Readiness probe failed: HTTP probe failed with statuscode: 500
  Normal   Killing    6m26s (x2 over 7m11s)  kubelet, ip-10-31-0-113.eu-west-1.compute.internal  Container snipe-it-snipeit failed liveness probe, will be restarted
  Warning  Unhealthy  6m26s (x6 over 7m41s)  kubelet, ip-10-31-0-113.eu-west-1.compute.internal  Liveness probe failed: HTTP probe failed with statuscode: 500
  Normal   Created    6m25s (x3 over 7m52s)  kubelet, ip-10-31-0-113.eu-west-1.compute.internal  Created container snipe-it-snipeit
  Normal   Pulled     6m25s (x3 over 7m52s)  kubelet, ip-10-31-0-113.eu-west-1.compute.internal  Container image "snipe/snipe-it:v5.0.12" already present on machine
  Warning  BackOff    2m48s (x5 over 3m26s)  kubelet, ip-10-31-0-113.eu-west-1.compute.internal  Back-off restarting failed container
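
Most of the probe failures above are HTTP 500s from /login rather than connection refusals, so the underlying exception should show up in the Laravel application log inside the container; a sketch for pulling it out (the laravel.log filename is the Laravel default and an assumption about this image):

# tail the Laravel application log in the crashing pod
kubectl -n nits exec -it snipe-it-snipeit-f4cbc768c-kgjl4 -- \
  tail -n 50 /var/www/html/storage/logs/laravel.log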

@Puneeth-n

I downgraded to chart version 2.4.0 and it works out of the box, both with the companion MySQL chart and AWS RDS MySQL 5.7.

@mschmidt291
Contributor

This issue should be fixed with #151
