Container plex failed liveness probe, will be restarted #110
I'm using a TCP probe on port 32400. Although this works, I'm looking for something more precise for startup, liveness & readiness.
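For reference, a minimal sketch of what the three probes could look like in the pod spec, using a plain TCP check for startup and HTTP checks against Plex's unauthenticated /identity endpoint (mentioned below in this thread) for readiness and liveness. The port is the standard 32400, but the timings are assumptions to tune for your deployment:

```yaml
# Sketch only: the timings are illustrative, not the chart's defaults.
startupProbe:
  tcpSocket:
    port: 32400
  periodSeconds: 5
  failureThreshold: 30        # allow up to ~150s for the first start
readinessProbe:
  httpGet:
    path: /identity
    port: 32400
  periodSeconds: 10
livenessProbe:
  httpGet:
    path: /identity
    port: 32400
  initialDelaySeconds: 60
  periodSeconds: 20
```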
I'm also facing the same problem. From the pod's events, there's a message: Readiness probe failed: Get "http://10.1.221.248:32400/identity": dial tcp 10.1.221.248:32400: connect: connection refused. 10.1.221.248 is the pod's cluster IP. However, when I try calling the /identity URI through the LoadBalancer IP (http://<LoadBalancer IP>:32400/identity), I get a response with the machine identifier. Why, then, does the pod fail to read the identity through its own cluster IP? Is there a way to fix this?
Update: I exec'd into the pod's shell and ran the following curl commands. As it turns out, the pod has no problem accessing the /identity URI, so what is producing the error event? I left the whoami and the first curl here in case they mean something.

whoami
root
curl http://10.1.221.248:32400
<script>window.location = window.location.href.match(/(^.+\/)[^\/]*$/)[1] + 'web/index.html';</script><title>Unauthorized</title>401 Unauthorized
curl http://10.1.221.248:32400/identity
I faced the same problem. It went better after I scaled up to a replica count of 2.
@ianhundere: True, it corrupts the database if you run Plex on multiple nodes. I just run two instances on the same node; this works.
Same issue here. My workaround is to do the health check with curl inside the container.
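A minimal sketch of that workaround as an exec probe, assuming curl is present in the Plex image and that /identity answers without authentication (as the earlier comments suggest); the timings are placeholders:

```yaml
# Sketch only: liveness check run inside the container instead of via the
# kubelet's network probe. Assumes curl exists in the image.
livenessProbe:
  exec:
    command:
      - sh
      - -c
      - curl -sf http://localhost:32400/identity > /dev/null
  initialDelaySeconds: 60
  periodSeconds: 20
```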
Is there anyone who can help me with the following problem? This is my first time working with Kubernetes, and I have run into the problem that my pod keeps restarting every time. In error.txt I attached a copy of kubectl describe pod. If someone can help me with this, that would be great.
error.txt
Is it true that under service/plex-kube-plex there is no EXTERNAL-IP?
kubectl get all -n plex

NAME                                  READY   STATUS             RESTARTS   AGE
pod/plex-kube-plex-589f45bb6c-42pqg   0/1     CrashLoopBackOff   7          14m

NAME                     TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                    AGE
service/plex-kube-plex   ClusterIP   10.100.101.119   <none>        32400/TCP,80/TCP,443/TCP   14m

NAME                             READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/plex-kube-plex   0/1     1            0           14m

NAME                                        DESIRED   CURRENT   READY   AGE
replicaset.apps/plex-kube-plex-589f45bb6c   1         1         0       14m
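As a side note, a ClusterIP service never shows an EXTERNAL-IP, so that part of the output is expected and is separate from the CrashLoopBackOff, which usually points back at the probes discussed above. If an external IP is wanted, a rough sketch of an additional LoadBalancer service follows; the name, namespace, and selector labels here are placeholders, not the chart's actual values:

```yaml
# Sketch only: match the selector to the labels kubectl shows on your Plex pod
# before applying; the name and namespace are hypothetical.
apiVersion: v1
kind: Service
metadata:
  name: plex-external
  namespace: plex
spec:
  type: LoadBalancer
  selector:
    app: plex-kube-plex
  ports:
    - name: pms
      port: 32400
      targetPort: 32400
      protocol: TCP
```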