Kube-plex pod stuck in ContainerCreating status #103

Open
kriefsacha opened this issue May 4, 2020 · 15 comments

@kriefsacha

kriefsacha commented May 4, 2020

Hi,

I'm quite new to this, so sorry if I've misunderstood something.

I'm on Windows with an Ubuntu 20.04 VM.

Everything went fine until I ran

helm install plex ./charts/kube-plex \
    --namespace plex \
    --set claimToken=[insert claim token here] \
    --set persistence.data.claimName=existing-pms-data-pvc \
    --set ingress.enabled=true

which succeeded, but when I run

kubectl get po -n plex

I see the pod, but it has been stuck in Pending status for 15 hours.

NAME                              READY   STATUS    RESTARTS   AGE
plex-kube-plex-78cfbcd7df-sfz6k   0/1     Pending   0          15h

I don't see what I'm missing, so if someone can help, please do.

Thank you !!

@kriefsacha

kriefsacha commented May 4, 2020

I have more info. I tried it in another namespace, same thing. I ran the command:

kubectl describe pod plex -n media

This is the result I got:

Name:           plex-kube-plex-768f9d5748-4k8qr
Namespace:      media
Priority:       0
Node:
Labels:         app=kube-plex
                pod-template-hash=768f9d5748
                release=plex
Annotations:
Status:         Pending
IP:
IPs:
Controlled By:  ReplicaSet/plex-kube-plex-768f9d5748
Init Containers:
  kube-plex-install:
    Image:      quay.io/munnerz/kube-plex:latest
    Port:
    Host Port:
    Command:
      cp
      /kube-plex
      /shared/kube-plex
    Environment:
    Mounts:
      /shared from shared (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from plex-kube-plex-token-sl5zk (ro)
Containers:
  plex:
    Image:       plexinc/pms-docker:1.16.0.1226-7eb2c8f6f
    Ports:       32400/TCP, 32400/TCP, 32443/TCP
    Host Ports:  0/TCP, 0/TCP, 0/TCP
    Liveness:    http-get http://:32400/identity delay=10s timeout=10s period=10s #success=1 #failure=3
    Readiness:   http-get http://:32400/identity delay=15s timeout=5s period=10s #success=1 #failure=3
    Environment:
      TZ:                    Europe/London
      PLEX_CLAIM:            [claim-fqnYNzJtK9sk7oGytM8L]
      PMS_INTERNAL_ADDRESS:  http://plex-kube-plex:32400
      PMS_IMAGE:             plexinc/pms-docker:1.16.0.1226-7eb2c8f6f
      KUBE_NAMESPACE:        media (v1:metadata.namespace)
      TRANSCODE_PVC:         plex-kube-plex-transcode
      DATA_PVC:              existing-pms-data-pvc
      CONFIG_PVC:            plex-kube-plex-config
    Mounts:
      /config from config (rw)
      /data from data (rw)
      /shared from shared (rw)
      /transcode from transcode (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from plex-kube-plex-token-sl5zk (ro)
Conditions:
  Type          Status
  PodScheduled  False
Volumes:
  data:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  existing-pms-data-pvc
    ReadOnly:   false
  config:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  plex-kube-plex-config
    ReadOnly:   false
  transcode:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
    SizeLimit:
  shared:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
    SizeLimit:
  plex-kube-plex-token-sl5zk:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  plex-kube-plex-token-sl5zk
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  beta.kubernetes.io/arch=amd64
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason            Age                From               Message
  ----     ------            ----               ----               -------
  Warning  FailedScheduling  53s (x9 over 11m)  default-scheduler  persistentvolumeclaim "existing-pms-data-pvc" not found

I understood from this that it's because of --set persistence.data.claimName=existing-pms-data-pvc, so I deployed another instance without it, but it is still Pending. This is its description:

Name:           plex-test-kube-plex-597cb9c9bf-vvvrc
Namespace:      media
Priority:       0
Node:
Labels:         app=kube-plex
                pod-template-hash=597cb9c9bf
                release=plex-test
Annotations:
Status:         Pending
IP:
IPs:
Controlled By:  ReplicaSet/plex-test-kube-plex-597cb9c9bf
Init Containers:
  kube-plex-install:
    Image:      quay.io/munnerz/kube-plex:latest
    Port:
    Host Port:
    Command:
      cp
      /kube-plex
      /shared/kube-plex
    Environment:
    Mounts:
      /shared from shared (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from plex-test-kube-plex-token-x88fn (ro)
Containers:
  plex:
    Image:       plexinc/pms-docker:1.16.0.1226-7eb2c8f6f
    Ports:       32400/TCP, 32400/TCP, 32443/TCP
    Host Ports:  0/TCP, 0/TCP, 0/TCP
    Liveness:    http-get http://:32400/identity delay=10s timeout=10s period=10s #success=1 #failure=3
    Readiness:   http-get http://:32400/identity delay=15s timeout=5s period=10s #success=1 #failure=3
    Environment:
      TZ:                    Europe/London
      PLEX_CLAIM:            [claim-ADeD9vmtpS3GK9XTYeCv]
      PMS_INTERNAL_ADDRESS:  http://plex-test-kube-plex:32400
      PMS_IMAGE:             plexinc/pms-docker:1.16.0.1226-7eb2c8f6f
      KUBE_NAMESPACE:        media (v1:metadata.namespace)
      TRANSCODE_PVC:         plex-test-kube-plex-transcode
      DATA_PVC:              plex-test-kube-plex-data
      CONFIG_PVC:            plex-test-kube-plex-config
    Mounts:
      /config from config (rw)
      /data from data (rw)
      /shared from shared (rw)
      /transcode from transcode (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from plex-test-kube-plex-token-x88fn (ro)
Conditions:
  Type          Status
  PodScheduled  False
Volumes:
  data:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  plex-test-kube-plex-data
    ReadOnly:   false
  config:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  plex-test-kube-plex-config
    ReadOnly:   false
  transcode:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
    SizeLimit:
  shared:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
    SizeLimit:
  plex-test-kube-plex-token-x88fn:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  plex-test-kube-plex-token-x88fn
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  beta.kubernetes.io/arch=amd64
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason            Age                 From               Message
  ----     ------            ----                ----               -------
  Warning  FailedScheduling  38s (x3 over 2m3s)  default-scheduler  running "VolumeBinding" filter plugin for pod "plex-test-kube-plex-597cb9c9bf-vvvrc": pod has unbound immediate PersistentVolumeClaims

If that helps!
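(For reference: "unbound immediate PersistentVolumeClaims" means the PVCs the chart references have no PersistentVolume they can bind to. On a single-node test cluster, a hostPath PV plus a matching PVC roughly like the sketch below is enough to satisfy one of the claims. The name, size, path and namespace here are illustrative; the PVC name must match what the pod's Volumes section references.)

apiVersion: v1
kind: PersistentVolume
metadata:
  name: plex-data-pv              # illustrative name
spec:
  capacity:
    storage: 20Gi                 # placeholder size
  accessModes:
    - ReadWriteOnce
  storageClassName: manual
  hostPath:
    path: /mnt/plex/data          # must exist on the node
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: existing-pms-data-pvc     # the claim name the pod is looking for
  namespace: media                # same namespace as the Plex pod
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: manual
  resources:
    requests:
      storage: 20Gi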

@kriefsacha

kriefsacha commented May 4, 2020

I managed to create a PV and a PVC and deployed yet another instance. Now it's stuck in ContainerCreating status. When I ran a command to get the events I got:

  Type     Reason       Age                  From               Message
  ----     ------       ----                 ----               -------
  Normal   Scheduled    43m                  default-scheduler  Successfully assigned media/plex-fourth-kube-plex-ff7759776-n6597 to ubuntu
  Warning  FailedMount  18m (x2 over 41m)    kubelet, ubuntu    Unable to attach or mount volumes: unmounted volumes=[data], unattached volumes=[plex-fourth-kube-plex-token-fj4qr data config transcode shared]: timed out waiting for the condition
  Warning  FailedMount  11m (x4 over 32m)    kubelet, ubuntu    Unable to attach or mount volumes: unmounted volumes=[data], unattached volumes=[shared plex-fourth-kube-plex-token-fj4qr data config transcode]: timed out waiting for the condition
  Warning  FailedMount  9m37s (x5 over 30m)  kubelet, ubuntu    Unable to attach or mount volumes: unmounted volumes=[data], unattached volumes=[config transcode shared plex-fourth-kube-plex-token-fj4qr data]: timed out waiting for the condition
  Warning  FailedMount  2m49s (x6 over 39m)  kubelet, ubuntu    Unable to attach or mount volumes: unmounted volumes=[data], unattached volumes=[data config transcode shared plex-fourth-kube-plex-token-fj4qr]: timed out waiting for the condition
  Warning  FailedMount  33s (x2 over 7m20s)  kubelet, ubuntu    Unable to attach or mount volumes: unmounted volumes=[data], unattached volumes=[transcode shared plex-fourth-kube-plex-token-fj4qr data config]: timed out waiting for the condition

I followed the tutorial at this link: https://kauri.io/57-selfhost-your-media-center-on-kubernetes-with-p/8ec7c8c6bf4e4cc2a2ed563243998537/a#comments

and created the PV and PVC from there. I don't understand what's going wrong.

Sorry for all the comments.

kriefsacha changed the title from "Kube-plex pod stuck in pending status" to "Kube-plex pod stuck in ContainerCreating status" on May 4, 2020
@gwyden

gwyden commented May 4, 2020

@kriefsacha if you do a kubectl get pv and a kubectl get pvc, do you have all PVCs attached to PVs?

@kriefsacha

@gwyden if I do get pv I get:

NAME        CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM             STORAGECLASS   REASON   AGE
media-ssd   200Gi        RWX            Retain           Bound    media/media-ssd   manual                  13m

If I do get pvc -n media I get:

NAME        STATUS   VOLUME      CAPACITY   ACCESS MODES   STORAGECLASS   AGE
media-ssd   Bound    media-ssd   200Gi        RWX            manual         13m

@gwyden

gwyden commented May 4, 2020

@kriefsacha so I see you have a PV called media-ssd and a PVC with the same name, and that claim is bound, but I do not see any claims for the other volumes.

Specifically, this line leads me to think the pod is asking for more PVCs, or that the PVC holding your data is not the same one the pod is actually claiming:

Warning FailedMount 33s (x2 over 7m20s) kubelet, ubuntu Unable to attach or mount volumes: unmounted volumes=[data], unattached volumes=[transcode shared plex-fourth-kube-plex-token-fj4qr data config]: timed out waiting for the condition

For example, here is a kubectl describe pod -n plex from my cluster:

Volumes:
  tmp:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
    SizeLimit:  <unset>
  data:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  plex-data-pvc
    ReadOnly:   false
  config:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  plex-kube-plex-config
    ReadOnly:   false
  transcode:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  plex-kube-plex-transcode
    ReadOnly:   false
  shared:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
    SizeLimit:  <unset>
  plex-kube-plex-token-8lbmr:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  plex-kube-plex-token-8lbmr
    Optional:    false

I think it is just a matter of your storage not being allocated properly.
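A quick way to cross-check is to compare the claim names the pod references with the PVCs that actually exist in the namespace (pod name and namespace below follow the earlier output; adapt them to yours):

# List the claims the pod references...
kubectl describe pod plex-fourth-kube-plex-ff7759776-n6597 -n media | grep ClaimName
# ...and the claims that actually exist; they must be Bound and in the same namespace
kubectl get pvc -n media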

@kriefsacha

@gwyden thanks for your answer. I didn't know that a different PVC is needed for each of the different properties. I'm going to try that tomorrow. Do we also need multiple PVs, or is one PV with multiple PVCs enough?

@kriefsacha

From what I saw, it's not possible to bind multiple PVCs to the same PV, so I also need multiple PVs.

Maybe it would be good to mention that in the docs.

Going to try that right now.

@kriefsacha

kriefsacha commented May 5, 2020

Okay @gwyden, I created three PVs and three PVCs. It worked, thanks to you!! Now the status is Running and Ready is 1/1.

But there is something weird. I created the pod with the command:

helm install kubeplex kube-plex/charts/kube-plex \
    --set claimToken=claim-7aDCLfctrxHht9oaSBo9 \
    --set persistence.data.claimName=data-pvc \
    --set persistence.config.claimName=config-pvc \
    --set persistence.transcode.claimName=transcode-pvc \
    --set ingress.enabled=true

But in the describe output I see:

Volumes:
  data:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  data-pvc
    ReadOnly:   false
  config:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  config-pvc
    ReadOnly:   false
  transcode:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
    SizeLimit:
  shared:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
    SizeLimit:
  kubeplex-kube-plex-token-g29hl:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  kubeplex-kube-plex-token-g29hl
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  beta.kubernetes.io/arch=amd64
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s

So the transcode volume ends up as an EmptyDir, which is weird...
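(One possible explanation, to be verified against your copy of the chart: if the chart gates transcode persistence behind an enable flag, such as a persistence.transcode.enabled value defaulting to false, then the claim name alone is ignored and the pod falls back to an EmptyDir. In that case a re-run along these lines might pick up the transcode PVC; the flag name is an assumption, so check charts/kube-plex/values.yaml first.)

# Assumes the chart exposes persistence.transcode.enabled; verify in values.yaml before running.
helm upgrade kubeplex kube-plex/charts/kube-plex \
    --set claimToken=<fresh claim token> \
    --set persistence.data.claimName=data-pvc \
    --set persistence.config.claimName=config-pvc \
    --set persistence.transcode.enabled=true \
    --set persistence.transcode.claimName=transcode-pvc \
    --set ingress.enabled=true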

@hectorleiva

@kriefsacha how's your Plex instance working out after adding those PVs/PVCs?

I ended up having to edit the deployment.yml so that, instead of creating three separate PV/PVCs for config, transcode, and data, everything points to the same PV/PVC. Mine has been working as expected since I made that change.

@jcsho

jcsho commented May 23, 2020

Hi @hectorleiva, could you explain whether you changed it in the volume mounts or in the env section?

@hectorleiva

@justinhodev I basically modified the deployment.yml file so that the volumeMounts names are all the same, pointing to my single volume, and there's only one volume entry.

so volumeMounts looks like this for me:

volumeMounts:
  - name: media-ssd
    mountPath: /data
    subPath: plex/data
  - name: media-ssd
    mountPath: /config
    subPath: plex/config
  - name: media-ssd
    mountPath: /transcode
    subPath: plex/transcode
  - name: shared
    mountPath: /shared

and the Volumes looks like:

volumes:
  - name: media-ssd
    persistentVolumeClaim:
      claimName: "media-ssd"
  - name: shared
    emptyDir: {}

as a single entry.

This is what is running on my Raspberry Pi Cluster and it seems to work. Not sure how sustainable it is, but that's what I got.
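For completeness, the media-ssd claim referenced above has to exist before the pod starts. A minimal sketch of that PVC is below; the storage class, size and access mode are placeholders to adapt to your cluster, and it must be created in whatever namespace kube-plex is deployed to.

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: media-ssd
spec:
  accessModes:
    - ReadWriteMany        # RWX so transcoder pods could share it too
  storageClassName: manual # placeholder; use your cluster's storage class
  resources:
    requests:
      storage: 200Gi       # placeholder size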

@11jwolfe2

11jwolfe2 commented Sep 30, 2020

Don't know if this is related or not, but I figured I'd post this here for help. I'm creating the pod successfully (status = Running, Ready = 1/1, log shows "Starting Plex Media Server"), but when I go to my LoadBalancer IP:port, all I see is XML, no server setup page.

Solved: I did not realize claim codes are only valid for 4 minutes. I had other issues creating my pod; once I solved those, I had to update the claim code.
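For anyone else hitting the XML-only page: claim tokens from https://www.plex.tv/claim expire after a few minutes, so grab a fresh one right before installing or upgrading. Roughly like this (release name, chart path and namespace follow the commands earlier in this thread; adjust to your setup):

# Get a fresh token from https://www.plex.tv/claim (valid for only ~4 minutes),
# then reinstall/upgrade with it straight away.
helm upgrade --install plex ./charts/kube-plex \
    --namespace plex \
    --set claimToken=<fresh claim token> \
    --set ingress.enabled=true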

@vitobotta

(quoting @hectorleiva's earlier comment about pointing all the volumeMounts at a single media-ssd volume)

Hi, just wanted to thank you... I spent hours trying to get my deployment to work with an NFS volume, and only with your changes was I successful :)

@vitobotta

@hectorleiva I celebrated too soon :D Plex now starts and I can see the media in my library, but the transcoder pods get stuck for the same reason. What should I change to make that work? Thanks!

@hectorleiva

@vitobotta I could never get the transcoder to work in my set-up because I was warned that the Raspberry Pis weren't powerful enough to do the transcoding work that would be required for Plex.

I don't think I've gotten that to work even on my Synology NAS DS720+ using the Plex app there either. This might be something you can avoid altogether if you are able to direct-play the videos. I've been manually transcoding videos from whatever format to .mkv using HandBrake or ffmpeg and moving them over to Plex; that's been my workflow for now.
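If you go the pre-transcoding route, a typical ffmpeg invocation for re-encoding into an H.264/AAC MKV that Plex can usually direct-play looks roughly like this (codec and quality settings are just a starting point, not a recommendation from this thread):

# Re-encode video to H.264 and audio to AAC, wrapped in an MKV container.
ffmpeg -i input.avi -c:v libx264 -crf 20 -preset medium -c:a aac -b:a 192k output.mkv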
