Jambonz -- SIP POD is always in CrashLoopBackOff state #13
I think the issue is that the drachtio image in the sbc-sip daemonset needs to dynamically determine the IP address it should listen on. Most often there is a private IP that the drachtio server binds to, as well as a public address assigned over the top. On cloud providers like AWS, there is typically a metadata HTTP request that can be made to determine the IPs. Since you are running on bare metal, we cannot use those requests to automagically determine the IPs. Instead, could you define these environment variables for the drachtio container in the sbc-sip daemonset: LOCAL_IP and PUBLIC_IP?
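On cloud VMs that discovery is usually an HTTP call to the metadata service; on bare metal a common fallback is to ask the kernel which source address it would use for an outbound packet. A minimal sketch of that fallback (illustrative only, not the literal logic of entrypoint.sh):

```python
import socket

def get_local_ip() -> str:
    """Best-effort local IP discovery (a sketch, not what entrypoint.sh literally does)."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        # connect() on a UDP socket sends no packets; it just asks the
        # kernel which source address would be used to reach this target.
        s.connect(("8.8.8.8", 80))
        return s.getsockname()[0]
    except OSError:
        return "127.0.0.1"  # no usable route; fall back to loopback
    finally:
        s.close()
```

Note that this only yields the private interface IP; a NATed public address cannot be discovered this way, which is why PUBLIC_IP has to be supplied by hand.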
If you look at the entrypoint.sh file in the drachtio container, I think you will see what I am getting at.
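For example, the drachtio container spec in the sbc-sip daemonset could carry entries along these lines (the addresses below are placeholders; substitute your machine's actual IPs):

```yaml
# Illustrative env entries for the drachtio container in the sbc-sip
# DaemonSet; replace both addresses with your own.
env:
  - name: LOCAL_IP
    value: "10.0.0.5"        # private IP the drachtio server binds to
  - name: PUBLIC_IP
    value: "203.0.113.10"    # public address assigned "over the top"
```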
So ideally I will be using the Downward API for LOCAL_IP, and for PUBLIC_IP I'll have to assign it manually. Please correct me if I'm wrong here. I apologize for the bother, but this is our project to deploy and we don't have any documentation other than asking here.
I think I can use both LOCAL_IP and PUBLIC_IP from the Kubernetes Downward API; here is the link, in case you can suggest a better approach: https://kubernetes.io/docs/concepts/workloads/pods/downward-api/
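A minimal sketch of the Downward API approach: the Downward API only exposes cluster-visible addresses such as `status.hostIP` and `status.podIP`, so it can cover LOCAL_IP, but a NATed public address would still need to be set manually (the value below is a placeholder):

```yaml
env:
  - name: LOCAL_IP
    valueFrom:
      fieldRef:
        fieldPath: status.hostIP   # IP of the node the pod is scheduled on
  - name: PUBLIC_IP
    value: "203.0.113.10"          # placeholder: no Downward API field exposes a NATed public IP
```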
You should join my Slack channel -- just go to https://joinslack.jambonz.org
Joining in.
Hi, I have successfully deployed Jambonz on an on-prem K3s cluster. It has to be a brand-new cluster, with no pre-existing Ingress controller installed, for things to work.
ef.com is my internal domain, so I can reach it internally.
Now, the main problems that I'm seeing:
Helm commands that I used:
Added the Helm repo
Helm install
1 -- SIP pod is always in CrashLoopBackOff state.
2 -- logs for the pod and its subsequent containers are given below:
sbc-options-handler
SMPP
Pod describe
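For reference, output like the above is typically gathered with commands along these lines (the namespace and pod names are illustrative; substitute your own):

```shell
# Illustrative commands for collecting the diagnostics above
kubectl get pods -n <namespace>
kubectl logs -n <namespace> <sbc-sip-pod> -c drachtio
kubectl logs -n <namespace> <sbc-sip-pod> -c smpp
kubectl describe pod -n <namespace> <sbc-sip-pod>
```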
Any clue or comment will be much appreciated.
Also thanks for your YouTube video. It worked well for deploying the cluster.