# Getting caught exception "stoi" when trying to deploy everest-demo to Kubernetes (#54)
## Comments
Update after asking this question on the LF Energy Zulip server: the cause has been found. Apparently, [...] It's still unclear how that [...]. @hikinggrass FYI
---

Manually adding the following snippet to [...]:

```yaml
- name: MQTT_SERVER_PORT
  value: "1883"
```
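For context (my inference from the snippet above, not stated explicitly in the thread): Kubernetes automatically injects Docker-link-style environment variables for every Service in a pod's namespace, so a Service named `mqtt-server` puts `MQTT_SERVER_PORT=tcp://<cluster-ip>:1883` into the `manager` pod, and a numeric parse such as `std::stoi` cannot handle that value. A sketch of where the override would go in the generated `manager-deployment.yaml` (container name assumed):

```yaml
# Hypothetical excerpt of manager-deployment.yaml: the explicit env entry
# shadows the tcp://... value that Kubernetes injects for the mqtt-server
# Service, so manager sees a plain numeric port again.
spec:
  template:
    spec:
      containers:
        - name: manager
          env:
            - name: MQTT_SERVER_PORT
              value: "1883"
```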
---

Added a warning for this case: [...]
---

@volkert-fastned all the docker-compose files here do set the [...]:

- everest-demo/docker-compose.ocpp201.yml, line 21 in 6ac3228
- everest-demo/docker-compose.yml, line 19 in 6ac3228

The server runs at the default port, so we did not have to change the port. Do you recall why you set the [...]?
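The referenced compose lines aren't reproduced above; for illustration only (not verified against commit 6ac3228), the entries presumably look something like this:

```yaml
# Hypothetical excerpt of docker-compose.yml: manager reaches the broker
# by service hostname; Mosquitto listens on the default port 1883, so no
# explicit port setting is needed here.
services:
  manager:
    environment:
      MQTT_SERVER_ADDRESS: mqtt-server
```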
## Original issue

Hi! 👋

I've tried deploying the `everest-demo` container images to a Kubernetes cluster (specifically Amazon EKS), and although `mqtt-server` and `node-red` deployed successfully, `manager` kept crashing with the following error: [...]

I wrote the following script to convert the Docker Compose file to a Helm chart for deployment to Kubernetes: [...]
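The script itself isn't reproduced above; as a rough illustration only (not the author's actual script), a conversion along these lines can be built on the kompose tool:

```sh
#!/bin/sh
# Hypothetical sketch: convert the compose file into Kubernetes manifests
# with kompose, then wrap them in a minimal Helm chart directory.
set -e

CHART=everest-demo
mkdir -p "$CHART/templates"

# Generates one Deployment/Service manifest per compose service.
kompose convert -f docker-compose.yml -o "$CHART/templates/"

# Minimal chart metadata so helm accepts the directory as a chart.
cat > "$CHART/Chart.yaml" <<EOF
apiVersion: v2
name: everest-demo
version: 0.1.0
EOF

echo "helm upgrade --install everest ./$CHART -n everest --create-namespace"
```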
I ran this script, and then I ran the command that the script printed out at the end. The deployment initially appeared to be successful, until I noticed the pod belonging to the `manager` deployment constantly restarting, failing, and eventually going into `CrashLoopBackOff`.

A `kubectl -n everest logs [pod name]` yielded the "stoi" error that I mentioned further above.

When I changed `spec.template.spec.containers[0].command` in the generated `manager-deployment.yaml` file as below and then ran the same `helm upgrade` command again, I managed to get the pod to start successfully, so I could log into it and try some troubleshooting:
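The exact edit isn't shown above; a typical override of this kind (illustrative only, container name assumed) replaces the entrypoint with something that keeps the pod alive:

```yaml
# Hypothetical excerpt of manager-deployment.yaml: override the container
# command so the pod idles instead of starting the crashing manager.
spec:
  template:
    spec:
      containers:
        - name: manager
          command: ["sleep", "infinity"]
```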
When I ran the `helm upgrade` command again to apply this, the `manager` pod started successfully, and then I could log into it as follows (note that you need to replace the `[manager-pod-name]` part, because that changes every time the deployment is updated and a new pod is spun up):

```sh
kubectl -n everest exec pod/[manager-pod-name] -it -- /bin/sh
```
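Since the pod name changes on every rollout, it can also be looked up with a label selector (the `io.kompose.service` label is what kompose stamps on generated manifests; adjust if your chart differs):

```sh
# Fetch the current manager pod name, then open a shell in it.
POD=$(kubectl -n everest get pods -l io.kompose.service=manager \
  -o jsonpath='{.items[0].metadata.name}')
kubectl -n everest exec -it "pod/$POD" -- /bin/sh
```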
In this console, it was easy to recreate the error: [...]
I noticed that I could run `manager` with the `--help` option just fine (I looked in `run-sil-dc.sh` to see how I could run it): [...]

But whenever I would try any of the configurations in `/ext/source/config`, I would get that weird `stoi` (string-to-integer conversion?) exception, regardless of whether or not I included the `--check` option: [...]

I took a look at the manager.cpp source code, but it wasn't very clear where exactly the exception was being thrown, because no hints were given other than the exception message `stoi`. It appears to be happening somewhere in the `int boot(...)` function, before the splash banner is printed with `EVLOG_info`.
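The terse message is characteristic of `std::stoi` itself: libstdc++ throws `std::invalid_argument` whose `what()` is literally `"stoi"` when the string has no leading digits. A minimal standalone demonstration (the `tcp://...` fallback value is my assumption about what Kubernetes injects, not taken from the logs):

```cpp
#include <cstdlib>
#include <iostream>
#include <stdexcept>
#include <string>

int main() {
    // Simulate the Kubernetes-injected service variable; a plain "1883"
    // would parse fine, but the tcp:// form has no leading digits.
    const char* raw = std::getenv("MQTT_SERVER_PORT");
    std::string port = raw ? raw : "tcp://10.0.0.11:1883";

    try {
        int p = std::stoi(port);
        std::cout << "parsed port: " << p << "\n";
    } catch (const std::exception& e) {
        // With libstdc++ this prints just "stoi", matching the issue.
        std::cout << "Caught exception: " << e.what() << "\n";
    }
    return 0;
}
```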
Strangely enough, when I run the same Docker container image locally, I can't reproduce this issue:
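The local run command isn't shown above; presumably something along these lines (image reference and tag are my assumption, not from the issue):

```sh
# Run the manager image locally under x86 emulation on Apple Silicon.
docker run --rm -it --platform linux/amd64 \
  ghcr.io/everest/everest-demo/manager:latest
```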
The result in that case, on an Apple Silicon MacBook, running the Docker container in x86 emulation mode: [...]

It's also an error, but at least a different one. I'm a bit at a loss now.

Could you maybe help me get this deployed to Kubernetes? (So far I've only tried AWS EKS, but I guess I could try this in a local minikube cluster or something too. Let me know if that would help.)
I also noticed that the `everest-demo/manager` container images are not yet multi-platform, but the test cluster in which I tried to deploy them has nodes running on the x86_64 architecture, so that shouldn't be the problem.

Thank you kindly in advance for helping me get this deployed to our test cluster for evaluation! 🙏