From @ineu on May 13, 2017 11:27

I have a pod consuming 115MB of RAM. I tried to set limits of 128MB, 256MB, etc., but the lowest one that worked was 450MB. It looks like the runner itself requires this amount of RAM, so the pod gets killed by the OOM killer before the application starts. I see the following dmesg:

Not sure what the objstorage is, but it is pretty greedy.

UPD: Docker-based pods are fine. I just set a limit of 16MB for one of them, and it works well.

Copied from original issue: deis/slugrunner#64
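For reference, a memory limit like the ones tried above lives on the pod's container spec; here is a minimal sketch using plain kubectl, where the deployment name `myapp` is a hypothetical stand-in:

```
# Hypothetical deployment name "myapp"; the value mirrors the limits tried above.
kubectl set resources deployment myapp --limits=memory=256Mi

# After the pod restarts, the kill reason is recorded on the container status:
kubectl get pod <pod-name> \
  -o jsonpath='{.status.containerStatuses[0].lastState.terminated.reason}'
# Prints "OOMKilled" when the limit was hit.
```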
It's not the fault of slugrunner itself, but of tar unpacking the slug before the app runs.

On start, slugrunner downloads the "slug" (its size is reported at the end of the slugbuilder log, something like `-----> Compiled slug size is 241M`), unpacks it to /app, changes into that directory, and runs the corresponding command from the Procfile or buildpack.
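As a rough illustration of that startup flow (not slugrunner's actual script; `SLUG_URL` and the single `web` process type are assumptions for the sketch):

```
#!/usr/bin/env bash
# Simplified sketch of the flow described above, under the assumptions
# stated in the text: download the slug, unpack it, run the Procfile command.
set -eo pipefail

mkdir -p /app
curl -fsSL "$SLUG_URL" | tar -xzC /app   # the unpack step that hits the memory limit
cd /app

# Run the command for the "web" process type from the Procfile.
exec bash -c "$(grep '^web:' Procfile | cut -d: -f2-)"
```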
So the whole process looks like this: slugrunner starts, tar eats all the memory, the OOM killer kills the pod, slugrunner restarts, tar eats all the memory again, and so on. You get the idea.
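One way to confirm this crash loop from the outside, using standard kubectl and kernel logs rather than anything Deis-specific:

```
# Watch the pod cycle through restarts (it will land in CrashLoopBackOff):
kubectl get pods -w

# On the node, the kernel logs each kill:
dmesg | grep -i 'killed process'
```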