
Cannot set memory limits lower than 450MB #3

Open
Cryptophobia opened this issue Mar 13, 2018 · 2 comments

Comments

@Cryptophobia
Member

From @ineu on May 13, 2017 11:27

I have a pod consuming 115MB of RAM. I tried to set limits of 128MB, 256MB, etc., but the lowest one that worked was 450MB. It looks like the runner itself requires this amount of RAM, so the pod gets killed by the OOM killer before the application starts. I see the following in dmesg:

[1655181.702032] [ pid ]   uid  tgid total_vm      rss nr_ptes nr_pmds swapents oom_score_adj name
[1655181.702176] [23241]  2000 23241     4497       69      14       3        0           984 bash
[1655181.702177] [23254]  2000 23254     4497       62      15       3        0           984 bash
[1655181.702179] [23255]  2000 23255   140260   102204     211       5        0           984 objstorage
[1655181.702185] Memory cgroup out of memory: Kill process 23255 (objstorage) score 1980 or sacrifice child
[1655181.702513] Killed process 23255 (objstorage) total-vm:561040kB, anon-rss:408816kB, file-rss:0kB

Not sure what objstorage is, but it is pretty greedy.

UPD: Docker-based pods are fine. I just set a limit of 16MB for one of them and it works well.

Copied from original issue: deis/slugrunner#64
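
For context on the numbers above: the rss column in the oom-killer table is counted in pages, while the "Killed process" line reports kB, and the two agree. A quick check (the 4 KiB page size is an assumption, but it is the usual value on x86_64):

```python
# Sanity check of the dmesg output above: the oom-killer table reports
# rss in pages, the "Killed process" line reports kB.
rss_pages = 102204            # objstorage row in the table
page_size_kib = 4             # assumed page size
print(rss_pages * page_size_kib)         # 408816 kB, matching anon-rss:408816kB
print(rss_pages * page_size_kib / 1024)  # ~399 MiB resident at the moment of the kill
```

So roughly 400 MiB was already resident when the kill happened, which lines up with 450MB being the lowest limit that worked.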

@Cryptophobia
Member Author

From @Bregor on July 11, 2017 15:23

It's not the fault of slugrunner itself, but of tar, which unpacks the slug before the app runs.
On start, slugrunner downloads the "slug" (you can see it at the end of the slugbuilder log, something like -----> Compiled slug size is 241M), unpacks it to /app, changes into that directory, and runs the corresponding command from the Procfile or buildpack.

So the whole process looks like this:
slugrunner starts, tar eats all the memory, the OOM killer kicks in, slugrunner restarts, tar eats all the memory again, …
you get the idea
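
To make the failure mode concrete, here is a rough sketch of the start sequence described above. This is not slugrunner's actual code (the real runner is a shell script, as the bash entries in the dmesg above suggest); SLUG_URL and the Procfile lookup are placeholders for illustration:

```python
# Rough sketch of the sequence described above, not slugrunner's actual code.
import os
import subprocess

slug_url = os.environ["SLUG_URL"]          # assumed variable name
subprocess.run(["curl", "-sSfo", "/tmp/slug.tgz", slug_url], check=True)

# The extraction runs inside the same memory cgroup as the app itself,
# so tar's memory use is charged against the pod's limit before the
# Procfile command even starts.
os.makedirs("/app", exist_ok=True)
subprocess.run(["tar", "-xzf", "/tmp/slug.tgz", "-C", "/app"], check=True)

os.chdir("/app")
command = read_procfile_entry("/app/Procfile", "web")   # hypothetical helper
os.execvp("bash", ["bash", "-c", command])              # replaces this process
```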

@Cryptophobia
Member Author

From @Bregor on July 11, 2017 15:27

We should find a way to limit tar's memory usage to about 2/3 of .resourceFieldRef.resource.limits.memory
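
A minimal sketch of what that could look like, assuming the limit is exposed to the container through the Downward API (resourceFieldRef: limits.memory) as an environment variable, here called MEMORY_LIMIT_BYTES. Capping the address space with RLIMIT_AS is only an approximation of "memory usage", but it confines the constraint to tar alone:

```python
# Sketch only: cap tar's memory at 2/3 of the container's memory limit.
# MEMORY_LIMIT_BYTES is an assumed env var populated via the Downward API
# (resourceFieldRef: limits.memory).
import os
import resource
import subprocess

limit_bytes = int(os.environ["MEMORY_LIMIT_BYTES"])
tar_cap = limit_bytes * 2 // 3

def cap_memory():
    # Runs in the child just before exec, so only tar is affected.
    resource.setrlimit(resource.RLIMIT_AS, (tar_cap, tar_cap))

subprocess.run(
    ["tar", "-xzf", "/tmp/slug.tgz", "-C", "/app"],
    preexec_fn=cap_memory,
    check=True,
)
```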
