Based on my understanding of how Elasticsearch uses off-heap memory, does this mean we have to run the Elasticsearch pods on dedicated Kubernetes nodes? Since the ES pods would just keep grabbing as much memory as they can, they could cause memory and disk pressure on the node, leading the kubelet to evict other pods running on the same node.
For example, say we have a node with 64 GB of memory, we set the resources request and limit for our Elasticsearch pods to 8 GB, and ES_HEAP_SIZE to 3 GB. Would Lucene use up all of the remaining ~60 GB on the node, or would it only use the remaining 5 GB allowed by the cgroup limit?
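For reference, this is roughly the kind of spec I have in mind (the pod name, image, and values here are just placeholders for illustration, not our actual config):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: es-data-example          # placeholder name
spec:
  containers:
  - name: elasticsearch
    image: elasticsearch:5.6     # placeholder image/version
    env:
    - name: ES_HEAP_SIZE         # JVM heap only; Lucene's off-heap usage comes on top of this
      value: "3g"
    resources:
      requests:
        memory: 8Gi              # request == limit, so the pod gets Guaranteed QoS
      limits:
        memory: 8Gi              # cgroup memory limit for the whole container
```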
Thanks!
IIRC Java heap limits should be enough. If you don't trust those, you can define pod resource limits and Kubernetes will kill the pod if it goes above the limits.
As far as I understand, Lucene will use up as much memory as it can get from the operating system, which is referred to as off-heap (native) memory:
https://www.elastic.co/guide/en/elasticsearch/guide/current/heap-sizing.html
https://discuss.elastic.co/t/understanding-off-heap-usage/97176
https://stackoverflow.com/a/35232221
Based on this, my question is: do we have to run the Elasticsearch pods on dedicated Kubernetes nodes? Since the ES pods would just keep grabbing as much memory as they can, they could cause memory and disk pressure on the node, leading the kubelet to evict other pods running on the same node.
For example, if we have a node with 64 GB of memory and we set the resources request and limit for our Elasticsearch pods to 8 GB and ES_HEAP_SIZE to 3 GB, would Lucene use up all of the remaining ~60 GB on the node, or only the remaining 5 GB allowed by the cgroup limit?
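And to clarify what I mean by dedicated nodes: labelling/tainting a set of nodes for Elasticsearch only and pinning the ES pods to them, roughly like the sketch below (the label and taint names are made up, just to illustrate the idea):

```yaml
# Assumes the nodes were prepared beforehand with something like:
#   kubectl label nodes <node> dedicated=elasticsearch
#   kubectl taint nodes <node> dedicated=elasticsearch:NoSchedule
spec:
  nodeSelector:
    dedicated: elasticsearch     # only schedule onto the labelled nodes
  tolerations:
  - key: "dedicated"
    operator: "Equal"
    value: "elasticsearch"
    effect: "NoSchedule"         # tolerate the taint that keeps other pods off these nodes
  containers:
  - name: elasticsearch
    image: elasticsearch:5.6     # placeholder image/version
```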
Thanks!