Memory usage #310
Weird, I can't find any change that might cause increased memory usage between 3.1.3 and 4.0. I did update
Any leads on this?
Nope, memory usage is something we have been trying to tackle over in elastic/kibana but haven't had luck reproducing or fixing. It seems the only real progress was made when Node released updates and ambient memory usage just dropped. I'm not sure how to approach this, and without solid evidence of an issue it's hard to dedicate time.
I think that I am experiencing the same issue. We are running a relatively thin proxy layer between our mobile clients and Elasticsearch. Each node of that layer is receiving around 300 requests per minute. Memory consumption went up about four-fold with just this little change. To confirm that elasticsearch-js is responsible, I forked the repo and fixed the

Below are graphs of

@spalger: I hope you'll consider this to be solid evidence. FTR, we use only three methods of this package:
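The list of methods did not survive extraction. As a purely hypothetical illustration of the kind of thin proxy described, something along these lines could sit between the mobile clients and Elasticsearch; the framework, route, index name, and the particular client method are assumptions, not the poster's actual code:

```js
// Hypothetical sketch only -- the real proxy code and the three client methods
// used were not preserved in this thread.
var express = require('express');
var elasticsearch = require('elasticsearch');

var app = express();
var client = new elasticsearch.Client({ host: 'http://localhost:9200' });

// Forward a mobile client's query string straight to Elasticsearch.
app.get('/search', function (req, res) {
  client.search({ index: 'app', q: req.query.q }, function (err, body) {
    if (err) return res.status(502).json({ error: err.message });
    res.json(body);
  });
});

app.listen(3000);
```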
Also, @spalger: would you take a PR that patches 3.1.3 to have the
Wow, that looks like pretty solid evidence @dylancwood ... I would love to try bisecting to identify where this memory usage spike came from, and I'll release a patch to 3.1 in the meantime. Thank you for the report
👍
@dylancwood Thanks for going the extra mile on this! Nice :)
@spalger Is this leaky behaviour still being investigated in the newer versions beyond 3.1.4? We are on node v4.4.5, and with any elasticsearch.js client above 4.x the memory usage is far higher, leaking and behaving very differently, as observed by @dylancwood. See the two charts below.

V3.1.4 client:

V11.0.1 client:

Both are with the following client configuration (hosts commented out):
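The configuration block itself did not survive extraction. As a representative sketch of a legacy elasticsearch-js client configuration of roughly this shape, with placeholder values rather than the poster's actual settings:

```js
var elasticsearch = require('elasticsearch');

// Placeholder values only -- the poster's real hosts and tuning numbers were lost.
var client = new elasticsearch.Client({
  // hosts: ['http://es-node-1:9200', 'http://es-node-2:9200'],
  log: 'error',
  requestTimeout: 30000,
  keepAlive: true,
  sniffOnStart: false
});
```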
I have yet to organise our application memory stats as neatly as the other posters have done above, but I do believe we are experiencing these exact same issues in our pre-production environment (Node 5.11.1). We are currently writing only several entries per second to the ES index, but memory usage keeps accumulating steadily over time until we run out. Based on the information provided by the other posters, I switched the ES client package from v11.0.1 to v3.1.4 yesterday. Using the same configuration provided below, I can confirm that the v3.1.4 ES client seems to maintain a steady memory usage of around 35-40MB, while the v11.0.1 ES client was running around 140MB. This is after running for approx. 19hrs.
Below is the ES configuration we are using (reading from a Redis channel and writing to the ES index). Note that the v3.1.4 ES client does not support API version 2.3, which is the version of our ES cluster, so I commented it out.
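The configuration/code block was lost from this comment; below is a hedged sketch of the pipeline as described, with the channel name, index name, and document shape as assumptions:

```js
var redis = require('redis');
var elasticsearch = require('elasticsearch');

var client = new elasticsearch.Client({
  host: 'http://localhost:9200',
  // apiVersion: '2.3', // not supported by the 3.1.4 client, so commented out
  log: 'error'
});

// Subscribe to a Redis channel and index each message into Elasticsearch.
// 'events' is a placeholder channel/index name.
var subscriber = redis.createClient();
subscriber.subscribe('events');

subscriber.on('message', function (channel, message) {
  client.index({
    index: 'events',
    type: 'event',
    body: JSON.parse(message)
  }, function (err) {
    if (err) console.error('failed to index message', err);
  });
});
```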
As things stand we are not able to use the v11.0.1 ES client in production. This means we need to use the old v3.1.4, which means the ES cluster (2.3) and ES client will have mismatched API capabilities. Regards,
Thanks for the updated info, folks, keep it coming.
More to think about here. I know the scripts below are not elastic-client specific, but they are concise enough to show the behaviour of bluebird vs native promises when promises are returned as part of a loop. This script will not leak (bluebird 3.1.1, node 4.4.7):
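The script itself did not survive extraction; assuming the pattern was a recursive chain of promises returned in a loop, a reconstruction might look like:

```js
// Tight recursive promise chain using bluebird 3.x -- a reconstruction,
// not the poster's exact script. The poster reports flat RSS with this variant.
var Promise = require('bluebird');

function loop() {
  return Promise.resolve().then(loop);
}

loop();

// Log resident set size periodically to observe the behaviour.
setInterval(function () {
  console.log('rss MB:', Math.round(process.memoryUsage().rss / 1048576));
}, 5000);
```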
This one will (using the native Promise library):
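Again the original script was lost; the native-Promise counterpart of the sketch above would be the same chain without the bluebird require:

```js
// Same recursive chain using node's built-in Promise -- a reconstruction.
// The poster reports RSS climbing steadily with this variant on node 4.x.
function loop() {
  return Promise.resolve().then(loop);
}

loop();

setInterval(function () {
  console.log('rss MB:', Math.round(process.memoryUsage().rss / 1048576));
}, 5000);
```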
Can someone validate? Maybe this is something to discuss further in the bluebird/node.js community?
I have just realised that bluebird was not removed from the elastic client until version 10, while the RSS leak I see is in v4 and above. So this issue could be something totally different (e.g. in the lodash library, or even in node.js itself).

However, RSS leaks in loops of promises (or promises called rapidly, as in my use case) do appear to be a known issue with the native Promise library in the V8 engine. Unfortunately it does not look like it can be fixed, as it's a spec issue (https://github.com/nodejs/node/issues/6673). The general consensus from the internet is to use bluebird on the server side, which is far more performant for such a purpose, and native promises on the client side (for browser compatibility).

I haven't tried switching the elastic client back to the bluebird promise library in the most recent versions yet (that's my next task), but like I said this could all be a red herring and nothing to do with bluebird/promises at all.

However, there are workarounds to the RSS leak that I should note here. These are a combination of the following settings (try one, then the other, then both):
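The specific settings were lost from this comment. Based on the description of forcing collections, they were presumably along the lines of exposing the garbage collector and capping old space; this is sketched here as an assumption, not a quote:

```js
// Assumed workaround, not taken from the original comment:
// start node with the GC exposed and old space capped, e.g.
//   node --expose-gc --max-old-space-size=256 app.js
// then trigger a full collection periodically from the application.
setInterval(function () {
  if (global.gc) {
    global.gc(); // forces a full collection and briefly pauses the event loop
  }
}, 30000);
```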
The issue with doing this is that it forces a garbage collection outside of what the V8 engine wants to do, and this will momentarily pause your application, so test thoroughly.
As a note, I've experienced the same problem using v11 in AWS Lambda with node 4.3. I've downgraded the client to 3.1.4 and the issue seems to be fixed.
It's been some time since I posted to this thread, but I would like to add to my previous comment that we have since moved from Node 5.11.1 to Node 4.3 and our environment is now in production using the 3.1.4 client. The issue persists using the v11 client.
Unfortunately this comes down to node.js and lazy garbage collection of old space rather than the elasticsearch client behaviour. Have you used the node parameter? I have this set in production in our application (node 4.3) and have had no memory issues with the most recent elasticsearch client (v12) since.
I don't think this is the case. I just replaced the ES client without any node configuration or version change and the problem disappeared.
Yes, I agree that there is an issue, given that the problem goes away when using an old version of the client. I did spend a lot of time drilling down into the various libraries that were updated since the v3 elastic client; however, I found myself getting ever deeper with no solution or cause for the memory profile change in sight.
@andrewstoker Well-used node configuration commands might well provide relief in combination with the latest client, but according to what you said earlier it depends on the use case. I have not yet checked what the impact of your suggestions is on my particular use case; it might be a solution of sorts. Having said that, for clients beyond v3.1.4 up to and including v11 to have such a negative memory side effect in a standard Node config, with a very simple and (currently) low-intensity production setup like my own, does not seem quite right. Anyway, thanks for your research and suggestions.
Hello! We have released the new JavaScript client! 🎉
I am noticing a big increase in memory usage between release 3.1.3 and releases 4.0.0 and above.
Our app is running on node 0.12.6 and we create our elasticsearch client with the following properties:
When running on 3.1.3 we converge around 350-400MB. When upgrading to 4.0.0 or any later release, the memory just seems to grow until hitting the V8 limit (1.4GB).