Automatically release memory #542
Can you explain more? xmrig-proxy doesn't use much memory to begin with.
Memory usage increases as time goes by. What I mean is: is there a setting that automatically releases memory once usage reaches a certain threshold?
How much does it increase over time? Is it constantly "leaking"? Then this is a bug, it shouldn't increase.
Continuously increasing!
I'll look into it next week then. It shouldn't constantly increase. Which xmrig-proxy version do you use? One of the release binaries, or do you compile it yourself?
I did a quick test with https://github.com/google/sanitizers/wiki/AddressSanitizerLeakSanitizer and didn't find memory leaks. Do you use release binaries or do you compile xmrig-proxy yourself?
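For reference, a leak check like that can be reproduced by building with AddressSanitizer/LeakSanitizer enabled. A minimal sketch, assuming a standard CMake build (the exact flags used for the test above may have differed):

```sh
# Hypothetical build of xmrig-proxy with AddressSanitizer/LeakSanitizer (gcc or clang).
git clone https://github.com/xmrig/xmrig-proxy.git
cd xmrig-proxy && mkdir build && cd build
cmake .. -DCMAKE_BUILD_TYPE=Debug \
         -DCMAKE_C_FLAGS="-fsanitize=address" \
         -DCMAKE_CXX_FLAGS="-fsanitize=address"
make -j"$(nproc)"
# Run as usual; any leaks are reported when the process exits.
ASAN_OPTIONS=detect_leaks=1 ./xmrig-proxy
```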
I have tested both the release binary and a self-compiled build; in both cases memory usage gradually increases the longer the proxy runs. The version is the latest, and I have been using a third-party memory tool to trim it. My suggestion is to add a memory-trimming function: set a memory-usage threshold, and when usage exceeds it, the proxy automatically releases memory.
"Memory organizing function" is not how C++ programs work. If memory usage is growing, it's a memory leak and it's a bug. Your OS can already automatically reduce memory used by programs (it's called swapping), no support from xmrig-proxy is needed. |
I am also facing a similar issue with xmrig-proxy. I am using the latest binary on a server with 128 GB RAM, but xmrig-proxy gets killed after 2 to 3 days, with the following errors in dmesg -T
But this other high-miner-count proxy keeps eating up RAM and gets killed every 2 days. I have implemented a shell function to restart it as soon as it is killed, and a cron job that drops the cache when memory usage reaches 70% (sketched below). But I want to get to the bottom of this. I also tried adding 3 servers in parallel to load-balance the high miner count, but those 3 servers also run xmrig-proxy out of memory every few days. Can you please help with this?
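As an aside, the cache-dropping workaround described above might look roughly like the following sketch (the 70% threshold comes from the comment; everything else is assumed). Note that dropping the page cache only frees kernel caches; it cannot reclaim memory held, or leaked, by a process:

```sh
#!/bin/sh
# Hypothetical cron-driven watchdog: drop the page cache when memory usage exceeds 70%.
USED=$(free | awk '/^Mem:/ { printf "%d", $3 / $2 * 100 }')
if [ "$USED" -gt 70 ]; then
    sync
    echo 3 > /proc/sys/vm/drop_caches
fi
```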
Did you try the values from this article: https://dzone.com/articles/tcp-out-of-memory-consider-tuning-tcp-mem ?
Also, can you check whether the crashing xmrig-proxy leaks open TCP sockets? You can use lsof (see the sketch below).
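For illustration, the kind of sysctl tuning and socket check being discussed might look like this (the tcp_mem values are placeholders in the spirit of that article, not verified recommendations):

```sh
# Hypothetical /etc/sysctl.d/99-tcp.conf entries (tcp_mem is in pages):
#   net.ipv4.tcp_mem = 786432 1048576 1572864
#   net.ipv4.tcp_max_orphans = 65536

# Count TCP sockets held by xmrig-proxy; ss is usually faster than lsof on busy servers:
ss -tnp 2>/dev/null | grep -c xmrig-proxy
# lsof with -n -P skips host/port name resolution, a common reason lsof appears to hang:
lsof -nP -p "$(pidof xmrig-proxy)" | wc -l
```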
Thanks for the prompt response @SChernykh. I have already added those lines to my sysctl conf. The lsof command just hangs on the server, probably due to the high number of open files? :)
In the dmesg -T output, I see a lot of "too many orphaned sockets" messages after xmrig-proxy crashes.
Just to add: I already have workers set to false, but I have custom-diff set to 1000 and
Could this have some impact, given the high miner count?
Difficulty 1000 is too low: you'll get too many submitted shares and too high a network load. XMRig donation servers set difficulty to 1,000,000 for a reason.
Thanks. What would be the optimal setting for custom-diff if I have a huge number of miners with different CPU hardware?
It doesn't matter if a single miner has a low hashrate and can't submit a share every 30 seconds. What matters is the overall load on the proxy, so you can just set difficulty = your total hashrate and get 1 incoming share per second on average.
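As a concrete illustration of that rule of thumb: for a fleet with, say, ~50 kH/s total hashrate, the relevant fragment of the proxy's config.json could look like this (the 50000 value is an assumed example, using the custom-diff and workers options mentioned above):

```json
{
    "custom-diff": 50000,
    "workers": false
}
```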
Thanks for the insight. I have changed it to
I'll monitor for crash logs again. The proxy hardware is top notch: an Intel(R) Xeon(R) CPU E5-2620 v3, with sysctl tuned along with other kernel tunings.
Sorry for being a pain @SChernykh. One last concern: my only reason for lowering custom-diff to 1000 was that my miners frequently go offline and come back online. What if they are assigned a high-difficulty job and go offline, shut down, or time out while still calculating the share? With a lot of such miners, shouldn't we lower the diff to the lowest value so that we get the maximum work out of miners while they are online?
No, this is not how mining works. Otherwise no one would be able to mine a block, because no one can submit a 300G-difficulty share within the few hours they're online. Finding a share is a random, memoryless process, and the law of large numbers applies here: many small miners = 1 big miner with the same total hashrate, for all practical purposes.
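To spell out the memoryless point (a sketch of the arithmetic, with assumed numbers):

```latex
% Each hash is an independent trial with success probability p = 1/D,
% so at total hashrate H the share rate is \lambda = H/D
% (e.g. H = 10^6 H/s and D = 10^6 gives 1 share per second on average).
% The waiting time T until the next share is (approximately) exponential:
\[
  \Pr(T > s + t \mid T > s) = \Pr(T > t),
\]
% so a miner that goes offline mid-job loses no accumulated progress;
% there is no partial progress toward a share to lose.
```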
Thank you so much. I have made the changes per your suggestion and will monitor for a few days to see if the proxy crashes from high memory usage again; I hope it won't. Secondly, when should I decide that it's time to add a parallel load-balancing server with another proxy? What is the maximum/optimal number of workers for a single Ubuntu server with these specs: Intel(R) Xeon(R) CPU E5-2620 v3?
Can you add an automatic memory-release function directly in the software?
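For what it's worth, on glibc-based Linux the closest thing a C++ program has to a general "release memory" hook is malloc_trim(), which returns free heap pages to the kernel. A minimal sketch of what a periodic trim could look like (this is not part of xmrig-proxy, and it cannot fix an actual leak, since leaked memory is still "in use" from the allocator's point of view):

```cpp
#include <malloc.h>   // glibc-specific: malloc_trim()
#include <chrono>
#include <thread>

// Hypothetical background thread that periodically asks glibc to return
// unused heap pages to the kernel. This can shrink RSS after a usage
// spike, but memory that is still referenced (i.e. leaked) stays put.
static void memory_trim_loop()
{
    for (;;) {
        std::this_thread::sleep_for(std::chrono::minutes(10));
        malloc_trim(0); // 0 = keep no extra padding at the top of the heap
    }
}

int main()
{
    std::thread(memory_trim_loop).detach();
    // ... the proxy's main loop would run here ...
}
```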