Allow configuration of or dynamic WARN_THRESHOLD #1118

Open
jessicaaustin opened this issue Mar 13, 2020 · 3 comments

@jessicaaustin

We are using Terracotta to cache objects that range from <1MB to 100MB, on a server that has 96GB of memory total. For large objects, we get warning messages like:

com.tc.bytes.TCByteBufferFactory : Asking for a large amount of memory: 81832566 bytes
com.tc.net.core.TCConnection : Warning: Attempting to send a message (com.tc.entity.LinearVoltronEntityMultiResponse) of size 81832559 bytes

Looking at the code, it seems like these warning thresholds are hard-coded and have not changed for 7+ years: TCConnectionImpl.java#L76 and TCByteBufferFactory.java#L41. The amount of memory available to a typical server has increased quite a bit during this time.

So I have two questions:

  1. Is it appropriate to store objects of this size? (assuming we have the caches set up properly and are running on a server with plenty of memory resources)
  2. If so, does it make sense to make this threshold dynamic? For example, warning only if you try to store an object that is over X% of the cache size. Or, at the very least, could this warning threshold be made configurable by the user to avoid a flood of warning messages?
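
To illustrate what I mean by "dynamic" in the second question, something along these lines (just a sketch, nothing like this exists in the codebase today; the 1% and 10MB numbers are arbitrary):

```java
// Illustrative sketch only -- not existing Terracotta code.
// Derives the warn threshold from the JVM's max heap instead of a fixed constant.
public class DynamicWarnThreshold {
  private static final long FLOOR_BYTES = 10L * 1024 * 1024; // arbitrary floor for the sketch

  static long warnThresholdBytes() {
    long maxHeap = Runtime.getRuntime().maxMemory(); // effectively -Xmx, in bytes
    return Math.max(FLOOR_BYTES, maxHeap / 100);     // warn above ~1% of max heap
  }

  public static void main(String[] args) {
    System.out.println("Warn threshold: " + warnThresholdBytes() + " bytes");
  }
}
```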

I searched the issues here and the ehcache-users group and couldn't find anything related. Please let me know if there is a better place to ask this question. And of course I'm happy to post our full config if that's useful. Thank you!

@chrisdennis
Member

The issue isn't so much how big the item is relative to available memory, but more how much load you're pushing through the machine - and ultimately how much young-gen garbage you are creating. If you have 100MB values and the server ends up handling, say, 16 of them at the same time, that's a lot of heap (roughly 1.6GB of transient allocation). That's at least the origin of the warning message. So if you're dealing with low write rates (and low client-side caching churn), or can provision enough heap in clients and servers to cope with momentary spikes in heap usage (and the associated GC activity), then everything should be fine. In that sense I can see the advantage in making the threshold tunable... but I don't think a percentage of cache size makes sense.

tl;dr:

  1. If it behaves okay in testing, then it is okay.
  2. If it's behaving okay and the logging is annoying, then I think we should support changing the thresholds.
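
Roughly something like this, I'd imagine (just a sketch - the system property name and default below are invented, not anything we ship today):

```java
// Sketch of a tunable threshold; the property name and default are hypothetical.
public class ConfigurableWarnThreshold {
  // Integer.getInteger reads -Dtc.bytebuffer.warn.threshold.bytes=<n> and falls
  // back to the default when the property is absent or not a valid integer.
  static final int WARN_THRESHOLD_BYTES =
      Integer.getInteger("tc.bytebuffer.warn.threshold.bytes", 10 * 1024 * 1024);

  public static void main(String[] args) {
    System.out.println("Effective warn threshold: " + WARN_THRESHOLD_BYTES + " bytes");
  }
}
```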

@jessicaaustin
Author

Thank you, this is helpful background information, and it definitely makes sense. Sounds like the threshold could perhaps be a percentage of Java heap resources?

In our case, we were experiencing some performance issues related to an increase in traffic, so the first thing I did was increase the TTL (to reduce churn) and get immediate relief. The next step is tuning the Java opts to make sure the JVM can actually take advantage of the resources on the server. Since this message always shows up in our case, whether things are working or not, I've just filtered it out of the logs.
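
In case it helps anyone else, one way to do that kind of filtering (assuming Logback is the SLF4J backend - just a sketch, adjust for your own logging setup) is to raise the level of the two noisy loggers:

```java
// Sketch, assuming Logback behind SLF4J: raise the level of the two loggers that
// emit the large-message warnings so they are suppressed without touching anything else.
// Programmatic equivalent of <logger name="..." level="ERROR"/> in logback.xml.
import ch.qos.logback.classic.Level;
import ch.qos.logback.classic.Logger;
import org.slf4j.LoggerFactory;

public class SilenceLargeMessageWarnings {
  public static void main(String[] args) {
    ((Logger) LoggerFactory.getLogger("com.tc.bytes.TCByteBufferFactory")).setLevel(Level.ERROR);
    ((Logger) LoggerFactory.getLogger("com.tc.net.core.TCConnection")).setLevel(Level.ERROR);
  }
}
```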

I could imagine it would be useful if this warning were dynamic and actually provided some indication of getting close to whatever resources are available. That said, it's just a nice-to-have, since if you start hitting those limits it will show up in other metrics like increased latency, a worse hit/miss ratio, slower load times, etc. (these are the metrics that triggered alerts for us). So I'm not sure whether or not to close this issue, but at this point the explanation is sufficient for us, and hopefully it will help others with similar problems. Thank you!

@srstsavage

+1 would be nice to be able to tune this
