With Micrometer, each timer is written to memory and retained, so unconstrained tag cardinality amounts to a memory leak; you need to keep it bounded (a good rule of thumb is 100 tag values per metric).
If a Ktor application with Micrometer receives a request that results in a 404, it creates a new metric for that URL. Hitting it with enough distinct unknown URLs can cause memory usage to climb until the application runs out of memory.
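For illustration, a minimal setup along these lines shows the effect (imports follow the Ktor 2.x package layout; the exact plugin and tag names may differ between versions, so treat this as a sketch rather than an exact reproduction):

import io.ktor.server.application.install
import io.ktor.server.engine.embeddedServer
import io.ktor.server.metrics.micrometer.MicrometerMetrics
import io.ktor.server.netty.Netty
import io.ktor.server.response.respondText
import io.ktor.server.routing.get
import io.ktor.server.routing.routing
import io.micrometer.core.instrument.simple.SimpleMeterRegistry

fun main() {
    val appMicrometerRegistry = SimpleMeterRegistry()

    embeddedServer(Netty, port = 8080) {
        install(MicrometerMetrics) {
            registry = appMicrometerRegistry
        }
        routing {
            get("/known") { call.respondText("ok") }
        }
        // Every request to an unmatched path (/missing-1, /missing-2, ...) is answered
        // with 404, and each distinct path shows up as its own route tag value on the
        // request timer, so the registry keeps allocating new meters.
    }.start(wait = true)
}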
A meter filter can work around this by unifying all 404 requests under a single 404 route tag; add it to the registry being used:
val appMicrometerRegistry = SimpleMeterRegistry().apply {
    config().meterFilter(Filter404())
}
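Filter404 is not something Micrometer or Ktor ship; a minimal sketch of such a filter could look like the following, assuming the request timers carry the status and route tags that Ktor's MicrometerMetrics plugin adds by default:

import io.micrometer.core.instrument.Meter
import io.micrometer.core.instrument.Tags
import io.micrometer.core.instrument.config.MeterFilter

// Collapses the route tag of every meter recorded with a 404 status into a single
// constant value, so unmatched URLs no longer create one meter per distinct path.
class Filter404 : MeterFilter {
    override fun map(id: Meter.Id): Meter.Id {
        if (id.getTag("status") != "404") return id
        val tagsWithoutRoute = id.tags.filter { it.key != "route" }
        return id.replaceTags(Tags.of(tagsWithoutRoute).and("route", "404"))
    }
}

Keying on the status tag leaves meters for resolved routes untouched and only collapses the ones produced by unmatched requests.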
However, it would be great to proactively group all 404 meters under such a tag, so others don't need to run into this. Would a PR with that functionality be worthwhile?
Unfortunately, I cannot reproduce the potential OOM problem. Could you please share a server code snippet that reproduces it when many requests lead to a 404 status code?