High memory usage in l4 matcher #284
Comments
I wonder if @metafeather would be able to look into it 😃
Actually, a closer look at that profile tree shows that the memory sum is already over 99% -- do you have the rest of the tree? Maybe try ...
Hmm, well, maybe it's not so much a leak as just a lack of pooled buffers. @metafeather Any interest in using sync.Pool? @divyam234 How many of your connections are Postgres? What if you reorder the matchers so it's not first?
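For reference, a minimal sketch of what buffer pooling with `sync.Pool` could look like; the `bufPool` and `matchWithPooledBuffer` names and structure are illustrative assumptions, not the actual layer4 matcher code:

```go
package main

import (
	"bytes"
	"sync"
)

// bufPool reuses read buffers across connections instead of allocating
// a fresh buffer for every match attempt (illustrative sketch only).
var bufPool = sync.Pool{
	New: func() any {
		return new(bytes.Buffer)
	},
}

// matchWithPooledBuffer borrows a buffer from the pool, uses it for the
// match attempt, and returns it so repeated connections don't allocate.
func matchWithPooledBuffer(data []byte) bool {
	buf := bufPool.Get().(*bytes.Buffer)
	defer func() {
		buf.Reset() // clear old connection data before reuse
		bufPool.Put(buf)
	}()

	buf.Write(data)
	// ... placeholder for the real matching logic over buf's contents ...
	return buf.Len() > 0
}
```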
Reordering gives the same result. It's only fixed after removing the postgres block. During profiling there were no Postgres connections.
Busy server, I'm guessing? My first thought would be we could try pooling the buffers in that matcher.
If we look closer at the matcher code, I wonder if we can rewrite this matcher to consume fewer bytes or use fewer loops. There is also a for loop in there that doesn't look good to me at first sight, especially given the fact that this matcher returns true if at least one "startup parameter" has been found. But I don't have any Postgres setup to test it properly.
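As a rough illustration of that early-return idea, here is a sketch of checking a Postgres startup payload for at least one parameter while capping how many bytes are examined; the function name, the 1024-byte cap, and the assumption that the 8-byte length/protocol-version header has already been stripped are my own, not the actual layer4 implementation:

```go
package main

import "bytes"

// hasStartupParameter is an illustrative sketch (not the actual layer4 code):
// it reports whether a Postgres startup payload contains at least one
// parameter, returning as soon as the first key is seen. msg is assumed to be
// the bytes following the 8-byte length + protocol-version header.
func hasStartupParameter(msg []byte) bool {
	const maxScan = 1024 // bound how much we examine per connection
	if len(msg) > maxScan {
		msg = msg[:maxScan]
	}
	// Startup parameters are NUL-terminated key/value strings; a leading NUL
	// (empty key) means the parameter list is empty.
	firstNUL := bytes.IndexByte(msg, 0)
	return firstNUL > 0 // non-empty first key => at least one parameter
}
```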
OH. I didn't realize that at a quick glance. Duh. Yeah, we can probably do better here... (I also don't really use Postgres.) @metafeather Any ideas on how we can improve this?
The Postgres matcher is taking very high memory even when I am not using it, which eventually kills Caddy due to OOM. During profiling I called only imgproxy with 20 concurrent requests.